From 2d40d24744a3388ffff6bb92185a5b836bcbd9d1 Mon Sep 17 00:00:00 2001
From: Daniel Chadwick
Date: Wed, 22 Jan 2025 17:11:23 -0500
Subject: [PATCH 001/669] OCPBUGS-18820: Document how to disable HTTP/2

---
 modules/nw-disable-http2.adoc                 | 44 ++++++++++++++++++
 modules/nw-enable-http2.adoc                  | 46 +++++++++++++++++++
 modules/nw-http2-haproxy.adoc                 | 44 ++----------------
 .../ingress-operator.adoc                     |  4 ++
 4 files changed, 98 insertions(+), 40 deletions(-)
 create mode 100644 modules/nw-disable-http2.adoc
 create mode 100644 modules/nw-enable-http2.adoc

diff --git a/modules/nw-disable-http2.adoc b/modules/nw-disable-http2.adoc
new file mode 100644
index 000000000000..ad0737db02b7
--- /dev/null
+++ b/modules/nw-disable-http2.adoc
@@ -0,0 +1,44 @@
+// Module included in the following assemblies:
+//
+// * networking/ingress-operator.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="nw-disable-http2_{context}"]
+= Disable HTTP/2 on an Ingress Controller
+
+== Disable HTTP/2 on a single Ingress Controller
+
+* To disable HTTP/2 on an Ingress Controller, enter the `oc annotate` command:
++
+[source,terminal]
+----
+$ oc -n openshift-ingress-operator annotate ingresscontrollers/<name> ingress.operator.openshift.io/default-enable-http2=false <1>
+----
+<1> This command adds the annotation `ingress.operator.openshift.io/default-enable-http2=false` to the specified Ingress Controller, disabling HTTP/2 for that Ingress Controller only.
++
+Replace `<name>` with the name of the Ingress Controller to annotate.
+
+== Disable HTTP/2 on the entire cluster
+
+* To disable HTTP/2 for the entire cluster, enter the `oc annotate` command:
++
+[source,terminal]
+----
+$ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=false <1>
+----
+<1> This command adds the annotation `ingress.operator.openshift.io/default-enable-http2=false` to the cluster-wide Ingress configuration, disabling HTTP/2 for the entire cluster.
++
+[TIP]
+====
+You can alternatively apply the following YAML to add the annotation:
+[source,yaml]
+----
+apiVersion: config.openshift.io/v1
+kind: Ingress
+metadata:
+  name: cluster
+  annotations:
+    ingress.operator.openshift.io/default-enable-http2: "false" <1>
+----
+<1> This YAML is an alternative way to add the annotation `ingress.operator.openshift.io/default-enable-http2: "false"` to the cluster-wide Ingress configuration, disabling HTTP/2 for the entire cluster.
+====
\ No newline at end of file
diff --git a/modules/nw-enable-http2.adoc b/modules/nw-enable-http2.adoc
new file mode 100644
index 000000000000..24b544dd6fcd
--- /dev/null
+++ b/modules/nw-enable-http2.adoc
@@ -0,0 +1,46 @@
+// Module included in the following assemblies:
+//
+// * networking/ingress-operator.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="nw-enable-http2_{context}"]
+= Enable HTTP/2 on an Ingress Controller
+
+== Enable HTTP/2 on a single Ingress Controller
+
+.Procedure
+
+* To enable HTTP/2 on an Ingress Controller, enter the `oc annotate` command:
++
+[source,terminal]
+----
+$ oc -n openshift-ingress-operator annotate ingresscontrollers/<name> ingress.operator.openshift.io/default-enable-http2=true <1>
+----
+<1> This command adds the annotation `ingress.operator.openshift.io/default-enable-http2=true` to the specified Ingress Controller, enabling HTTP/2 for that Ingress Controller only.
++
+Replace `<name>` with the name of the Ingress Controller to annotate.
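+
+Optional: Verify that the annotation was added. The following check is illustrative and assumes the default Ingress Controller; it prints the annotations so that you can look for `ingress.operator.openshift.io/default-enable-http2`:
+
+[source,terminal]
+----
+$ oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.metadata.annotations}'
+----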
+
+== Enable HTTP/2 on the entire cluster
+
+* To enable HTTP/2 for the entire cluster, enter the `oc annotate` command:
++
+[source,terminal]
+----
+$ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true <1>
+----
+<1> This command adds the annotation `ingress.operator.openshift.io/default-enable-http2=true` to the cluster-wide Ingress configuration, enabling HTTP/2 for the entire cluster.
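++
+Optional: Verify that the cluster-wide annotation was added. As an illustrative check, the following command prints the annotations on the cluster Ingress configuration:
++
+[source,terminal]
+----
+$ oc get ingresses.config/cluster -o jsonpath='{.metadata.annotations}'
+----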
++
+[TIP]
+====
+You can alternatively apply the following YAML to add the annotation:
+[source,yaml]
+----
+apiVersion: config.openshift.io/v1
+kind: Ingress
+metadata:
+  name: cluster
+  annotations:
+    ingress.operator.openshift.io/default-enable-http2: "true" <1>
+----
+<1> This YAML is an alternative way to add the annotation `ingress.operator.openshift.io/default-enable-http2: "true"` to the cluster-wide Ingress configuration, enabling HTTP/2 for the entire cluster.
+====
\ No newline at end of file
diff --git a/modules/nw-http2-haproxy.adoc b/modules/nw-http2-haproxy.adoc
index b501451fa8b8..da3d1d159060 100644
--- a/modules/nw-http2-haproxy.adoc
+++ b/modules/nw-http2-haproxy.adoc
@@ -4,11 +4,11 @@
 :_mod-docs-content-type: PROCEDURE
 [id="nw-http2-haproxy_{context}"]
-= Enabling HTTP/2 Ingress connectivity
+= Enabling and Disabling HTTP/2 Ingress Connectivity

-You can enable transparent end-to-end HTTP/2 connectivity in HAProxy. It allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more.
+You can enable or disable transparent end-to-end HTTP/2 connectivity in HAProxy. This allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more.

-You can enable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster.
+You can enable or disable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster.

 To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate.

@@ -17,40 +17,4 @@ The connection from HAProxy to the application pod can use HTTP/2 only for re-en
 [IMPORTANT]
 ====
 For non-passthrough routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. This means a client may connect to the Ingress Controller and negotiate HTTP/1.1, and the Ingress Controller may then connect to the application, negotiate HTTP/2, and forward the request from the client HTTP/1.1 connection using the HTTP/2 connection to the application. This poses a problem if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol, because the Ingress Controller cannot forward WebSocket to HTTP/2 and cannot upgrade its HTTP/2 connection to WebSocket. Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol or else clients will fail to upgrade to the WebSocket protocol.
-====
-
-.Procedure
-
-Enable HTTP/2 on a single Ingress Controller.
-
-* To enable HTTP/2 on an Ingress Controller, enter the `oc annotate` command:
-+
-[source,terminal]
-----
-$ oc -n openshift-ingress-operator annotate ingresscontrollers/<name> ingress.operator.openshift.io/default-enable-http2=true
-----
-+
-Replace `<name>` with the name of the Ingress Controller to annotate.
-
-Enable HTTP/2 on the entire cluster.
-
-* To enable HTTP/2 for the entire cluster, enter the `oc annotate` command:
-+
-[source,terminal]
-----
-$ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true
-----
-+
-[TIP]
-====
-You can alternatively apply the following YAML to add the annotation:
-[source,yaml]
-----
-apiVersion: config.openshift.io/v1
-kind: Ingress
-metadata:
-  name: cluster
-  annotations:
-    ingress.operator.openshift.io/default-enable-http2: "true"
-----
-====
+====
\ No newline at end of file
diff --git a/networking/networking_operators/ingress-operator.adoc b/networking/networking_operators/ingress-operator.adoc
index 53fa5da80856..cdeebd15d87a 100644
--- a/networking/networking_operators/ingress-operator.adoc
+++ b/networking/networking_operators/ingress-operator.adoc
@@ -99,6 +99,10 @@ include::modules/nw-using-ingress-forwarded.adoc[leveloffset=+2]

 include::modules/nw-http2-haproxy.adoc[leveloffset=+2]

+include::modules/nw-enable-http2.adoc[leveloffset=+3]
+
+include::modules/nw-disable-http2.adoc[leveloffset=+3]
+
 // Configuring the PROXY protocol for an Ingress Controller
 include::modules/nw-ingress-controller-configuration-proxy-protocol.adoc[leveloffset=+2]

From bd9f6bf293ea9e364b28c4099e8326646d75e33d Mon Sep 17 00:00:00 2001
From: Daniel Chadwick
Date: Wed, 22 Jan 2025 17:54:29 -0500
Subject: [PATCH 002/669] fixing a mistake

---
 networking/networking_operators/ingress-operator.adoc | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/networking/networking_operators/ingress-operator.adoc b/networking/networking_operators/ingress-operator.adoc
index cdeebd15d87a..53fa5da80856 100644
--- a/networking/networking_operators/ingress-operator.adoc
+++ b/networking/networking_operators/ingress-operator.adoc
@@ -99,10 +99,6 @@ include::modules/nw-using-ingress-forwarded.adoc[leveloffset=+2]

 include::modules/nw-http2-haproxy.adoc[leveloffset=+2]

-include::modules/nw-enable-http2.adoc[leveloffset=+3]
-
-include::modules/nw-disable-http2.adoc[leveloffset=+3]
-
 // Configuring the PROXY protocol for an Ingress Controller
 include::modules/nw-ingress-controller-configuration-proxy-protocol.adoc[leveloffset=+2]

From 788a488821f5697c3528bd42fe1082fd23b6c009 Mon Sep 17 00:00:00 2001
From: Daniel Chadwick
Date: Wed, 22 Jan 2025 17:57:28 -0500
Subject: [PATCH 003/669] fixing a mistake

---
 modules/nw-http2-haproxy.adoc | 42 ++++++++++++++++++++++++++++++++---
 1 file changed, 39 insertions(+), 3 deletions(-)

diff --git a/modules/nw-http2-haproxy.adoc b/modules/nw-http2-haproxy.adoc
index da3d1d159060..98b877e3812b 100644
--- a/modules/nw-http2-haproxy.adoc
+++ b/modules/nw-http2-haproxy.adoc
@@ -4,11 +4,11 @@
 :_mod-docs-content-type: PROCEDURE
 [id="nw-http2-haproxy_{context}"]
-= Enabling and Disabling HTTP/2 Ingress Connectivity
+= Enabling HTTP/2 Ingress connectivity

-You can enable or disable transparent end-to-end HTTP/2 connectivity in HAProxy. This allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more.
+You can enable transparent end-to-end HTTP/2 connectivity in HAProxy. It allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more.

-You can enable or disable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster.
+You can enable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster.

 To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate.
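+
+For example, you can check which HTTP version a client negotiates for a route. The following command is an illustrative check that assumes a curl build with HTTP/2 support; replace `<route_hostname>` with the host name of a route that specifies a custom certificate. The command prints the HTTP version that was used, such as `2`:
+
+[source,terminal]
+----
+$ curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://<route_hostname>
+----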
@@ -17,4 +17,40 @@ The connection from HAProxy to the application pod can use HTTP/2 only for re-en
 [IMPORTANT]
 ====
 For non-passthrough routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. This means a client may connect to the Ingress Controller and negotiate HTTP/1.1, and the Ingress Controller may then connect to the application, negotiate HTTP/2, and forward the request from the client HTTP/1.1 connection using the HTTP/2 connection to the application. This poses a problem if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol, because the Ingress Controller cannot forward WebSocket to HTTP/2 and cannot upgrade its HTTP/2 connection to WebSocket. Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol or else clients will fail to upgrade to the WebSocket protocol.
+====
+
+.Procedure
+
+Enable HTTP/2 on a single Ingress Controller.
+
+* To enable HTTP/2 on an Ingress Controller, enter the `oc annotate` command:
++
+[source,terminal]
+----
+$ oc -n openshift-ingress-operator annotate ingresscontrollers/<name> ingress.operator.openshift.io/default-enable-http2=true
+----
++
+Replace `<name>` with the name of the Ingress Controller to annotate.
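+
+Optional: Verify the change. As an illustrative check, the following command prints the value of the annotation on the Ingress Controller, where `<name>` is the same Ingress Controller name as in the previous step:
+
+[source,terminal]
+----
+$ oc -n openshift-ingress-operator get ingresscontroller/<name> -o jsonpath="{.metadata.annotations['ingress\.operator\.openshift\.io/default-enable-http2']}"
+----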
+
+Enable HTTP/2 on the entire cluster.
+
+* To enable HTTP/2 for the entire cluster, enter the `oc annotate` command:
++
+[source,terminal]
+----
+$ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true
+----
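++
+Optional: Verify that the cluster-wide annotation was added. For example, review the `metadata.annotations` field in the output of the following command:
++
+[source,terminal]
+----
+$ oc get ingresses.config/cluster -o yaml
+----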
++
+[TIP]
+====
+You can alternatively apply the following YAML to add the annotation:
+[source,yaml]
+----
+apiVersion: config.openshift.io/v1
+kind: Ingress
+metadata:
+  name: cluster
+  annotations:
+    ingress.operator.openshift.io/default-enable-http2: "true"
+----
 ====
\ No newline at end of file

From e32922f8217419d7619056c8bb36e476d76745b6 Mon Sep 17 00:00:00 2001
From: Daniel Chadwick
Date: Wed, 22 Jan 2025 18:09:07 -0500
Subject: [PATCH 004/669] removing extra docs

---
 modules/nw-disable-http2.adoc | 44 ---------------------------------
 modules/nw-enable-http2.adoc  | 46 -----------------------------------
 2 files changed, 90 deletions(-)
 delete mode 100644 modules/nw-disable-http2.adoc
 delete mode 100644 modules/nw-enable-http2.adoc

diff --git a/modules/nw-disable-http2.adoc b/modules/nw-disable-http2.adoc
deleted file mode 100644
index ad0737db02b7..000000000000
--- a/modules/nw-disable-http2.adoc
+++ /dev/null
@@ -1,44 +0,0 @@
-// Module included in the following assemblies:
-//
-// * networking/ingress-operator.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="nw-disable-http2_{context}"]
-= Disable HTTP/2 on an Ingress Controller
-
-== Disable HTTP/2 on a single Ingress Controller
-
-* To disable HTTP/2 on an Ingress Controller, enter the `oc annotate` command:
-+
-[source,terminal]
-----
-$ oc -n openshift-ingress-operator annotate ingresscontrollers/<name> ingress.operator.openshift.io/default-enable-http2=false <1>
-----
-<1> This command adds the annotation `ingress.operator.openshift.io/default-enable-http2=false` to the specified Ingress Controller, disabling HTTP/2 for that Ingress Controller only.
-+
-Replace `<name>` with the name of the Ingress Controller to annotate.
-
-== Disable HTTP/2 on the entire cluster
-
-* To disable HTTP/2 for the entire cluster, enter the `oc annotate` command:
-+
-[source,terminal]
-----
-$ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=false <1>
-----
-<1> This command adds the annotation `ingress.operator.openshift.io/default-enable-http2=false` to the cluster-wide Ingress configuration, disabling HTTP/2 for the entire cluster.
-+
-[TIP]
-====
-You can alternatively apply the following YAML to add the annotation:
-[source,yaml]
-----
-apiVersion: config.openshift.io/v1
-kind: Ingress
-metadata:
-  name: cluster
-  annotations:
-    ingress.operator.openshift.io/default-enable-http2: "false" <1>
-----
-<1> This YAML is an alternative way to add the annotation `ingress.operator.openshift.io/default-enable-http2: "false"` to the cluster-wide Ingress configuration, disabling HTTP/2 for the entire cluster.
-====
\ No newline at end of file
diff --git a/modules/nw-enable-http2.adoc b/modules/nw-enable-http2.adoc
deleted file mode 100644
index 24b544dd6fcd..000000000000
--- a/modules/nw-enable-http2.adoc
+++ /dev/null
@@ -1,46 +0,0 @@
-// Module included in the following assemblies:
-//
-// * networking/ingress-operator.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="nw-enable-http2_{context}"]
-= Enable HTTP/2 on an Ingress Controller
-
-== Enable HTTP/2 on a single Ingress Controller
-
-.Procedure
-
-* To enable HTTP/2 on an Ingress Controller, enter the `oc annotate` command:
-+
-[source,terminal]
-----
-$ oc -n openshift-ingress-operator annotate ingresscontrollers/<name> ingress.operator.openshift.io/default-enable-http2=true <1>
-----
-<1> This command adds the annotation `ingress.operator.openshift.io/default-enable-http2=true` to the specified Ingress Controller, enabling HTTP/2 for that Ingress Controller only.
-+
-Replace `<name>` with the name of the Ingress Controller to annotate.
-
-== Enable HTTP/2 on the entire cluster
-
-* To enable HTTP/2 for the entire cluster, enter the `oc annotate` command:
-+
-[source,terminal]
-----
-$ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true <1>
-----
-<1> This command adds the annotation `ingress.operator.openshift.io/default-enable-http2=true` to the cluster-wide Ingress configuration, enabling HTTP/2 for the entire cluster.
-+
-[TIP]
-====
-You can alternatively apply the following YAML to add the annotation:
-[source,yaml]
-----
-apiVersion: config.openshift.io/v1
-kind: Ingress
-metadata:
-  name: cluster
-  annotations:
-    ingress.operator.openshift.io/default-enable-http2: "true" <1>
-----
-<1> This YAML is an alternative way to add the annotation `ingress.operator.openshift.io/default-enable-http2: "true"` to the cluster-wide Ingress configuration, enabling HTTP/2 for the entire cluster.
-====
\ No newline at end of file

From ce79d8b8e19e07eb698f821a52a1fa204d37159f Mon Sep 17 00:00:00 2001
From: Daniel Chadwick
Date: Wed, 22 Jan 2025 18:12:20 -0500
Subject: [PATCH 005/669] fixing spacing

---
 modules/nw-http2-haproxy.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/nw-http2-haproxy.adoc b/modules/nw-http2-haproxy.adoc
index 98b877e3812b..b501451fa8b8 100644
--- a/modules/nw-http2-haproxy.adoc
+++ b/modules/nw-http2-haproxy.adoc
@@ -53,4 +53,4 @@ metadata:
   annotations:
     ingress.operator.openshift.io/default-enable-http2: "true"
 ----
-====
\ No newline at end of file
+====

From 2123e5e71882d659fa3fe87de9312b1ca4e33764 Mon Sep 17 00:00:00 2001
From: xenolinux
Date: Wed, 22 Jan 2025 18:30:20 +0530
Subject: [PATCH 006/669] OCPBUGS#46005: Add DNS entries for HCP for non BM

---
 modules/hcp-non-bm-dns.adoc | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/modules/hcp-non-bm-dns.adoc b/modules/hcp-non-bm-dns.adoc
index 7e3d7ff0140e..6e3daa1dea2b 100644
--- a/modules/hcp-non-bm-dns.adoc
+++ b/modules/hcp-non-bm-dns.adoc
@@ -14,6 +14,11 @@ The DNS entry can be as simple as a record that points to one of the nodes in th
 +
 [source,text]
 ----
+api.example.krnl.es. IN A 192.168.122.20
+api.example.krnl.es. IN A 192.168.122.21
+api.example.krnl.es. IN A 192.168.122.22
+api-int.example.krnl.es. IN A 192.168.122.20
+api-int.example.krnl.es. IN A 192.168.122.21
 api-int.example.krnl.es.
IN A 192.168.122.22 + [source,text] ---- +api.example.krnl.es. IN A 2620:52:0:1306::5 +api.example.krnl.es. IN A 2620:52:0:1306::6 +api.example.krnl.es. IN A 2620:52:0:1306::7 +api-int.example.krnl.es. IN A 2620:52:0:1306::5 +api-int.example.krnl.es. IN A 2620:52:0:1306::6 api-int.example.krnl.es. IN A 2620:52:0:1306::7 `*`.apps.example.krnl.es. IN A 2620:52:0:1306::10 ---- @@ -30,7 +40,21 @@ api-int.example.krnl.es. IN A 2620:52:0:1306::7 + [source,text] ---- +host-record=api-int.hub-dual.dns.base.domain.name,192.168.126.10 +host-record=api.hub-dual.dns.base.domain.name,192.168.126.10 +address=/apps.hub-dual.dns.base.domain.name/192.168.126.11 +dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,192.168.126.20 +dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,192.168.126.21 +dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,192.168.126.22 +dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,192.168.126.25 +dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,192.168.126.26 + host-record=api-int.hub-dual.dns.base.domain.name,2620:52:0:1306::2 +host-record=api.hub-dual.dns.base.domain.name,2620:52:0:1306::2 address=/apps.hub-dual.dns.base.domain.name/2620:52:0:1306::3 dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,[2620:52:0:1306::5] +dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,[2620:52:0:1306::6] +dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,[2620:52:0:1306::7] +dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,[2620:52:0:1306::8] +dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,[2620:52:0:1306::9] ---- From d175ef598f933a487cd99a957e2f2e2f916ddc04 Mon Sep 17 00:00:00 2001 From: Ben Hardesty Date: Tue, 21 Jan 2025 16:58:45 -0500 Subject: [PATCH 007/669] OSDOCS-12565: Add Service Mesh to ROSA HCP --- _topic_maps/_topic_map_rosa.yml | 6 ++ _topic_maps/_topic_map_rosa_hcp.yml | 94 +++++++++++++++++++ modules/distr-tracing-config-storage.adoc | 4 +- modules/ossm-about-smcp.adoc | 8 +- ...add-project-using-label-selectors-cli.adoc | 8 +- ...project-using-label-selectors-console.adoc | 8 +- ...uring-the-threescale-wasm-auth-module.adoc | 4 +- modules/ossm-control-plane-cli.adoc | 24 ++--- modules/ossm-control-plane-web.adoc | 20 ++-- ...deploy-cluster-wide-control-plane-cli.adoc | 20 ++-- ...oy-cluster-wide-control-plane-console.adoc | 20 ++-- modules/ossm-federation-across-cluster.adoc | 12 +-- modules/ossm-federation-config-smcp.adoc | 8 +- modules/ossm-install-ossm-operator.adoc | 8 +- ...grating-with-user-workload-monitoring.adoc | 12 +-- modules/ossm-remove-cleanup.adoc | 8 +- modules/ossm-rn-fixed-issues.adoc | 4 +- modules/ossm-rn-known-issues.adoc | 16 ++-- modules/ossm-rn-new-features.adoc | 64 ++++++------- modules/ossm-supported-configurations.adoc | 16 ++-- modules/ossm-tutorial-bookinfo-install.adoc | 12 +-- ...ossm-tutorial-bookinfo-verify-install.adoc | 8 +- modules/ossm-validate-smcp-cli.adoc | 8 +- modules/ossm-validate-smcp-kiali.adoc | 8 +- modules/support-submitting-a-case.adoc | 24 ++--- service_mesh/v2x/installing-ossm.adoc | 12 +-- service_mesh/v2x/ossm-create-smcp.adoc | 4 +- service_mesh/v2x/ossm-extensions.adoc | 4 +- service_mesh/v2x/ossm-observability.adoc | 4 +- .../v2x/ossm-performance-scalability.adoc | 2 +- service_mesh/v2x/ossm-reference-smcp.adoc | 4 +- service_mesh/v2x/ossm-route-migration.adoc | 2 +- service_mesh/v2x/ossm-security.adoc | 6 +- .../ossm-threescale-webassembly-module.adoc | 2 +- service_mesh/v2x/ossm-traffic-manage.adoc | 4 +- .../v2x/ossm-troubleshooting-istio.adoc | 4 +- .../v2x/servicemesh-release-notes.adoc | 4 +- 37 files changed, 288 insertions(+), 188 deletions(-) diff --git a/_topic_maps/_topic_map_rosa.yml 
b/_topic_maps/_topic_map_rosa.yml index 1b447e8eac13..6f4c7723d345 100644 --- a/_topic_maps/_topic_map_rosa.yml +++ b/_topic_maps/_topic_map_rosa.yml @@ -1729,6 +1729,12 @@ Name: Service Mesh Dir: service_mesh Distros: openshift-rosa Topics: +# Tech Preview +# - Name: Service Mesh 3.x +# Dir: v3x +# Topics: +# - Name: OpenShift Service Mesh 3.0 TP1 overview +# File: ossm-service-mesh-3-0-overview - Name: Service Mesh 2.x Dir: v2x Topics: diff --git a/_topic_maps/_topic_map_rosa_hcp.yml b/_topic_maps/_topic_map_rosa_hcp.yml index 11ca4036ed77..a681baab4a0a 100644 --- a/_topic_maps/_topic_map_rosa_hcp.yml +++ b/_topic_maps/_topic_map_rosa_hcp.yml @@ -904,6 +904,100 @@ Topics: # - Name: Adding worker nodes to single-node OpenShift clusters # File: nodes-sno-worker-nodes --- +Name: Service Mesh +Dir: service_mesh +Distros: openshift-rosa-hcp +Topics: +# Tech Preview +# - Name: Service Mesh 3.x +# Dir: v3x +# Topics: +# - Name: OpenShift Service Mesh 3.0 TP1 overview +# File: ossm-service-mesh-3-0-overview +- Name: Service Mesh 2.x + Dir: v2x + Topics: + - Name: About OpenShift Service Mesh + File: ossm-about + - Name: Service Mesh 2.x release notes + File: servicemesh-release-notes + - Name: Service Mesh architecture + File: ossm-architecture + - Name: Service Mesh deployment models + File: ossm-deployment-models + - Name: Service Mesh and Istio differences + File: ossm-vs-community + - Name: Preparing to install Service Mesh + File: preparing-ossm-installation + - Name: Installing the Operators + File: installing-ossm + - Name: Creating the ServiceMeshControlPlane + File: ossm-create-smcp + - Name: Adding workloads to a service mesh + File: ossm-create-mesh + - Name: Enabling sidecar injection + File: prepare-to-deploy-applications-ossm + - Name: Upgrading Service Mesh + File: upgrading-ossm + - Name: Managing users and profiles + File: ossm-profiles-users + - Name: Security + File: ossm-security + - Name: Traffic management + File: ossm-traffic-manage + - Name: Metrics, logs, and traces + File: ossm-observability + - Name: Performance and scalability + File: ossm-performance-scalability + - Name: Deploying to production + File: ossm-deploy-production + - Name: Federation + File: ossm-federation + - Name: Extensions + File: ossm-extensions + - Name: 3scale WebAssembly for 2.1 + File: ossm-threescale-webassembly-module + - Name: 3scale Istio adapter for 2.0 + File: threescale-adapter + - Name: Troubleshooting Service Mesh + File: ossm-troubleshooting-istio + - Name: Control plane configuration reference + File: ossm-reference-smcp + - Name: Kiali configuration reference + File: ossm-reference-kiali + - Name: Jaeger configuration reference + File: ossm-reference-jaeger + - Name: Uninstalling Service Mesh + File: removing-ossm +# Service Mesh 1.x is tech preview +# - Name: Service Mesh 1.x +# Dir: v1x +# Topics: +# - Name: Service Mesh 1.x release notes +# File: servicemesh-release-notes +# - Name: Service Mesh architecture +# File: ossm-architecture +# - Name: Service Mesh and Istio differences +# File: ossm-vs-community +# - Name: Preparing to install Service Mesh +# File: preparing-ossm-installation +# - Name: Installing Service Mesh +# File: installing-ossm +# - Name: Security +# File: ossm-security +# - Name: Traffic management +# File: ossm-traffic-manage +# - Name: Deploying applications on Service Mesh +# File: prepare-to-deploy-applications-ossm +# - Name: Data visualization and observability +# File: ossm-observability +# - Name: Custom resources +# File: ossm-custom-resources +# - 
Name: 3scale Istio adapter for 1.x +# File: threescale-adapter +# - Name: Removing Service Mesh +# File: removing-ossm +--- Name: Serverless Dir: serverless Distros: openshift-rosa-hcp diff --git a/modules/distr-tracing-config-storage.adoc b/modules/distr-tracing-config-storage.adoc index ab773101da77..16dd7969421f 100644 --- a/modules/distr-tracing-config-storage.adoc +++ b/modules/distr-tracing-config-storage.adoc @@ -645,7 +645,7 @@ spec: <3> Secret which defines environment variables ES_PASSWORD and ES_USERNAME. Created by kubectl create secret generic tracing-secret --from-literal=ES_PASSWORD=changeme --from-literal=ES_USERNAME=elastic <4> Volume mounts and volumes which are mounted into all storage components. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [id="distr-tracing-manage-es-certificates_{context}"] = Managing certificates with Elasticsearch @@ -722,4 +722,4 @@ spec: The {JaegerName} Operator sets the Elasticsearch custom resource `name` to the value of `spec.storage.elasticsearch.name` from the Jaeger custom resource when provisioning Elasticsearch. The certificates are provisioned by the {es-op} and the {JaegerName} Operator injects the certificates. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] diff --git a/modules/ossm-about-smcp.adoc b/modules/ossm-about-smcp.adoc index f94ea4aa8653..efde7b34dbef 100644 --- a/modules/ossm-about-smcp.adoc +++ b/modules/ossm-about-smcp.adoc @@ -12,16 +12,16 @@ The control plane includes Istiod, Ingress and Egress Gateways, and other compon This basic installation is configured based on the default {product-title} settings and is not designed for production use. Use this default installation to verify your installation, and then configure your `ServiceMeshControlPlane` settings for your environment. ==== -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [NOTE] ==== The {SMProductShortName} documentation uses `istio-system` as the example project, but you can deploy the service mesh to any project. ==== -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] -ifdef::openshift-rosa[] +ifdef::openshift-rosa,openshift-rosa-hcp[] If you are deploying the control plane for use on {product-rosa}, see the Red Hat Knowledgebase article link:https://access.redhat.com/solutions/6529231[OpenShift service mesh operator Istio basic not starting due to authentication errors], which discusses adding a new project and starting pods. -endif::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] ifdef::openshift-dedicated[] If you are deploying the control plane for use on {product-dedicated}, see the Red Hat Knowledgebase article link:https://access.redhat.com/solutions/6529231[OpenShift service mesh operator Istio basic not starting due to authentication errors], which discusses adding a new project and starting pods. endif::openshift-dedicated[] diff --git a/modules/ossm-add-project-using-label-selectors-cli.adoc b/modules/ossm-add-project-using-label-selectors-cli.adoc index 7b9c71347259..b0b98e7385b1 100644 --- a/modules/ossm-add-project-using-label-selectors-cli.adoc +++ b/modules/ossm-add-project-using-label-selectors-cli.adoc @@ -11,12 +11,12 @@ You can use label selectors to add a project to the {SMProductShortName} with th .Prerequisites * You have installed the {SMProductName} Operator. 
* The deployment has an existing `ServiceMeshMemberRoll` resource. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as`cluster-admin`. -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as a user with the `dedicated-admin` role. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Procedure diff --git a/modules/ossm-add-project-using-label-selectors-console.adoc b/modules/ossm-add-project-using-label-selectors-console.adoc index c1084c186fcc..7c4da3010779 100644 --- a/modules/ossm-add-project-using-label-selectors-console.adoc +++ b/modules/ossm-add-project-using-label-selectors-console.adoc @@ -11,12 +11,12 @@ You can use labels selectors to add a project to the {SMProductShortName} with t .Prerequisites * You have installed the {SMProductName} Operator. * The deployment has an existing `ServiceMeshMemberRoll` resource. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to the {product-title} web console as `cluster-admin`. -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to the {product-title} web console as a user with the `dedicated-admin` role. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Procedure diff --git a/modules/ossm-configuring-the-threescale-wasm-auth-module.adoc b/modules/ossm-configuring-the-threescale-wasm-auth-module.adoc index d2924b25cd5f..382f9dba47d7 100644 --- a/modules/ossm-configuring-the-threescale-wasm-auth-module.adoc +++ b/modules/ossm-configuring-the-threescale-wasm-auth-module.adoc @@ -25,9 +25,9 @@ Configuring the WebAssembly extension is currently a manual process. Support for * Identify a Kubernetes workload and namespace on your {SMProductShortName} deployment that you will apply this module. * You must have a 3scale tenant account. See link:https://www.3scale.net/signup[SaaS] or link:https://access.redhat.com/documentation/en-us/red_hat_3scale_api_management/2.11/html-single/installing_3scale/index#install-threescale-on-openshift-guide[3scale 2.11 On-Premises] with a matching service and relevant applications and metrics defined. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * If you apply the module to the `` microservice in the `bookinfo` namespace, see the xref:../../service_mesh/v1x/prepare-to-deploy-applications-ossm.adoc#ossm-tutorial-bookinfo-overview_deploying-applications-ossm-v1x[Bookinfo sample application]. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] ** The following example is the YAML format for the custom resource for `threescale-wasm-auth` module. This example refers to the upstream Maistra version of {SMProductShortName}, `WasmPlugin` API. 
You must declare the namespace where the `threescale-wasm-auth` module is deployed, alongside a `selector` to identify the set of applications the module will apply to: + diff --git a/modules/ossm-control-plane-cli.adoc b/modules/ossm-control-plane-cli.adoc index c21dab7f1513..7229720a0fea 100644 --- a/modules/ossm-control-plane-cli.adoc +++ b/modules/ossm-control-plane-cli.adoc @@ -13,12 +13,12 @@ You can deploy a basic `ServiceMeshControlPlane` from the command line. * The {SMProductName} Operator must be installed. * Access to the OpenShift CLI (`oc`). -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as`cluster-admin`. -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as a user with the `dedicated-admin` role. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Procedure @@ -29,11 +29,11 @@ endif::openshift-rosa,openshift-dedicated[] $ oc new-project istio-system ---- + -ifdef::openshift-rosa,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] The `ServiceMeshControlPlane` resource must be installed in the `istio-system` project, separate from your microservices and Operators. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . Create a `ServiceMeshControlPlane` file named `istio-installation.yaml` using the following example. The version of the {SMProductShortName} control plane determines the features available regardless of the version of the Operator. + .Example version {MaistraVersion} istio-installation.yaml @@ -56,8 +56,8 @@ spec: grafana: enabled: true ---- -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . Create a `ServiceMeshControlPlane` file named `istio-installation.yaml` using the following example. The version of the {SMProductShortName} control plane determines the features available regardless of the version of the Operator. + .Example `ServiceMeshControlPlane` resource @@ -88,13 +88,13 @@ spec: telemetry: type: Istiod ---- -ifdef::openshift-rosa[] +ifdef::openshift-rosa,openshift-rosa-hcp[] <1> Specifies a required setting for {product-rosa}. -endif::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] ifdef::openshift-dedicated[] <1> Specifies a required setting for {product-dedicated}. endif::openshift-dedicated[] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] + . Run the following command to deploy the {SMProductShortName} control plane, where `` includes the full path to your file. + diff --git a/modules/ossm-control-plane-web.adoc b/modules/ossm-control-plane-web.adoc index b5fa597e407b..d28a3c9cc1b6 100644 --- a/modules/ossm-control-plane-web.adoc +++ b/modules/ossm-control-plane-web.adoc @@ -11,12 +11,12 @@ You can deploy a basic `ServiceMeshControlPlane` by using the web console. In t .Prerequisites * The {SMProductName} Operator must be installed. 
-ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to the {product-title} web console as `cluster-admin`. -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to the {product-title} web console as a user with the `dedicated-admin` role. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Procedure @@ -27,16 +27,16 @@ endif::openshift-rosa,openshift-dedicated[] .. Navigate to *Home* -> *Projects*. + .. Click *Create Project*. -ifndef::openshift-rosa,openshift-dedcated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedcated[] + .. In the *Name* field, enter `istio-system`. The `ServiceMeshControlPlane` resource must be installed in a project that is separate from your microservices and Operators. + These steps use `istio-system` as an example, but you can deploy your {SMProductShortName} control plane in any project as long as it is separate from the project that contains your services. -endif::openshift-rosa,openshift-dedcated[] -ifdef::openshift-rosa,openshift-dedcated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedcated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedcated[] + .. In the *Name* field, enter `istio-system`. The `ServiceMeshControlPlane` resource must be installed in the `istio-system` project, separate from your microservices and Operators. -endif::openshift-rosa,openshift-dedcated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedcated[] + .. Click *Create*. @@ -49,9 +49,9 @@ endif::openshift-rosa,openshift-dedcated[] -- .. Accept the default {SMProductShortName} control plane version to take advantage of the features available in the most current version of the product. The version of the control plane determines the features available regardless of the version of the Operator. -ifdef::openshift-rosa[] +ifdef::openshift-rosa,openshift-rosa-hcp[] .. Add the `spec.security.identity.type.ThirdParty` field, required by {product-rosa}. -endif::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] ifdef::openshift-dedicated[] .. Add the `spec.security.identity.type.ThirdParty` field, required by {product-dedicated}. endif::openshift-dedicated[] diff --git a/modules/ossm-deploy-cluster-wide-control-plane-cli.adoc b/modules/ossm-deploy-cluster-wide-control-plane-cli.adoc index 630a3350d209..f76c66af9ace 100644 --- a/modules/ossm-deploy-cluster-wide-control-plane-cli.adoc +++ b/modules/ossm-deploy-cluster-wide-control-plane-cli.adoc @@ -13,12 +13,12 @@ You can configure the `ServiceMeshControlPlane` resource for cluster-wide deploy * The {SMProductName} Operator is installed. * You have access to the OpenShift CLI (`oc`). -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as`cluster-admin`. -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as a user with the `dedicated-admin` role. 
-endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Procedure @@ -31,7 +31,7 @@ $ oc new-project istio-system . Create a `ServiceMeshControlPlane` file named `istio-installation.yaml` using the following example: + -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Example version {MaistraVersion} istio-installation.yaml [source,yaml, subs="attributes,verbatim"] ---- @@ -44,8 +44,8 @@ spec: version: v{MaistraVersion} mode: ClusterWide ---- -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Example `ServiceMeshControlPlane` resource [source,yaml, subs="attributes,verbatim"] ---- @@ -62,13 +62,13 @@ spec: type: ThirdParty <2> ---- <1> Specifies that the resource is for a cluster-wide deployment. -ifdef::openshift-rosa[] +ifdef::openshift-rosa,openshift-rosa-hcp[] <2> Specifies a required setting for {product-rosa}. -endif::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] ifdef::openshift-dedicated[] <2> Specifies a required setting for {product-dedicated}. endif::openshift-dedicated[] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . Run the following command to deploy the {SMProductShortName} control plane: + diff --git a/modules/ossm-deploy-cluster-wide-control-plane-console.adoc b/modules/ossm-deploy-cluster-wide-control-plane-console.adoc index ab7429e63851..cf6de3a3d981 100644 --- a/modules/ossm-deploy-cluster-wide-control-plane-console.adoc +++ b/modules/ossm-deploy-cluster-wide-control-plane-console.adoc @@ -11,12 +11,12 @@ You can configure the `ServiceMeshControlPlane` resource for cluster-wide deploy .Prerequisites * The {SMProductName} Operator is installed. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as`cluster-admin`. -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as a user with the `dedicated-admin` role. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Procedure @@ -40,7 +40,7 @@ These steps use `istio-system` as an example. You can deploy the {SMProductShort . Click *YAML view*. The version of the {SMProductShortName} control plane determines the features available regardless of the version of the Operator. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . Modify the `spec.mode` field of the YAML file to specify `ClusterWide`. + .Example version {MaistraVersion} istio-installation.yaml @@ -56,8 +56,8 @@ spec: version: v{MaistraVersion} mode: ClusterWide ---- -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . 
Modify the `spec.mode` field and add the `spec.security.identity.type.ThirdParty` field: + .Example `ServiceMeshControlPlane` resource @@ -94,13 +94,13 @@ spec: type: Istiod ---- <1> Specifies that the resource is for a cluster-wide deployment. -ifdef::openshift-rosa[] +ifdef::openshift-rosa,openshift-rosa-hcp[] <2> Specifies a required setting for {product-rosa}. -endif::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] ifdef::openshift-dedicated[] <2> Specifies a required setting for {product-dedicated}. endif::openshift-dedicated[] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . Click *Create*. The Operator creates pods, services, and {SMProductShortName} control plane components based on your configuration parameters. The operator also creates the `ServiceMeshMemberRoll` if it does not exist as part of the default configuration. diff --git a/modules/ossm-federation-across-cluster.adoc b/modules/ossm-federation-across-cluster.adoc index fa54ce4b82f2..403d8b9b7572 100644 --- a/modules/ossm-federation-across-cluster.adoc +++ b/modules/ossm-federation-across-cluster.adoc @@ -15,12 +15,12 @@ If the cluster runs on bare metal and fully supports `LoadBalancer` services, th If the cluster does not support `LoadBalancer` services, using a `NodePort` service could be an option if the nodes are accessible from the cluster running the other mesh. In the `ServiceMeshPeer` object, specify the IP addresses of the nodes in the `.spec.remote.addresses` field and the service's node ports in the `.spec.remote.discoveryPort` and `.spec.remote.servicePort` fields. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] == Exposing the federation ingress on clusters running on {ibm-power-title} and {ibm-z-title} If the cluster runs on {ibm-power-name} or {ibm-z-name} infrastructure and fully supports `LoadBalancer` services, the IP address found in the `.status.loadBalancer.ingress.ip` field of the ingress gateway `Service` object should be specified as one of the entries in the `.spec.remote.addresses` field of the `ServiceMeshPeer` object. If the cluster does not support `LoadBalancer` services, using a `NodePort` service could be an option if the nodes are accessible from the cluster running the other mesh. In the `ServiceMeshPeer` object, specify the IP addresses of the nodes in the `.spec.remote.addresses` field and the service's node ports in the `.spec.remote.discoveryPort` and `.spec.remote.servicePort` fields. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] ifndef::openshift-dedicated[] == Exposing the federation ingress on Amazon Web Services (AWS) @@ -31,16 +31,16 @@ service.beta.kubernetes.io/aws-load-balancer-type: nlb The Fully Qualified Domain Name found in the `.status.loadBalancer.ingress.hostname` field of the ingress gateway `Service` object should be specified as one of the entries in the `.spec.remote.addresses` field of the `ServiceMeshPeer` object. endif::openshift-dedicated[] -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] == Exposing the federation ingress on Azure On Microsoft Azure, merely setting the service type to `LoadBalancer` suffices for mesh federation to operate correctly. 
The IP address found in the `.status.loadBalancer.ingress.ip` field of the ingress gateway `Service` object should be specified as one of the entries in the `.spec.remote.addresses` field of the `ServiceMeshPeer` object. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] -ifndef::openshift-rosa[] +ifndef::openshift-rosa,openshift-rosa-hcp[] == Exposing the federation ingress on Google Cloud Platform (GCP) On Google Cloud Platform, merely setting the service type to `LoadBalancer` suffices for mesh federation to operate correctly. The IP address found in the `.status.loadBalancer.ingress.ip` field of the ingress gateway `Service` object should be specified as one of the entries in the `.spec.remote.addresses` field of the `ServiceMeshPeer` object. -endif::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] diff --git a/modules/ossm-federation-config-smcp.adoc b/modules/ossm-federation-config-smcp.adoc index 3b7f39dac202..45ce20f50389 100644 --- a/modules/ossm-federation-config-smcp.adoc +++ b/modules/ossm-federation-config-smcp.adoc @@ -11,7 +11,7 @@ Before a mesh can be federated, you must configure the `ServiceMeshControlPlane` In the following example, the administrator for the `red-mesh` is configuring the SMCP for federation with both the `green-mesh` and the `blue-mesh`. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Sample SMCP for red-mesh [source,yaml, subs="attributes,verbatim"] ---- @@ -83,8 +83,8 @@ spec: trust: domain: red-mesh.local ---- -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Sample SMCP for red-mesh [source,yaml, subs="attributes,verbatim"] ---- @@ -162,7 +162,7 @@ spec: trust: domain: red-mesh.local ---- -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .ServiceMeshControlPlane federation configuration parameters [options="header"] diff --git a/modules/ossm-install-ossm-operator.adoc b/modules/ossm-install-ossm-operator.adoc index 4552d29fea73..aae688fd70a5 100644 --- a/modules/ossm-install-ossm-operator.adoc +++ b/modules/ossm-install-ossm-operator.adoc @@ -31,12 +31,12 @@ If you have already installed the {es-op} as part of OpenShift {logging-uc}, you .Procedure -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . Log in to the {product-title} web console as a user with the `cluster-admin` role. -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . Log in to the {product-title} web console as a user with the `dedicated-admin` role. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . In the {product-title} web console, click *Operators* -> *OperatorHub*. diff --git a/modules/ossm-integrating-with-user-workload-monitoring.adoc b/modules/ossm-integrating-with-user-workload-monitoring.adoc index 73c90b72428a..b29854ddbe3f 100644 --- a/modules/ossm-integrating-with-user-workload-monitoring.adoc +++ b/modules/ossm-integrating-with-user-workload-monitoring.adoc @@ -39,7 +39,7 @@ subjects: . 
Configure Kiali for user-workload monitoring: + -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [source,yaml] ---- apiVersion: kiali.io/v1alpha1 @@ -59,8 +59,8 @@ spec: enabled: true url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 ---- -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp[] [source,yaml] ---- apiVersion: kiali.io/v1alpha1 @@ -78,7 +78,7 @@ spec: ingress_enabled: true namespace: istio-system ---- -endif::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] ifdef::openshift-dedicated[] [source,yaml] ---- @@ -127,7 +127,7 @@ spec: url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 version: v1.65 ---- -ifdef::openshift-rosa,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] + [NOTE] ==== @@ -143,7 +143,7 @@ This means that the following common settings for `spec.deployment.accessible_na The validation error message provides a complete list of all the restricted namespaces. ==== -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . Configure the SMCP for external Prometheus: + diff --git a/modules/ossm-remove-cleanup.adoc b/modules/ossm-remove-cleanup.adoc index e9a6a2a960a0..c6955458e762 100644 --- a/modules/ossm-remove-cleanup.adoc +++ b/modules/ossm-remove-cleanup.adoc @@ -16,7 +16,7 @@ You can manually remove resources left behind after removing the {SMProductName} .Procedure -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . Log in to the {product-title} CLI as a cluster administrator. . Run the following commands to clean up resources after uninstalling the Operators. If you intend to keep using {JaegerShortName} as a stand-alone service without service mesh, do not delete the Jaeger resources. @@ -75,10 +75,10 @@ $ oc delete cm -n openshift-operators -lmaistra-version ---- $ oc delete sa -n openshift-operators -lmaistra-version ---- -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] // Hiding in ROSA/OSD, dedicated-admins cannot delete resource "mutatingwebhookconfigurations" or "validatingwebhookconfigurations" or "customresourcedefinitions" -ifdef::openshift-rosa,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . Log in to the {product-title} CLI as a cluster administrator. . Run the following commands to clean up resources after uninstalling the Operators. If you intend to keep using {JaegerShortName} as a stand-alone service without service mesh, do not delete the Jaeger resources. @@ -127,4 +127,4 @@ $ oc delete cm -n openshift-operators istio-cni-config istio-cni-config-v2-3 ---- $ oc delete sa -n openshift-operators istio-cni ---- -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] diff --git a/modules/ossm-rn-fixed-issues.adoc b/modules/ossm-rn-fixed-issues.adoc index 67ad1f93d18c..917a0c15bbcd 100644 --- a/modules/ossm-rn-fixed-issues.adoc +++ b/modules/ossm-rn-fixed-issues.adoc @@ -229,11 +229,11 @@ Upgrading the operator to 2.0 might break client tools that read the SMCP status + This also causes the READY and STATUS columns to be empty when you run `oc get servicemeshcontrolplanes.v1.maistra.io`. 
-ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa-hcp,openshift-rosa,openshift-dedicated[] * link:https://issues.jboss.org/browse/MAISTRA-1947[MAISTRA-1947] _Technology Preview_ Updates to ServiceMeshExtensions are not applied. + Workaround: Remove and recreate the `ServiceMeshExtensions`. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa-hcp,openshift-rosa,openshift-dedicated[] * link:https://issues.redhat.com/browse/MAISTRA-1983[MAISTRA-1983] _Migration to 2.0_ Upgrading to 2.0.0 with an existing invalid `ServiceMeshControlPlane` cannot easily be repaired. The invalid items in the `ServiceMeshControlPlane` resource caused an unrecoverable error. The fix makes the errors recoverable. You can delete the invalid resource and replace it with a new one or edit the resource to fix the errors. For more information about editing your resource, see [Configuring the Red Hat OpenShift Service Mesh installation]. diff --git a/modules/ossm-rn-known-issues.adoc b/modules/ossm-rn-known-issues.adoc index e8badf76e40e..87c3a28d04d7 100644 --- a/modules/ossm-rn-known-issues.adoc +++ b/modules/ossm-rn-known-issues.adoc @@ -21,7 +21,7 @@ These limitations exist in {SMProductName}: * The first time you access related services such as {JaegerShortName} and Grafana, from the Kiali console, you must accept the certificate and re-authenticate using your {product-title} login credentials. This happens due to an issue with how the framework displays embedded pages in the console. -ifndef::openshift-rosa[] +ifndef::openshift-rosa-hcp,openshift-rosa[] * The Bookinfo sample application cannot be installed on {ibm-power-name}, {ibm-z-name}, and {ibm-linuxone-name}. * WebAssembly extensions are not supported on {ibm-power-name}, {ibm-z-name}, and {ibm-linuxone-name}. @@ -29,7 +29,7 @@ ifndef::openshift-rosa[] * LuaJIT is not supported on {ibm-power-name}, {ibm-z-name}, and {ibm-linuxone-name}. * Single stack IPv6 support is not available on {ibm-power-name}, {ibm-z-name}, and {ibm-linuxone-name}. -endif::openshift-rosa[] +endif::openshift-rosa-hcp,openshift-rosa[] [id="ossm-rn-known-issues-ossm_{context}"] == {SMProductShortName} known issues @@ -131,7 +131,7 @@ For example, if you create a namespace called 'akube-a' and add it to the Servic + Workaround: Change the Kiali Custom Resource setting so it prefixes the setting with a carat (^). For example: + -ifndef::openshift-rosa[] +ifndef::openshift-rosa-hcp,openshift-rosa[] [source,yaml] ---- api: @@ -143,9 +143,9 @@ api: - "^ibm.*" - "^kiali-operator" ---- -endif::openshift-rosa[] +endif::openshift-rosa-hcp,openshift-rosa[] -ifdef::openshift-rosa[] +ifdef::openshift-rosa-hcp,openshift-rosa[] [source,yaml] ---- api: @@ -156,13 +156,13 @@ api: - "^openshift.*" - "^kiali-operator" ---- -endif::openshift-rosa[] +endif::openshift-rosa-hcp,openshift-rosa[] + * link:https://issues.redhat.com/browse/MAISTRA-2692[MAISTRA-2692] With Mixer removed, custom metrics that have been defined in {SMProductShortName} 2.0.x cannot be used in 2.1. Custom metrics can be configured using `EnvoyFilter`. Red Hat is unable to support `EnvoyFilter` configuration except where explicitly documented. This is due to tight coupling with the underlying Envoy APIs, meaning that backward compatibility cannot be maintained. -ifndef::openshift-rosa[] +ifndef::openshift-rosa-hcp,openshift-rosa[] * link:https://issues.redhat.com/browse/MAISTRA-2648[MAISTRA-2648] Service mesh extensions are currently not compatible with meshes deployed on {ibm-z-name}. 
-endif::openshift-rosa[] +endif::openshift-rosa-hcp,openshift-rosa[] * link:https://issues.jboss.org/browse/MAISTRA-1959[MAISTRA-1959] _Migration to 2.0_ Prometheus scraping (`spec.addons.prometheus.scrape` set to `true`) does not work when mTLS is enabled. Additionally, Kiali displays extraneous graph data when mTLS is disabled. + diff --git a/modules/ossm-rn-new-features.adoc b/modules/ossm-rn-new-features.adoc index 4b7267830f07..a639fa58ce36 100644 --- a/modules/ossm-rn-new-features.adoc +++ b/modules/ossm-rn-new-features.adoc @@ -258,9 +258,9 @@ This release of {SMProductName} addresses Common Vulnerabilities and Exposures ( == New features {SMProductName} version 2.4.3 -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * The {SMProductName} Operator is now available on ARM-based clusters as a Technology Preview feature. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * The `envoyExtAuthzGrpc` field has been added, which is used to configure an external authorization provider using the gRPC API. * Common Vulnerabilities and Exposures (CVEs) have been addressed. * This release is supported on {product-title} 4.10 and newer versions. @@ -287,13 +287,13 @@ endif::openshift-rosa,openshift-dedicated[] //COMPONENTS ABOVE MAY NEED TO BE UPDATED FOR 2.4.3 // Tech Preview features not supported in OSA/OSD -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] === {SMProductName} operator to ARM-based clusters :FeatureName: {SMProductName} operator to ARM based clusters include::snippets/technology-preview.adoc[] This release makes the {SMProductName} Operator available on ARM-based clusters as a Technology Preview feature. Images are available for Istio, Envoy, Prometheus, Kiali, and Grafana. Images are not available for Jaeger, so Jaeger must be disabled as a {SMProductShortName} add-on. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] === Remote Procedure Calls (gRPC) API support for external authorization configuration @@ -429,13 +429,13 @@ This enhancement introduces generally available support for single stack IPv6 cl Single stack IPv6 support is not available on {ibm-power-name}, {ibm-z-name}, and {ibm-linuxone-name}. ==== -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] === {product-title} Gateway API support :FeatureName: {product-title} Gateway API support include::snippets/technology-preview.adoc[] This enhancement introduces an updated Technology Preview version of the {product-title} Gateway API. By default, the {product-title} Gateway API is disabled. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] ==== Enabling {product-title} Gateway API To enable the {product-title} Gateway API, set the value of the `enabled` field to `true` in the `techPreview.gatewayAPI` specification of the `ServiceMeshControlPlane` resource. @@ -463,7 +463,7 @@ spec: PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER: "true" ---- -ifndef::openshift-rosa[] +ifndef::openshift-rosa,openshift-rosa-hcp[] === Control plane deployment on infrastructure nodes {SMProductShortName} control plane deployment is now supported and documented on OpenShift infrastructure nodes. 
For more information, see the following documentation: @@ -471,16 +471,16 @@ ifndef::openshift-rosa[] * Configuring all {SMProductShortName} control plane components to run on infrastructure nodes * Configuring individual {SMProductShortName} control plane components to run on infrastructure nodes -endif::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] === Istio 1.16 support {SMProductShortName} 2.4 is based on Istio 1.16, which brings in new features and product enhancements. While many Istio 1.16 features are supported, the following exceptions should be noted: * HBONE protocol for sidecars is an experimental feature that is not supported. * {SMProductShortName} on ARM64 architecture is not supported. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * OpenTelemetry API remains a Technology Preview feature. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [id="new-features-ossm-2-3-11"] == New features {SMProductName} version 2.3.11 @@ -778,20 +778,20 @@ This release introduces generally available support for Gateway injection. Gatew {SMProductShortName} 2.3 is based on Istio 1.14, which brings in new features and product enhancements. While many Istio 1.14 features are supported, the following exceptions should be noted: * ProxyConfig API is supported with the exception of the image field. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * Telemetry API is a Technology Preview feature. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * SPIRE runtime is not a supported feature. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] === OpenShift Service Mesh Console :FeatureName: OpenShift Service Mesh Console include::snippets/technology-preview.adoc[] This release introduces a Technology Preview version of the {product-title} Service Mesh Console, which integrates the Kiali interface directly into the OpenShift web console. For additional information, see link:https://cloud.redhat.com/blog/introducing-the-openshift-service-mesh-console-a-developer-preview[Introducing the OpenShift Service Mesh Console (A Technology Preview)] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] === Cluster-wide deployment :FeatureName: Cluster-wide deployment include::snippets/technology-preview.adoc[] @@ -802,7 +802,7 @@ This release introduces cluster-wide deployment as a Technology Preview feature. ==== This cluster-wide deployment documentation is only applicable for control planes deployed using SMCP v2.3. cluster-wide deployments created using SMCP v2.3 are not compatible with cluster-wide deployments created using SMCP v2.4. ==== -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] ==== Configuring cluster-wide deployment @@ -1163,7 +1163,7 @@ This release marks the end of support for {SMProductShortName} Control Planes ba * External control plane is not a supported feature. * Gateway injection is not a supported feature. 
-ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] === Kubernetes Gateway API :FeatureName: Kubernetes Gateway API include::snippets/technology-preview.adoc[] @@ -1171,7 +1171,7 @@ include::snippets/technology-preview.adoc[] Kubernetes Gateway API is a technology preview feature that is disabled by default. If the Kubernetes API deployment controller is disabled, you must manually deploy and link an ingress gateway to the created Gateway object. If the Kubernetes API deployment controller is enabled, then an ingress gateway automatically deploys when a Gateway object is created. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] ==== Installing the Gateway API CRDs The Gateway API CRDs do not come preinstalled by default on OpenShift clusters. Install the CRDs prior to enabling Gateway API support in the SMCP. @@ -1484,12 +1484,12 @@ New Custom Resource Definitions (CRDs) have been added to support federating ser * `ImportedServiceSet` - Defines which services for a given `ServiceMeshPeer` are imported from the peer mesh. These services must also be made available by the peer’s `ExportedServiceMeshSet` resource. -ifndef::openshift-rosa[] +ifndef::openshift-rosa-hcp,openshift-rosa[] Service Mesh Federation is not supported between clusters on Red Hat OpenShift Service on AWS (ROSA), Azure Red Hat OpenShift (ARO), or OpenShift Dedicated (OSD). -endif::openshift-rosa[] -ifdef::openshift-rosa[] +endif::openshift-rosa-hcp,openshift-rosa[] +ifdef::openshift-rosa-hcp,openshift-rosa[] Service Mesh Federation is not supported between clusters on Red Hat OpenShift Service on AWS (ROSA) or OpenShift Dedicated (OSD). -endif::openshift-rosa[] +endif::openshift-rosa-hcp,openshift-rosa[] === OVN-Kubernetes Container Network Interface (CNI) generally available @@ -1707,16 +1707,16 @@ spec: This release of {SMProductName} addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. -ifndef::openshift-rosa[] +ifndef::openshift-rosa-hcp,openshift-rosa[] == {SMProductName} on {product-dedicated} and Microsoft Azure Red Hat OpenShift {SMProductName} is now supported through {product-dedicated} and Microsoft Azure Red Hat OpenShift. -endif::openshift-rosa[] -ifdef::openshift-rosa[] +endif::openshift-rosa-hcp,openshift-rosa[] +ifdef::openshift-rosa-hcp,openshift-rosa[] == {SMProductName} on {product-dedicated} {SMProductName} is now supported through {product-dedicated}. -endif::openshift-rosa[] +endif::openshift-rosa-hcp,openshift-rosa[] == New features {SMProductName} 2.0.6 @@ -1903,12 +1903,12 @@ In addition, this release has the following new features: == New features {SMProductName} 2.0.2 -ifndef::openshift-rosa[] +ifndef::openshift-rosa-hcp,openshift-rosa[] This release of {SMProductName} adds support for {ibm-z-name} and {ibm-power-name} Systems. It also addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. -endif::openshift-rosa[] -ifdef::openshift-rosa[] +endif::openshift-rosa-hcp,openshift-rosa[] +ifdef::openshift-rosa-hcp,openshift-rosa[] This release of {SMProductName} addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 
-endif::openshift-rosa[] +endif::openshift-rosa-hcp,openshift-rosa[] == New features {SMProductName} 2.0.1 @@ -1932,6 +1932,6 @@ In addition, this release has the following new features: * Updates the ServiceMeshControlPlane resource to v2 with a streamlined configuration to make it easier to manage the {SMProductShortName} Control Plane. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa-hcp,openshift-rosa,openshift-dedicated[] * Introduces WebAssembly extensions as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa-hcp,openshift-rosa,openshift-dedicated[] diff --git a/modules/ossm-supported-configurations.adoc b/modules/ossm-supported-configurations.adoc index d0a87f977dda..89b3e08d6802 100644 --- a/modules/ossm-supported-configurations.adoc +++ b/modules/ossm-supported-configurations.adoc @@ -16,12 +16,12 @@ The following configurations are supported for the current release of {SMProduct The {SMProductName} Operator supports multiple versions of the `ServiceMeshControlPlane` resource. Version {MaistraVersion} {SMProductShortName} control planes are supported on the following platform versions: // Updating the list so that all 4 supported platforms appear in all versions; the wording works better that way and it removed the repeated ROSA listing. -ifdef::openshift-rosa,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * Red Hat OpenShift Container Platform version 4.10 or later -endif::openshift-rosa,openshift-dedicated[] -ifndef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * Red Hat {product-title} version 4.10 or later -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * {product-dedicated} version 4 * Azure Red Hat OpenShift (ARO) version 4 * Red Hat OpenShift Service on AWS (ROSA) @@ -46,14 +46,14 @@ Explicitly unsupported cases include: [id="ossm-supported-configurations-sm_{context}"] == Supported configurations for {SMProductShortName} -ifndef::openshift-rosa[] +ifndef::openshift-rosa,openshift-rosa-hcp[] * This release of {SMProductName} is only available on {product-title} x86_64, {ibm-z-name}, and {ibm-power-name}. ** {ibm-z-name} is only supported on {product-title} 4.10 and later. ** {ibm-power-name} is only supported on {product-title} 4.10 and later. -endif::openshift-rosa[] -ifdef::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] +ifdef::openshift-rosa,openshift-rosa-hcp[] * This release of {SMProductName} is only available on {product-title} x86_64. -endif::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] * Configurations where all {SMProductShortName} components are contained within a single {product-title} cluster. * Configurations that do not integrate external services such as virtual machines. * {SMProductName} does not support `EnvoyFilter` configuration except where explicitly documented. diff --git a/modules/ossm-tutorial-bookinfo-install.adoc b/modules/ossm-tutorial-bookinfo-install.adoc index ec10471f0e60..9116f44c74de 100644 --- a/modules/ossm-tutorial-bookinfo-install.adoc +++ b/modules/ossm-tutorial-bookinfo-install.adoc @@ -15,20 +15,20 @@ This tutorial walks you through how to create a sample application by creating a * {product-title} 4.1 or higher installed. 
* {SMProductName} {SMProductVersion} installed. * Access to the OpenShift CLI (`oc`). -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as`cluster-admin`. -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as a user with the `dedicated-admin` role. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] -ifndef::openshift-rosa[] +ifndef::openshift-rosa,openshift-rosa-hcp[] [NOTE] ==== The Bookinfo sample application cannot be installed on {ibm-z-name} and {ibm-power-name}. ==== -endif::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] [NOTE] ==== The commands in this section assume the {SMProductShortName} control plane project is `istio-system`. If you installed the control plane in another namespace, edit each command before you run it. diff --git a/modules/ossm-tutorial-bookinfo-verify-install.adoc b/modules/ossm-tutorial-bookinfo-verify-install.adoc index baa5d94b6aa1..28ef693828c9 100644 --- a/modules/ossm-tutorial-bookinfo-verify-install.adoc +++ b/modules/ossm-tutorial-bookinfo-verify-install.adoc @@ -14,12 +14,12 @@ To confirm that the sample Bookinfo application was successfully deployed, perfo * {SMProductName} installed. * Complete the steps for installing the Bookinfo sample app. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as`cluster-admin`. -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as a user with the `dedicated-admin` role. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Procedure from CLI diff --git a/modules/ossm-validate-smcp-cli.adoc b/modules/ossm-validate-smcp-cli.adoc index f3c507e95de4..7f59131c9035 100644 --- a/modules/ossm-validate-smcp-cli.adoc +++ b/modules/ossm-validate-smcp-cli.adoc @@ -11,12 +11,12 @@ You can validate the creation of the `ServiceMeshControlPlane` from the command * The {SMProductName} Operator must be installed. * Access to the OpenShift CLI (`oc`). -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as`cluster-admin`. -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as a user with the `dedicated-admin` role. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Procedure diff --git a/modules/ossm-validate-smcp-kiali.adoc b/modules/ossm-validate-smcp-kiali.adoc index fd6b5733ba85..69f83876250a 100644 --- a/modules/ossm-validate-smcp-kiali.adoc +++ b/modules/ossm-validate-smcp-kiali.adoc @@ -12,12 +12,12 @@ You can use the Kiali console to validate your {SMProductShortName} installation * The {SMProductName} Operator must be installed. 
* Access to the OpenShift CLI (`oc`). -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as`cluster-admin`. -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You are logged in to {product-title} as a user with the `dedicated-admin` role. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Procedure diff --git a/modules/support-submitting-a-case.adoc b/modules/support-submitting-a-case.adoc index 1b49c34013f1..5ad6727ddd2d 100644 --- a/modules/support-submitting-a-case.adoc +++ b/modules/support-submitting-a-case.adoc @@ -12,20 +12,20 @@ .Prerequisites -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You have access to the cluster as a user with the `cluster-admin` role. -endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You have access to the cluster as a user with the `dedicated-admin` role. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You have installed the OpenShift CLI (`oc`). -ifdef::openshift-rosa,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You have access to the {cluster-manager-first}. -endif::openshift-rosa,openshift-dedicated[] -ifndef::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You have a Red Hat Customer Portal account. * You have a Red Hat Standard or Premium subscription. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Procedure @@ -45,9 +45,9 @@ endif::openshift-rosa,openshift-dedicated[] .. Select *{product-title}* from the *Product* drop-down menu. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .. Select *{product-version}* from the *Version* drop-down. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. If the suggested articles do not address the issue, click *Continue*. @@ -87,9 +87,9 @@ $ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}' * When does this behavior occur? Frequency? Repeatedly? At certain times? . Upload relevant diagnostic data files and click *Continue*. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] It is recommended to include data gathered using the `oc adm must-gather` command as a starting point, plus any issue specific data that is not collected by that command. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] . Input relevant case management details and click *Continue*. 
diff --git a/service_mesh/v2x/installing-ossm.adoc b/service_mesh/v2x/installing-ossm.adoc index 4aae47fe95fc..e43ac5b1401a 100644 --- a/service_mesh/v2x/installing-ossm.adoc +++ b/service_mesh/v2x/installing-ossm.adoc @@ -15,12 +15,12 @@ This basic installation is configured based on the default OpenShift settings an .Prerequisites * Read the xref:../../service_mesh/v2x/preparing-ossm-installation.adoc#preparing-ossm-installation[Preparing to install {SMProductName}] process. -ifdef::openshift-rosa[] +ifdef::openshift-rosa,openshift-rosa-hcp[] * An account with the `cluster-admin` role. -endif::openshift-rosa[] -ifndef::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] +ifndef::openshift-rosa,openshift-rosa-hcp[] * An account with the `cluster-admin` role. If you use {product-dedicated}, you must have an account with the `dedicated-admin` role. -endif::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] The following steps show how to install a basic instance of {SMProductName} on {product-title}. @@ -33,13 +33,13 @@ include::modules/ossm-installation-activities.adoc[leveloffset=+1] include::modules/ossm-install-ossm-operator.adoc[leveloffset=+1] -ifndef::openshift-rosa[] +ifndef::openshift-rosa,openshift-rosa-hcp[] include::modules/ossm-config-operator-infrastructure-node.adoc[leveloffset=+1] include::modules/ossm-confirm-operator-infrastructure-node.adoc[leveloffset=+1] -endif::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] == Next steps diff --git a/service_mesh/v2x/ossm-create-smcp.adoc b/service_mesh/v2x/ossm-create-smcp.adoc index ad6aab2c7c4d..03ca06fafd18 100644 --- a/service_mesh/v2x/ossm-create-smcp.adoc +++ b/service_mesh/v2x/ossm-create-smcp.adoc @@ -14,7 +14,7 @@ include::modules/ossm-control-plane-cli.adoc[leveloffset=+2] include::modules/ossm-validate-smcp-cli.adoc[leveloffset=+2] -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/ossm-about-control-plane-components-and-infrastructure-nodes.adoc[leveloffset=+1] @@ -28,7 +28,7 @@ include::modules/ossm-config-individual-control-plane-infrastructure-node-cli.ad include::modules/ossm-confirm-smcp-infrastructure-node.adoc[leveloffset=+2] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/ossm-about-control-plane-and-cluster-wide-deployment.adoc[leveloffset=+1] diff --git a/service_mesh/v2x/ossm-extensions.adoc b/service_mesh/v2x/ossm-extensions.adoc index f2facd5a2201..138abbe54c65 100644 --- a/service_mesh/v2x/ossm-extensions.adoc +++ b/service_mesh/v2x/ossm-extensions.adoc @@ -8,13 +8,13 @@ toc::[] You can use WebAssembly extensions to add new features directly into the {SMProductName} proxies. This lets you move even more common functionality out of your applications, and implement them in a single language that compiles to WebAssembly bytecode. -ifndef::openshift-rosa[] +ifndef::openshift-rosa,openshift-rosa-hcp[] [NOTE] ==== WebAssembly extensions are not supported on {ibm-z-name} and {ibm-power-name}. 
==== -endif::openshift-rosa[] +endif::openshift-rosa,openshift-rosa-hcp[] include::modules/ossm-extensions-overview.adoc[leveloffset=+1] diff --git a/service_mesh/v2x/ossm-observability.adoc b/service_mesh/v2x/ossm-observability.adoc index a85fc8a286e6..e49790ab874c 100644 --- a/service_mesh/v2x/ossm-observability.adoc +++ b/service_mesh/v2x/ossm-observability.adoc @@ -39,7 +39,7 @@ include::modules/ossm-access-prometheus.adoc[leveloffset=+1] include::modules/ossm-integrating-with-user-workload-monitoring.adoc[leveloffset=+1] // Hiding as these assemblies not in ROSA/OSD -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [role="_additional-resources"] [id="additional-resources_user-workload-monitoring"] == Additional resources @@ -47,4 +47,4 @@ ifndef::openshift-rosa,openshift-dedicated[] * xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc[Enabling monitoring for user-defined projects] * xref:../../observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-installing.adoc[Installing the distributed tracing platform (Tempo)] * xref:../../observability/otel/otel-installing.adoc[Installing the Red Hat build of OpenTelemetry] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] diff --git a/service_mesh/v2x/ossm-performance-scalability.adoc b/service_mesh/v2x/ossm-performance-scalability.adoc index 697c384f8049..cd6bf7031a8a 100644 --- a/service_mesh/v2x/ossm-performance-scalability.adoc +++ b/service_mesh/v2x/ossm-performance-scalability.adoc @@ -14,7 +14,7 @@ include::modules/ossm-recommended-resources.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * xref:../../observability/distr_tracing/distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc#distr-tracing-deploy-default_deploying-distributed-tracing-platform[Configuring and deploying the distributed tracing platform Jaeger]. endif::[] diff --git a/service_mesh/v2x/ossm-reference-smcp.adoc b/service_mesh/v2x/ossm-reference-smcp.adoc index b00ac21943bf..e7ebca0a7d7d 100644 --- a/service_mesh/v2x/ossm-reference-smcp.adoc +++ b/service_mesh/v2x/ossm-reference-smcp.adoc @@ -21,9 +21,9 @@ For information about creating profiles, see the xref:../../service_mesh/v2x/oss For more detailed examples of security configuration, see xref:../../service_mesh/v2x/ossm-security.adoc#ossm-security-mtls_ossm-security[Mutual Transport Layer Security (mTLS)]. 
// No tech preview in ROSA/OSD -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/ossm-cr-techPreview.adoc[leveloffset=+2] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/ossm-cr-tracing.adoc[leveloffset=+2] diff --git a/service_mesh/v2x/ossm-route-migration.adoc b/service_mesh/v2x/ossm-route-migration.adoc index 8d4d9b2bce84..1f8657937fe6 100644 --- a/service_mesh/v2x/ossm-route-migration.adoc +++ b/service_mesh/v2x/ossm-route-migration.adoc @@ -15,4 +15,4 @@ include::modules/ossm-migrating-from-ior-to-explicitly-managed-routes.adoc[level == Additional resources * xref:../../networking/routes/route-configuration.adoc#nw-creating-a-route_route-configuration[Creating an HTTP-based Route] -* xref:../../service_mesh/v2x/ossm-traffic-manage.adoc#ossm-auto-route_traffic-management[Understanding automatic routes] \ No newline at end of file +* xref:../../service_mesh/v2x/ossm-traffic-manage.adoc#ossm-auto-route_traffic-management[Understanding automatic routes] diff --git a/service_mesh/v2x/ossm-security.adoc b/service_mesh/v2x/ossm-security.adoc index 2afefc60ab1f..47128c55b8f5 100644 --- a/service_mesh/v2x/ossm-security.adoc +++ b/service_mesh/v2x/ossm-security.adoc @@ -49,9 +49,9 @@ include::modules/ossm-cert-manager-installation.adoc[leveloffset=+2] == Additional resources For information about how to install the cert-manager Operator for {product-title}, see: -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] xref:../../security/cert_manager_operator/cert-manager-operator-install.adoc[Installing the cert-manager Operator for Red Hat OpenShift]. endif::[] -ifdef::openshift-rosa,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/security_and_compliance/index#cert-manager-operator-install[Installing the cert-manager Operator for Red Hat OpenShift]. 
-endif::[] \ No newline at end of file +endif::[] diff --git a/service_mesh/v2x/ossm-threescale-webassembly-module.adoc b/service_mesh/v2x/ossm-threescale-webassembly-module.adoc index f9fb7d76893f..b68995e6f7fa 100644 --- a/service_mesh/v2x/ossm-threescale-webassembly-module.adoc +++ b/service_mesh/v2x/ossm-threescale-webassembly-module.adoc @@ -73,4 +73,4 @@ include::modules/ossm-threescale-webassembly-module-mapping-rule-object.adoc[lev include::modules/ossm-threescale-webassembly-module-examples-for-credentials-use-cases.adoc[leveloffset=+1] -include::modules/ossm-threescale-webassembly-module-minimal-working-configuration.adoc[leveloffset=+1] \ No newline at end of file +include::modules/ossm-threescale-webassembly-module-minimal-working-configuration.adoc[leveloffset=+1] diff --git a/service_mesh/v2x/ossm-traffic-manage.adoc b/service_mesh/v2x/ossm-traffic-manage.adoc index b90a4baa2ad3..e1a2c4cf326e 100644 --- a/service_mesh/v2x/ossm-traffic-manage.adoc +++ b/service_mesh/v2x/ossm-traffic-manage.adoc @@ -11,12 +11,12 @@ Using {SMProductName}, you can control the flow of traffic and API calls between include::modules/ossm-gateways.adoc[leveloffset=+1] // Hiding in ROSA/OSD, dedicated-admin cannot create "services" or "deployments" -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/ossm-automatic-gateway-injection.adoc[leveloffset=+2] include::modules/ossm-deploying-automatic-gateway-injection.adoc[leveloffset=+2] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/ossm-routing-ingress.adoc[leveloffset=+2] diff --git a/service_mesh/v2x/ossm-troubleshooting-istio.adoc b/service_mesh/v2x/ossm-troubleshooting-istio.adoc index 3f86a809c029..11740bf59e4e 100644 --- a/service_mesh/v2x/ossm-troubleshooting-istio.adoc +++ b/service_mesh/v2x/ossm-troubleshooting-istio.adoc @@ -64,10 +64,10 @@ include::modules/support-knowledgebase-about.adoc[leveloffset=+2] include::modules/support-knowledgebase-search.adoc[leveloffset=+2] -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/ossm-about-collecting-ossm-data.adoc[leveloffset=+2] For prompt support, supply diagnostic information for both {product-title} and {SMProductName}. 
-endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/support-submitting-a-case.adoc[leveloffset=+2] diff --git a/service_mesh/v2x/servicemesh-release-notes.adoc b/service_mesh/v2x/servicemesh-release-notes.adoc index be7399783349..e279947afb1b 100644 --- a/service_mesh/v2x/servicemesh-release-notes.adoc +++ b/service_mesh/v2x/servicemesh-release-notes.adoc @@ -48,9 +48,9 @@ include::modules/ossm-release-2-3-12.adoc[leveloffset=+1] include::modules/ossm-rn-new-features.adoc[leveloffset=+1] // Tech preview not supported in ROSA/OSD -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/ossm-rn-technology-preview.adoc[leveloffset=+1] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/ossm-rn-deprecated-features.adoc[leveloffset=+1] From 3b0110675648231af206eeb27f4b76d2b12e294b Mon Sep 17 00:00:00 2001 From: Misha Ramendik Date: Tue, 21 Jan 2025 21:47:42 +0000 Subject: [PATCH 008/669] OSDOCS 13186 modify build_for_portal.py to detect images and not duplicate the entire image directory for every book --- build_for_portal.py | 59 ++++++++++++++++++++++++++++++++------------- 1 file changed, 42 insertions(+), 17 deletions(-) diff --git a/build_for_portal.py b/build_for_portal.py index 357bff6388ad..ffbe7d1bf7c4 100644 --- a/build_for_portal.py +++ b/build_for_portal.py @@ -455,32 +455,39 @@ def reformat_for_drupal(info): ensure_directory(images_dir) + # ADDED 21 Jan 2025: selective processing of images + # the set of file names is to be stored in image_files + # The initial value includes images defined in attributes (to copy every time) + image_files = set() + log.debug("Copying source files for " + book["Name"]) - copy_files(book, book_src_dir, src_dir, dest_dir, info) + copy_files(book, book_src_dir, src_dir, dest_dir, info, image_files) log.debug("Copying images for " + book["Name"]) - copy_images(book, src_dir, images_dir, distro) + copy_images(book, src_dir, images_dir, distro, image_files) -def copy_images(node, src_path, dest_dir, distro): +def copy_images(node, src_path, dest_dir, distro, image_files): """ Copy images over to the destination directory and flatten all image directories into the one top level dir. - """ - def dir_callback(dir_node, parent_dir, depth): - node_dir = os.path.join(parent_dir, dir_node["Dir"]) - src = os.path.join(node_dir, "images") - - if os.path.exists(src): - src_files = os.listdir(src) - for src_file in src_files: - shutil.copy(os.path.join(src, src_file), dest_dir) + REWORKED 21 Jan 2025: we now assume that there is a single images directory and + that all other images subdirectories are simply symlinks into it. So we do not + iterate over the tree but simply copy the necessary files from that one images directory + """ - iter_tree(node, distro, dir_callback, parent_dir=src_path) + images_source_dir = os.path.join(src_path, "images") + for image_file_name in image_files: + image_file_pathname = os.path.join(images_source_dir,image_file_name) + if os.path.exists(image_file_pathname): + shutil.copy(image_file_pathname, dest_dir) + # if an image file is not found, this is not an error, because it might + # have been picked up from a commented-out line. 
Actual missing images + # should be caught by the asciidoctor/asciibinder part of CI -def copy_files(node, book_src_dir, src_dir, dest_dir, info): +def copy_files(node, book_src_dir, src_dir, dest_dir, info, image_files): """ Recursively copy files from the source directory to the destination directory, making sure to scrub the content, add id's where the content is referenced elsewhere and fix any links that should be cross references. @@ -498,7 +505,7 @@ def topic_callback(topic_node, parent_dir, depth): dest_file = os.path.join(node_dest_dir, topic_node["File"] + ".adoc") # Copy the file - copy_file(info, book_src_dir, src_file, dest_dir, dest_file) + copy_file(info, book_src_dir, src_file, dest_dir, dest_file, image_files) iter_tree(node, info["distro"], dir_callback, topic_callback) @@ -509,6 +516,7 @@ def copy_file( src_file, dest_dir, dest_file, + image_files, include_check=True, tag=None, cwd=None, @@ -529,7 +537,7 @@ def copy_file( # os.mknod(dest_file) open(dest_file, "w").close() # Scrub/fix the content - content = scrub_file(info, book_src_dir, src_file, tag=tag, cwd=cwd) + content = scrub_file(info, book_src_dir, src_file, image_files, tag=tag, cwd=cwd) # Check for any includes if include_check: @@ -584,6 +592,7 @@ def copy_file( include_file, dest_dir, dest_include_file, + image_files, tag=include_tag, cwd=current_dir, ) @@ -612,8 +621,21 @@ def copy_file( with open(dest_file, "w") as f: f.write(content) +def detect_images(content, image_files): + """ + Detects all image file names referenced in the content, which is a readlines() output + Adds the filenames to the image_files set + Does NOT control for false positives such as commented out content, + because "false negatives" are worse -def scrub_file(info, book_src_dir, src_file, tag=None, cwd=None): + TEMPORARY: use both procedural and RE detection and report any misalignment + """ + image_pattern = re.compile(r'image::?([^\s\[]+)\[.*?\]') + + for content_str in content: + image_files.update({os.path.basename(f) for f in image_pattern.findall(content_str)}) + +def scrub_file(info, book_src_dir, src_file, image_files, tag=None, cwd=None): """ Scrubs a file and returns the cleaned file contents. """ @@ -657,6 +679,9 @@ def scrub_file(info, book_src_dir, src_file, tag=None, cwd=None): with open(src_file, "r") as f: src_file_content = f.readlines() + # detect image references in the content + detect_images(src_file_content, image_files) + # Scrub the content content = "" header_found = content_found = False From b1cea2e15921646725ff90db0c052c25e6f004c4 Mon Sep 17 00:00:00 2001 From: Brendan Daly Date: Thu, 16 Jan 2025 10:00:28 +0000 Subject: [PATCH 009/669] OCPBUGS-39579:adding NC2 refs --- modules/installation-nutanix-installer-infra-reqs.adoc | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/modules/installation-nutanix-installer-infra-reqs.adoc b/modules/installation-nutanix-installer-infra-reqs.adoc index ab325f25fa26..875f2fdd31f7 100644 --- a/modules/installation-nutanix-installer-infra-reqs.adoc +++ b/modules/installation-nutanix-installer-infra-reqs.adoc @@ -8,6 +8,13 @@ Before you install an {product-title} cluster, review the following Nutanix AOS environment requirements. +[id="installation-nutanix-installer-infrastructure-reqs_{context}"] +== Infrastructure requirements + +You can install {product-title} on on-premise Nutanix clusters, Nutanix Cloud Clusters (NC2) on {aws-first}, or NC2 on {azure-first}. 
+ +For more information, see link:https://www.nutanix.com/products/nutanix-cloud-clusters/aws[Nutanix Cloud Clusters on AWS] and link:https://www.nutanix.com/products/nutanix-cloud-clusters/azure[Nutanix Cloud Clusters on Microsoft Azure]. + [id="installation-nutanix-installer-infra-reqs-account_{context}"] == Required account privileges From 5ea1bd0efdd7a821b89afb32fedfc6672da34fad Mon Sep 17 00:00:00 2001 From: Alexandra Molnar Date: Thu, 16 Jan 2025 13:38:06 +0000 Subject: [PATCH 010/669] OCPBUGS-44260: Update step to modify existing json file in Configuring hub cluster with ArgoCD --- snippets/ztp-patch-argocd-hub-cluster.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/snippets/ztp-patch-argocd-hub-cluster.adoc b/snippets/ztp-patch-argocd-hub-cluster.adoc index a75f8fa72174..8880f245b914 100644 --- a/snippets/ztp-patch-argocd-hub-cluster.adoc +++ b/snippets/ztp-patch-argocd-hub-cluster.adoc @@ -36,7 +36,7 @@ Beginning with the MCE 2.10 release, RHEL 9 is the base image for `multicluster- ==== -- -.. Add the following configuration to the `out/argocd/deployment/argocd-openshift-gitops-patch.json` file: +.. Modify the `out/argocd/deployment/argocd-openshift-gitops-patch.json` file with the `multicluster-operators-subscription` image that matches your {rh-rhacm} version: + -- [source,json] From 52214d616632eb3a2ca292291bef0fff337573b4 Mon Sep 17 00:00:00 2001 From: Alisha Prabhu Date: Wed, 11 Dec 2024 11:10:23 +0530 Subject: [PATCH 011/669] OCP 4.18 IPI deployment on 4 new data centers of PowerVS --- ...installation-configuration-parameters.adoc | 22 ++++++++++++------- modules/installation-ibm-cloud-regions.adoc | 8 +++++++ 2 files changed, 22 insertions(+), 8 deletions(-) diff --git a/modules/installation-configuration-parameters.adoc b/modules/installation-configuration-parameters.adoc index 1899072ed36a..219db00a877d 100644 --- a/modules/installation-configuration-parameters.adoc +++ b/modules/installation-configuration-parameters.adoc @@ -743,44 +743,50 @@ ifdef::ibm-power-vs[] |platform: powervs: serviceInstanceGUID: -|The ServiceInstanceGUID is the ID of the Power IAAS instance created from the {ibm-cloud-name} Catalog. +|Specifies the ID of the Power IAAS instance created from the {ibm-cloud-name} Catalog. |String. For example, `existing_service_instance_GUID`. |platform: powervs: clusterOSImage: -|The ClusterOSImage is a pre-created {ibm-power-server-name} boot image that overrides the default image for cluster nodes. +|Specifies a pre-created {ibm-power-server-name} boot image that overrides the default image for cluster nodes. |String. For example, `existing_cluster_os_image`. |platform: powervs: defaultMachinePlatform: -|The DefaultMachinePlatform is the default configuration used when installing on {ibm-power-server-name} for machine pools that do not define their own platform configuration. +|Specifies the default configuration used when installing on {ibm-power-server-name} for machine pools that do not define their own platform configuration. |String. For example, `existing_machine_platform`. |platform: powervs: memoryGiB: -|The size of a virtual machine's memory, in GB. +|Specifies the size of a virtual machine's memory, in GB. |The valid integer must be an integer number of GB that is at least 2 and no more than 64, depending on the machine type. |platform: powervs: procType: -|The ProcType defines the processor sharing model for the instance. +|Defines the processor sharing model for the instance. 
|The valid values are Capped, Dedicated, and Shared. |platform: powervs: processors: -|The Processors defines the processing units for the instance. +|Defines the processing units for the instance. |The number of processors must be from .5 to 32 cores. The processors must be in increments of .25. |platform: powervs: sysType: -|The SysType defines the system type for the instance. -|The system type must be either `e980` or `s922`. +|Defines the system type for the instance. +|The system type must be `e980`, `s922`, `e1080`, or `s1022`. The available system types depend on the zone you want to target. + +|platform: + powervs: + tgName: +|Defines the name of an existing Transit Gateway. +|String. For example, `existing_tgName`. endif::ibm-power-vs[] |==== diff --git a/modules/installation-ibm-cloud-regions.adoc b/modules/installation-ibm-cloud-regions.adoc index a488d2c6e2b0..68b0648ed553 100644 --- a/modules/installation-ibm-cloud-regions.adoc +++ b/modules/installation-ibm-cloud-regions.adoc @@ -43,6 +43,8 @@ Deploying your cluster in the `eu-es` (Madrid, Spain) region is not supported fo endif::ibm-vpc[] ifdef::ibm-power-vs[] +* `tor` (Toronto, Canada) +** `tor01` * `dal` (Dallas, USA) ** `dal10` ** `dal12` @@ -61,9 +63,14 @@ ifdef::ibm-power-vs[] ** `sao04` * `syd` (Sydney, Australia) ** `syd04` +** `syd05` * `wdc` (Washington DC, USA) ** `wdc06` ** `wdc07` +* `us-east` (Washington DC, United States) +** `us-east` +* `us-south` (Dallas, United States) +** `us-south` You might optionally specify the {ibm-cloud-name} region in which the installation program creates any VPC components. @@ -74,6 +81,7 @@ If you do not specify the region, the installation program selects the region cl {ibm-cloud-name} supports the following regions: +* `us-east` * `us-south` * `eu-de` * `eu-es` From ab45b0af52323a22805dfa23996a8960d6c35ecd Mon Sep 17 00:00:00 2001 From: Lisa Pettyjohn Date: Wed, 11 Dec 2024 12:53:26 -0500 Subject: [PATCH 012/669] OSDOCS-12424#Automate LSO cleanup process --- ...istent-storage-local-removing-devices.adoc | 82 +++++++++++++++---- 1 file changed, 64 insertions(+), 18 deletions(-) diff --git a/modules/persistent-storage-local-removing-devices.adoc b/modules/persistent-storage-local-removing-devices.adoc index d10a613d34fa..3876c9541739 100644 --- a/modules/persistent-storage-local-removing-devices.adoc +++ b/modules/persistent-storage-local-removing-devices.adoc @@ -6,16 +6,11 @@ [id="local-removing-device_{context}"] = Removing a local volume or local volume set -Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed. - -[NOTE] -==== -The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource. -==== +Occasionally, you need to delete local volumes (LVs) and local volume sets (LVSs). .Prerequisites -* The persistent volume must be in a `Released` or `Available` state. +* The persistent volume (PV) must be in a `Released` or `Available` state. + [WARNING] ==== @@ -24,33 +19,84 @@ Deleting a persistent volume that is still in use can result in data loss or cor .Procedure -. Edit the previously created local volume to remove any unwanted disks. +To delete LVs or LVSs, complete the following steps: + +. 
If there are any bound PVs owned by the LV or LVS that is being deleted, delete the corresponding persistent volume claims (PVCs) to release the PVs: -.. Edit the cluster resource: +.. To find bound PVs owned by a particular LV or LVS, run the following command: + +[source, terminal] +---- +$ oc get pv --selector storage.openshift.com/owner-name= <1> +---- +<1> `` is the name of the LV or LVS. ++ +.Example output [source,terminal] ---- -$ oc edit localvolume -n openshift-local-storage +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE +local-pv-3fa1c73 5Gi RWO Delete Available slow 28s +local-pv-1cec77cf 30Gi RWX Retain Bound openshift/storage my-sc 168d ---- ++ +Bound PVs have a status of `Bound` and their corresponding PVCs appear in the `CLAIM` column. In the preceding example, PV `local-pv-1cec77cf` is bound, and its PVC is `openshift/storage`. -.. Navigate to the lines under `devicePaths`, and delete any representing unwanted disks. +.. Delete corresponding PVCs of bound PVs owned by the LV or LVS being deleted by running the following command: ++ +[source, terminal] +---- +$ oc delete pvc +---- ++ +In this example, you would delete PVC `openshift/storage`. -. Delete any persistent volumes created. +. Delete the LVs or LVSs by running the applicable following command: ++ +.Command for deleting LV + [source,terminal] ---- -$ oc delete pv +$ oc delete lv +---- ++ +or ++ +.Command for deleting LVS +[source,terminal] +---- +$ oc delete lvs ---- -. Delete directory and included symlinks on the node. +. If any PV owned by the LV or LVS has a `Retain` reclaim policy, back up any important data, and then delete the PV: + -[WARNING] +[NOTE] ==== -The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability. +PVs with a `Delete` policy are automatically deleted when you delete the LVs or LVS. ==== + +.. To find PVs with `Retain` reclaim policy, run the following command: ++ +[source,terminal] +---- +$ oc get pv +---- ++ +.Example output [source,terminal] ---- -$ oc debug node/ -- chroot /host rm -rf /mnt/local-storage/ <1> +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +local-pv-1cec77cf 30Gi RWX Retain Available my-sc 168d ---- -<1> The name of the storage class used to create the local volumes. ++ +In this example, PV `local-pv-1cec77cf` has a `Retain` reclaim policy and needs to be manually deleted. + +.. Back up any important data on this volume. + +.. Delete the PV by running the following command: ++ +[source,terminal] +---- +$ oc delete pv +---- ++ +In this example, delete PV `local-pv-1cec77cf`. 
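
As a convenience, after you have backed up any important data, you can combine the lookup and deletion of leftover PVs. The following sketch assumes that every PV still carrying the owner label of the deleted LV or LVS is safe to remove; `<name>` is a placeholder for the name of that object:

[source,terminal]
----
$ oc get pv --selector storage.openshift.com/owner-name=<name> -o name \
  | xargs --no-run-if-empty oc delete <1>
----
<1> Deletes every remaining PV owned by the LV or LVS in one pass. Do not run this command while any of these PVs is still bound.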
\ No newline at end of file From 82bb8f7e12c7a0ba82c3571657586aeac105040a Mon Sep 17 00:00:00 2001 From: Michael Burke Date: Tue, 10 Dec 2024 18:07:08 -0500 Subject: [PATCH 013/669] OCPNODE-2253 Sigstore Support - OpenShift Container Image Validation for namespaced policies --- ...des-sigstore-configure-cluster-policy.adoc | 223 +++++++++++++++ ...nodes-sigstore-configure-image-policy.adoc | 261 ++++++++++++++++++ modules/nodes-sigstore-configure.adoc | 87 ++++++ modules/nodes-sigstore-using-about.adoc | 7 +- nodes/nodes-sigstore-using.adoc | 23 +- 5 files changed, 596 insertions(+), 5 deletions(-) create mode 100644 modules/nodes-sigstore-configure-cluster-policy.adoc create mode 100644 modules/nodes-sigstore-configure-image-policy.adoc create mode 100644 modules/nodes-sigstore-configure.adoc diff --git a/modules/nodes-sigstore-configure-cluster-policy.adoc b/modules/nodes-sigstore-configure-cluster-policy.adoc new file mode 100644 index 000000000000..b5a33a8db765 --- /dev/null +++ b/modules/nodes-sigstore-configure-cluster-policy.adoc @@ -0,0 +1,223 @@ +// Module included in the following assemblies: +// +// * nodes/nodes-sigstore-using.adoc + +:_mod-docs-content-type: PROCEDURE +[id="nodes-sigstore-configure-cluster-policy_{context}"] += Creating a cluster image policy CR + +A `ClusterImagePolicy` custom resource (CR) enables a cluster administrator to configure a sigstore signature verification policy for the entire cluster. When enabled, the Machine Config Operator (MCO) watches the `ClusterImagePolicy` object and updates the `/etc/containers/policy.json` and `/etc/containers/registries.d/sigstore-registries.yaml` files on all the nodes in the cluster. + +The following example shows general guidelines on how to configure a `ClusterImagePolicy` object. For more details on the parameters, see "About cluster and image policy parameters." + +.Prerequisites +// Taken from https://issues.redhat.com/browse/OCPSTRAT-918 +* You have a sigstore-supported public key infrastructure (PKI) or a link:https://docs.sigstore.dev/cosign/[Cosign public and private key pair] for signing operations. +* You have a signing process in place to sign your images. +* You have access to a registry that supports Cosign signatures, if you are using Cosign signatures. +* You enabled the required Technology Preview features for your cluster by editing the `FeatureGate` CR named `cluster`: ++ +[source,terminal] +---- +$ oc edit featuregate cluster +---- ++ +.Example `FeatureGate` CR +[source,yaml] +---- +apiVersion: config.openshift.io/v1 +kind: FeatureGate +metadata: + name: cluster +spec: + featureSet: TechPreviewNoUpgrade <1> +---- +<1> Enables the required `SigstoreImageVerification` feature. ++ +[WARNING] +==== +Enabling the `TechPreviewNoUpgrade` feature set on your cluster cannot be undone and prevents minor version updates. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them. Do not enable this feature set on production clusters. +==== ++ +After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. + +.Procedure + +. Create a cluster image policy object similar to the following examples. See "About image policy parameters" for specific details on these parameters. 
+
--
.Example cluster image policy object with a public key policy and the `MatchRepoDigestOrExact` match policy
[source,yaml]
----
apiVersion: config.openshift.io/v1alpha1
kind: ClusterImagePolicy <1>
metadata:
  name: p1
spec:
  scopes: <2>
  - example.com
  policy: <3>
    rootOfTrust: <4>
      policyType: PublicKey <5>
      publicKey:
        keyData: a2V5RGF0YQ== <6>
        rekorKeyData: cmVrb3JLZXlEYXRh <7>
      signedIdentity: <8>
        matchPolicy: MatchRepoDigestOrExact
----
<1> Creates a `ClusterImagePolicy` object.
<2> Defines a list of repositories or images assigned to this policy. In a cluster image policy, make sure that the policy does not block the deployment of the {product-title} images in the `quay.io/openshift-release-dev/ocp-release` and `quay.io/openshift-release-dev/ocp-v4.0-art-dev` repositories. Images in these repositories are required for cluster operation.
<3> Specifies the parameters that define how the images are verified.
<4> Defines a root of trust for the policy.
<5> Specifies the policy types that define the root of trust, either a public key or a link:https://docs.sigstore.dev/certificate_authority/overview/[Fulcio certificate]. Here, a public key with Rekor verification.
<6> For a public key policy, specifies a base64-encoded public key in the PEM format. The maximum length is 8192 characters.
<7> Optional: Specifies a base64-encoded Rekor public key in the PEM format. The maximum length is 8192 characters.
<8> Optional: Specifies one of the following processes to verify the identity in the signature and the actual image identity:
* `MatchRepoDigestOrExact`.
* `MatchRepository`.
* `ExactRepository`. The `exactRepository` parameter must be specified.
* `RemapIdentity`. The `prefix` and `signedPrefix` parameters must be specified.
--
+
--
.Example cluster image policy object with a Fulcio certificate policy and the `remapIdentity` match policy
[source,yaml]
----
apiVersion: config.openshift.io/v1alpha1
kind: ClusterImagePolicy <1>
metadata:
  name: p1
spec:
  scopes: <2>
  - example.com
  policy: <3>
    rootOfTrust: <4>
      policyType: FulcioCAWithRekor <5>
      fulcioCAWithRekor: <6>
        fulcioCAData: a2V5RGF0YQ==
        fulcioSubject:
          oidcIssuer: "https://expected.OIDC.issuer/"
          signedEmail: "expected-signing-user@example.com"
        rekorKeyData: cmVrb3JLZXlEYXRh <7>
      signedIdentity:
        matchPolicy: RemapIdentity <8>
        remapIdentity:
          prefix: example.com <9>
          signedPrefix: mirror-example.com <10>
----
<1> Creates a `ClusterImagePolicy` object.
<2> Defines a list of repositories or images assigned to this policy. In a cluster image policy, make sure that the policy does not block the deployment of the {product-title} images in the `quay.io/openshift-release-dev/ocp-release` and `quay.io/openshift-release-dev/ocp-v4.0-art-dev` repositories. Images in these repositories are required for cluster operation.
<3> Specifies the parameters that define how the images are verified.
<4> Defines a root of trust for the policy.
<5> Specifies the policy types that define the root of trust, either a public key or a link:https://docs.sigstore.dev/certificate_authority/overview/[Fulcio certificate]. Here, a Fulcio certificate with required Rekor verification.
<6> For a Fulcio certificate policy, the following parameters are required:
* `fulcioCAData`: Specifies a base64-encoded Fulcio certificate in the PEM format. The maximum length is 8192 characters.
* `fulcioSubject`: Specifies the OIDC issuer and the email of the Fulcio authentication configuration.
<7> Specifies a base64-encoded Rekor public key in the PEM format. This parameter is required when the `policyType` is `FulcioCAWithRekor`. The maximum length is 8192 characters.
<8> Optional: Specifies one of the following processes to verify the identity in the signature and the actual image identity:
* `MatchRepoDigestOrExact`.
* `MatchRepository`.
* `ExactRepository`. The `exactRepository` parameter must be specified.
* `RemapIdentity`. The `prefix` and `signedPrefix` parameters must be specified.
<9> For the `remapIdentity` match policy, specifies the prefix that should be matched against the scoped image prefix. If the two match, the scoped image prefix is replaced with the value of `signedPrefix`. The maximum length is 512 characters.
<10> For the `remapIdentity` match policy, specifies the image prefix to be remapped, if needed. The maximum length is 512 characters.
--

. Create the cluster image policy object:
+
[source,terminal]
----
$ oc create -f <file_name>.yaml
----
+
The Machine Config Operator (MCO) updates the machine config pools (MCP) in your cluster.

.Verification

* After the nodes in your cluster are updated, you can verify that the cluster image policy has been configured:

.. Start a debug pod for the node by running the following command:
+
[source,terminal]
----
$ oc debug node/<node_name>
----

.. Set `/host` as the root directory within the debug shell by running the following command:
+
[source,terminal]
----
sh-5.1# chroot /host/
----

.. Examine the `policy.json` file by running the following command:
+
[source,terminal]
----
sh-5.1# cat /etc/containers/policy.json
----
+
.Example output for the cluster image policy object with a public key showing the new cluster image policy
[source,json]
----
# ...
  "transports": {
# ...
    "docker": {
      "example.com": [
        {
          "type": "sigstoreSigned",
          "keyData": "a2V5RGF0YQ==",
          "rekorPublicKeyData": "cmVrb3JLZXlEYXRh",
          "signedIdentity": {
            "type": "matchRepoDigestOrExact"
          }
        }
      ],
# ...
----
+
.Example output for the cluster image policy object with a Fulcio certificate showing the new cluster image policy
[source,json]
----
# ...
  "transports": {
# ...
    "docker": {
      "example.com": [
        {
          "type": "sigstoreSigned",
          "fulcio": {
            "caData": "a2V5RGF0YQ==",
            "oidcIssuer": "https://expected.OIDC.issuer/",
            "subjectEmail": "expected-signing-user@example.com"
          },
          "rekorPublicKeyData": "cmVrb3JLZXlEYXRh",
          "signedIdentity": {
            "type": "remapIdentity",
            "prefix": "example.com",
            "signedPrefix": "mirror-example.com"
          }
        }
      ],
# ...
----

.. Examine the `sigstore-registries.yaml` file by running the following command:
+
[source,terminal]
----
sh-5.1# cat /etc/containers/registries.d/sigstore-registries.yaml
----
+
.Example output showing that the scoped registry was added
[source,yaml]
----
docker:
  example.com:
    use-sigstore-attachments: true <1>
  quay.io/openshift-release-dev/ocp-release:
    use-sigstore-attachments: true
----
<1> When `true`, specifies that sigstore signatures are read along with the image.
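
For reference, the `keyData` value that these policies verify against is a base64-encoded PEM public key. The following sketch shows one way to produce such a key with the Cosign CLI and to sign an image with it; the image reference is a placeholder, not part of this module:

[source,terminal]
----
$ cosign generate-key-pair <1>
$ cosign sign --key cosign.key quay.io/<namespace>/<image>@sha256:<digest> <2>
$ base64 -w0 cosign.pub <3>
----
<1> Writes `cosign.key` (private key) and `cosign.pub` (PEM-encoded public key) to the current directory.
<2> Signs the image by digest and pushes the signature to the registry next to the image.
<3> Prints the base64 string that you can paste into the `keyData` field of the policy.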
+// https://github.com/openshift/api/blob/master/config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clusterimagepolicies-TechPreviewNoUpgrade.crd.yaml
diff --git a/modules/nodes-sigstore-configure-image-policy.adoc b/modules/nodes-sigstore-configure-image-policy.adoc
new file mode 100644
index 000000000000..2c13a5b9179e
--- /dev/null
+++ b/modules/nodes-sigstore-configure-image-policy.adoc
@@ -0,0 +1,261 @@
+// Module included in the following assemblies:
+//
+// * nodes/nodes-sigstore-using.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="nodes-sigstore-configure-image-policy_{context}"]
+= Creating an image policy CR
+
+An `ImagePolicy` custom resource (CR) enables a cluster administrator or application developer to configure a sigstore signature verification policy for a specific namespace. The MCO watches `ImagePolicy` instances in different namespaces and updates the `/etc/crio/policies/<namespace>.json` and `/etc/containers/registries.d/sigstore-registries.yaml` files on all the nodes in the cluster.
+
+[NOTE]
+====
+If a scoped image or repository in an image policy is nested under one of the scoped images or repositories in a cluster image policy, only the policy from the cluster image policy is applied. However, the image policy object is created with an error message. For example, if an image policy specifies `example.com/global/image`, and the cluster image policy specifies `example.com/global`, the namespace inherits the policy from the cluster image policy.
+====
+
+The following example shows general guidelines on how to configure an `ImagePolicy` object. For more details on the parameters, see "About cluster and image policy parameters".
+
+.Prerequisites
+// Taken from https://issues.redhat.com/browse/OCPSTRAT-918
+* You have a sigstore-supported public key infrastructure (PKI) or a link:https://docs.sigstore.dev/cosign/[Cosign public and private key pair] for signing operations.
+* You have a signing process in place to sign your images.
+* You have access to a registry that supports Cosign signatures, if you are using Cosign signatures.
+* If registry mirrors are configured for the {product-title} release image repositories, `quay.io/openshift-release-dev/ocp-release` and `quay.io/openshift-release-dev/ocp-v4.0-art-dev`, before enabling the Technology Preview feature set, you must mirror the sigstore signatures for the {product-title} release images into your mirror registry. Otherwise, the default `openshift` cluster image policy, which enforces signature verification for the release repository, blocks the ability of the Cluster Version Operator to move the CVO pod to new nodes, which prevents the node update that results from the feature set change.
++
+You can use the `oc image mirror` command to mirror the signatures. For example:
++
+[source,terminal]
+----
+$ oc image mirror quay.io/openshift-release-dev/ocp-release:sha256-1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef.sig \
+mirror.com/image/repo:sha256-1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef.sig
+----
+
+* You enabled the required Technology Preview features for your cluster by editing the `FeatureGate` CR named `cluster`:
++
+[source,terminal]
+----
+$ oc edit featuregate cluster
+----
++
+.Example `FeatureGate` CR
+[source,yaml]
+----
+apiVersion: config.openshift.io/v1
+kind: FeatureGate
+metadata:
+  name: cluster
+spec:
+  featureSet: TechPreviewNoUpgrade <1>
+----
+<1> Enables the required `SigstoreImageVerification` feature.
++ +[WARNING] +==== +Enabling the `TechPreviewNoUpgrade` feature set on your cluster cannot be undone and prevents minor version updates. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them. Do not enable this feature set on production clusters. +==== ++ +After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. + +.Procedure + +. Create an image policy object similar to the following examples. See "About cluster and image policy parameters" for specific details on these parameters. ++ +-- +.Example image policy object with a public key policy and the `MatchRepository` match policy +[source,yaml] +---- +apiVersion: config.openshift.io/v1alpha1 +kind: ImagePolicy <1> +metadata: + name: p0 + namespace: mynamespace <2> +spec: + scopes: <3> + - example.io/crio/signed + policy: <4> + rootOfTrust: <5> + policyType: PublicKey <6> + publicKey: + keyData: a2V5RGF0YQ== <7> + rekorKeyData: cmVrb3JLZXlEYXRh <8> + signedIdentity: + matchPolicy: MatchRepository <9> +---- +<1> Creates an `ImagePolicy` object. +<2> Specifies the namespace where the image policy is applied. +<3> Defines a list of repositories or images assigned to this policy. +<4> Specifies the parameters that define how the images are verified. +<5> Defines a root of trust for the policy. +<6> Specifies the policy types that define the root of trust, either a public key or a link:https://docs.sigstore.dev/certificate_authority/overview/[Fulcio certificate]. Here, a public key with Rekor verification. +<7> For a public key policy, specifies a base64-encoded public key in the PEM format. The maximum length is 8192 characters. +<8> Optional: Specifies a base64-encoded Rekor public key in the PEM format. The maximum length is 8192 characters. +<9> Optional: Specifies one of the following processes to verify the identity in the signature and the actual image identity: +* `MatchRepoDigestOrExact`. +* `MatchRepository`. +* `ExactRepository`. The `exactRepository` parameter must be specified. +* `RemapIdentity`. The `prefix` and `signedPrefix` parameters must be specified. +-- ++ +-- +.Example image policy object with a Fulcio certificate policy and the `ExactRepository` match policy +[source,yaml] +---- +apiVersion: config.openshift.io/v1alpha1 +kind: ImagePolicy <1> +metadata: + name: p1 + namespace: mynamespace <2> +spec: + scopes: <3> + - example.io/crio/signed + policy: <4> + rootOfTrust: <5> + policyType: FulcioCAWithRekor <6> + fulcioCAWithRekor: <7> + fulcioCAData: a2V5RGF0YQ== + fulcioSubject: + oidcIssuer: "https://expected.OIDC.issuer/" + signedEmail: "expected-signing-user@example.com" + rekorKeyData: cmVrb3JLZXlEYXRh <8> + signedIdentity: + matchPolicy: ExactRepository <9> + exactRepository: + repository: quay.io/crio/signed <10> +---- +<1> Creates an `ImagePolicy` object. +<2> Specifies the namespace where the image policy is applied. +<3> Defines a list of repositories or images assigned to this policy. +<4> Specifies the parameters that define how the images are verified. +<5> Defines a root of trust for the policy. +<6> Specifies the policy types that define the root of trust, either a public key or a link:https://docs.sigstore.dev/certificate_authority/overview/[Fulcio certificate]. Here, a Fulcio certificate with required Rekor verification. 
+<7> For a Fulcio certificate policy, the following parameters are required:
+* `fulcioCAData`: Specifies a base64-encoded Fulcio certificate in the PEM format. The maximum length is 8192 characters.
+* `fulcioSubject`: Specifies the OIDC issuer and the email of the Fulcio authentication configuration.
+<8> Specifies a base64-encoded Rekor public key in the PEM format. This parameter is required when the `policyType` is `FulcioCAWithRekor`. The maximum length is 8192 characters.
+<9> Optional: Specifies one of the following processes to verify the identity in the signature and the actual image identity:
+* `MatchRepoDigestOrExact`.
+* `MatchRepository`.
+* `ExactRepository`. The `exactRepository` parameter must be specified.
+* `RemapIdentity`. The `prefix` and `signedPrefix` parameters must be specified.
+<10> For the `exactRepository` match policy, specifies the repository that contains the image identity and signature.
+--
+
+. Create the image policy object:
++
+[source,terminal]
+----
+$ oc create -f <file_name>.yaml
+----
++
+The Machine Config Operator (MCO) updates the machine config pools (MCP) in your cluster.
+
+.Verification
+
+* After the nodes in your cluster are updated, you can verify that the image policy has been configured:
+
+.. Start a debug pod for the node by running the following command:
++
+[source,terminal]
+----
+$ oc debug node/<node_name>
+----
+
+.. Set `/host` as the root directory within the debug shell by running the following command:
++
+[source,terminal]
+----
+sh-5.1# chroot /host/
+----
+
+.. Examine the `<namespace>.json` file by running the following command:
++
+[source,terminal]
+----
+sh-5.1# cat /etc/crio/policies/<namespace>.json
+----
++
+.Example output for the image policy object with a public key showing the new image policy
+[source,json]
+----
+# ...
+  "transports": {
+# ...
+    "docker": {
+      "example.io/crio/signed": [
+        {
+          "type": "sigstoreSigned",
+          "keyData": "a2V5RGF0YQ==",
+          "rekorPublicKeyData": "cmVrb3JLZXlEYXRh",
+          "signedIdentity": {
+            "type": "matchRepository",
+            "dockerRepository": "example.org/crio/signed"
+          }
+# ...
+----
++
+.Example output for the image policy object with a Fulcio certificate showing the new image policy
+[source,json]
+----
+# ...
+  "transports": {
+# ...
+    "docker": {
+      "example.io/crio/signed": [
+        {
+          "type": "sigstoreSigned",
+          "fulcio": {
+            "caData": "a2V5RGF0YQ==",
+            "oidcIssuer": "https://expected.OIDC.issuer/",
+            "subjectEmail": "expected-signing-user@example.com"
+          },
+          "rekorPublicKeyData": "cmVrb3JLZXlEYXRh",
+          "signedIdentity": {
+            "type": "exactRepository",
+            "dockerRepository": "quay.io/crio/signed"
+          }
+        }
+      ],
+# ...
+----
+
+.. Examine the `sigstore-registries.yaml` file by running the following command:
++
+[source,terminal]
+----
+sh-5.1# cat /etc/containers/registries.d/sigstore-registries.yaml
+----
++
+.Example output showing that the scoped registry was added
+[source,yaml]
+----
+docker:
+  example.io/crio/signed:
+    use-sigstore-attachments: true <1>
+  quay.io/openshift-release-dev/ocp-release:
+    use-sigstore-attachments: true
+----
+<1> When `true`, specifies that sigstore signatures are read along with the image.
+
+.. Check the CRI-O log for sigstore signature verification by running the following command:
++
+[source,terminal]
+----
+sh-5.1# journalctl -u crio | grep -A 100 "Pulling image: example.io/crio"
+----
++
+.Example output with timestamp removed
+[source,terminal]
+----
+# ...
+msg="IsRunningImageAllowed for image docker:example.io/crio/signed:latest" file="signature/policy_eval.go:274" <1>
+msg="Using transport \"docker\" specific policy section \"example.io/crio/signed\"" file="signature/policy_eval.go:150" <2>
+msg="Reading /var/lib/containers/sigstore/crio/signed@sha256=18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a/signature-1" file="docker/docker_image_src.go:545"
+msg="Looking for Sigstore attachments in quay.io/crio/signed:sha256-18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a.sig" file="docker/docker_client.go:1138"
+msg="GET https://quay.io/v2/crio/signed/manifests/sha256-18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a.sig" file="docker/docker_client.go:617"
+msg="Content-Type from manifest GET is \"application/vnd.oci.image.manifest.v1+json\"" file="docker/docker_client.go:989"
+msg="Found a Sigstore attachment manifest with 1 layers" file="docker/docker_image_src.go:639"
+msg="Fetching Sigstore attachment 1/1: sha256:8276724a208087e73ae5d9d6e8f872f67808c08b0acdfdc73019278807197c45" file="docker/docker_image_src.go:644"
+# ...
+----
+<1> The `IsRunningImageAllowed` line confirms that the image is allowed by the configured sigstore verification policy.
+<2> The `Using transport \"docker\" specific policy section \"example.io/crio/signed\"` line confirms that the image policy has been applied.
diff --git a/modules/nodes-sigstore-configure.adoc b/modules/nodes-sigstore-configure.adoc
new file mode 100644
index 000000000000..54086cc8c886
--- /dev/null
+++ b/modules/nodes-sigstore-configure.adoc
@@ -0,0 +1,87 @@
+// Module included in the following assemblies:
+//
+// * nodes/nodes-sigstore-using.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nodes-sigstore-configure_{context}"]
+= About configuring sigstore support
+
+You can use the `ClusterImagePolicy` and `ImagePolicy` custom resource (CR) objects to enable and configure sigstore support for the entire cluster or a specific namespace. These objects contain a policy that specifies the images and repositories to be verified by using sigstore tooling and how the signatures must be verified.
+
+* Cluster image policy. A cluster image policy object enables a cluster administrator to configure a sigstore signature verification policy for the entire cluster. When enabled, the Machine Config Operator (MCO) watches the `ClusterImagePolicy` object and updates the `/etc/containers/policy.json` and `/etc/containers/registries.d/sigstore-registries.yaml` files on all nodes in the cluster.
++
+[IMPORTANT]
+====
+The default `openshift` cluster image policy provides sigstore support for the required {product-title} images. You must not remove or modify this cluster image policy object.
+====
+
+* Image policy. An image policy enables a cluster administrator or application developer to configure a sigstore signature verification policy for a specific namespace. The MCO watches an `ImagePolicy` instance in different namespaces and creates or updates the `/etc/crio/policies/<namespace>.json` and `/etc/containers/registries.d/sigstore-registries.yaml` files on all nodes in the cluster.
++
+If the image or repository in an image policy is nested under one of the images or repositories in a cluster image policy, only the policy from the cluster image policy is applied. For example, if an image policy specifies `example.com/global/image`, and the cluster image policy specifies `example.com/global`, the namespace uses the policy from the cluster image policy.
The image policy object is created and shows an error similar to the following message:
++
+.Example image policy with a conflicting image identity
+[source,yaml]
+----
+API Version:  config.openshift.io/v1alpha1
+Kind:         ImagePolicy
+Name:         p0
+Namespace:    mynamespace
+# ...
+Status:
+  Conditions:
+    Message: has conflicting scope(s) ["example.com/global/image"] that equal to or nest inside existing clusterimagepolicy, only policy from clusterimagepolicy scope(s) will be applied
+    Reason: ConflictScopes
+# ...
+----
+
+[id="nodes-sigstore-configure-parameters_{context}"]
+== About cluster and image policy parameters
+
+The following parameters apply to cluster and image policies. For information on using these parameters, see "Creating a cluster image policy CR" and "Creating an image policy CR."
+
+// Based on https://github.com/openshift/api/blob/master/config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_imagepolicies-TechPreviewNoUpgrade.crd.yaml
+
+`scopes`:: Defines a list of repositories and images assigned to a policy. You must list at least one of the following scopes:
++
+--
+* An individual image, by using a tag or digest, such as `example.com/namespace/image:latest`
+* A repository, by omitting the tag or digest, such as `example.com`
+* A repository namespace, such as `example.com/namespace/`
+* A registry host, by specifying only the host name and port number or a wildcard expression starting with `\*.`, such as `*.example.com`
+--
++
+If multiple scopes match a single scope in the same cluster or image policy, the policy for only the most specific scope applies.
++
+If a scoped image or repository in an image policy is nested under one of the scoped images or repositories in a cluster image policy, only the policy from the cluster image policy is applied. However, the image policy object is created. For example, if an image policy specifies `example.com/global/image`, and the cluster image policy specifies `example.com/global`, the namespace inherits the policy from the cluster image policy.
+
+`policy`:: Contains configuration to allow images from the sources listed in `scopes` to be verified, and defines how images not matching the verification policy are treated. You must configure a `rootOfTrust` and, optionally, a `signedIdentity`.
+* `rootOfTrust`: Specifies the root of trust for the policy. Configure either a public key or a link:https://docs.sigstore.dev/certificate_authority/overview/[Fulcio certificate].
+** `publicKey`: Indicates that the policy relies on a sigstore public key. You must specify a base64-encoded PEM format public key. You can optionally include link:https://docs.sigstore.dev/logging/overview/[Rekor verification].
+** `FulcioCAWithRekor`: Indicates that the policy is based on a Fulcio certificate. You must specify the following parameters:
+*** A base64-encoded PEM-format Fulcio CA
+*** An OpenID Connect (OIDC) issuer
+*** The email of the Fulcio authentication configuration
+*** The link:https://docs.sigstore.dev/logging/overview/[Rekor verification]
+* `signedIdentity`: Specifies the approach used to verify the image in the signature and the actual image itself. To configure a signed identity, you must specify one of the following parameters as the match policy:
+** `MatchRepoDigestOrExact`. The image referenced in the signature must be in the same repository as the image itself. If the image carries a tag, the image referenced in the signature must match exactly. This is the default.
+** `MatchRepository`. 
The image referenced in the signature must be in the same repository as the image itself. If the image carries a tag, the image referenced in the signature does not need to match exactly. This is useful, for example, for pulling an image by using the `latest` tag when the image is signed with a tag that specifies an exact image version.
+** `ExactRepository`. The image referenced in the signature must be in the same repository that is specified by the `exactRepository` parameter. The `exactRepository` parameter must be specified.
+** `RemapIdentity`. If the scoped repository or image matches a specified `prefix`, that prefix is replaced by a specified `signedPrefix`. If the image identity does not match, the `prefix` is unchanged and no remapping takes place. This option can be used when verifying signatures for a mirror of some other repository namespace that preserves the vendor's repository structure.
++
+The `prefix` and `signedPrefix` can be either `host[:port]` values that match the exact `host[:port]` string, repository namespaces, or repositories. The `prefix` and `signedPrefix` must not contain tags or digests. For example, to specify a single repository, use `example.com/library/busybox` and not `busybox`. To specify the parent namespace of `example.com/library/busybox`, you can use `example.com/library`.
++
+You must specify the following parameters:
++
+*** `prefix`: Specifies the image prefix to be matched.
+*** `signedPrefix`: Specifies the image prefix to be remapped, if needed.
+
+[id="nodes-sigstore-configure-parameters-modify_{context}"]
+== About modifying or removing image policies
+
+You can modify or remove a cluster image policy or an image policy by using the same commands as any other custom resource (CR) object.
+
+You can modify an existing policy by editing the policy YAML and running an `oc apply` command on the file or directly editing the `ClusterImagePolicy` or `ImagePolicy` object. Both methods apply the changes in the same manner.
+
+You can create multiple policies for a cluster or namespace. This allows you to create different policies for different images or repositories.
+
+You can remove a policy by deleting the `ClusterImagePolicy` and `ImagePolicy` objects.
diff --git a/modules/nodes-sigstore-using-about.adoc b/modules/nodes-sigstore-using-about.adoc
index 39ffe2ec583e..c306d91b457e 100644
--- a/modules/nodes-sigstore-using-about.adoc
+++ b/modules/nodes-sigstore-using-about.adoc
@@ -4,6 +4,9 @@
 :_mod-docs-content-type: CONCEPT
 [id="nodes-sigstore-using-about_{context}"]
-= About the sigstore project
+= About sigstore
+
+The sigstore project enables developers to sign off on what they build and administrators to verify signatures and monitor workflows at scale. With the sigstore project, signatures can be stored in the same registry as the build images. A second server is not needed. The identity piece of a signature is tied to the OpenID Connect (OIDC) identity through the Fulcio certificate authority, which simplifies the signature process by allowing keyless signing. Additionally, sigstore includes Rekor, which records signature metadata to an immutable, tamper-resistant ledger.
+
+You can use the `ClusterImagePolicy` and `ImagePolicy` custom resource (CR) objects to enable and configure sigstore support at the cluster or namespace scope. These objects specify the images and repositories to be verified and how the signatures must be verified.
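+
+For quick inspection from the CLI, the following commands are a minimal sketch; the `p0` and `mynamespace` names are taken from the examples in the configuration modules, so adjust them for your environment. They list the policies that are currently defined and remove one of them:
+
+[source,terminal]
+----
+$ oc get clusterimagepolicy
+$ oc get imagepolicy -n mynamespace
+$ oc delete imagepolicy p0 -n mynamespace
+----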
-The sigstore project enables developers to sign-off on what they build and administrators to verify signatures and monitor workflows at scale. With the sigstore project, signatures can be stored in the same registry as the build images. A second server is not needed. The identity piece of a signature is tied to the OpenID Connect (OIDC) identity through the Fulcio certificate authority, which simplifies the signature process by allowing key-less signing. Additionally, sigstore includes Rekor, which records signature metadata to an immutable, tamper-resistant ledger.
\ No newline at end of file
diff --git a/nodes/nodes-sigstore-using.adoc b/nodes/nodes-sigstore-using.adoc
index fac9b8354169..6c53189b6c02 100644
--- a/nodes/nodes-sigstore-using.adoc
+++ b/nodes/nodes-sigstore-using.adoc
@@ -6,12 +6,29 @@ include::_attributes/common-attributes.adoc[]
 
 toc::[]
 
-You can use the sigstore project with {product-title} to improve supply chain security.
+You can use link:https://www.sigstore.dev/[sigstore] with {product-title} to improve supply chain security.
+
+:FeatureName: sigstore support
+include::snippets/technology-preview.adoc[]
 
 // The following include statements pull in the module files that comprise
 // the assembly. Include any combination of concept, procedure, or reference
 // modules required to cover the user story. You can also include other
 // assemblies.
 
-// AManage secure signatures with SigStore
-include::modules/nodes-sigstore-using-about.adoc[leveloffset=+1]
\ No newline at end of file
+// Manage secure signatures with SigStore
+include::modules/nodes-sigstore-using-about.adoc[leveloffset=+1]
+include::modules/nodes-sigstore-configure.adoc[leveloffset=+1]
+include::modules/nodes-sigstore-configure-cluster-policy.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+xref:../nodes/nodes-sigstore-using.adoc#nodes-sigstore-configure-parameters_nodes-sigstore-using[About cluster and image policy parameters]
+
+include::modules/nodes-sigstore-configure-image-policy.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+
+xref:../nodes/nodes-sigstore-using.adoc#nodes-sigstore-configure-parameters_nodes-sigstore-using[About cluster and image policy parameters]
+
From f43115b54ab201de36ae97d8fb5a31ce4c4f458f Mon Sep 17 00:00:00 2001
From: Shane Lovern
Date: Wed, 11 Dec 2024 13:41:15 +0000
Subject: [PATCH 014/669] CNF-13831 - TELCODOCS-1871 RDMA CNI for SR-IOV

---
 _topic_maps/_topic_map.yml                 |   2 +
 modules/nw-configuring-sriov-rdma-cni.adoc | 164 ++++++++++++++++++
 .../configuring-sriov-rdma-cni.adoc        |  13 ++
 3 files changed, 179 insertions(+)
 create mode 100644 modules/nw-configuring-sriov-rdma-cni.adoc
 create mode 100644 networking/hardware_networks/configuring-sriov-rdma-cni.adoc

diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml
index 60157538b868..f9055ef016ea 100644
--- a/_topic_maps/_topic_map.yml
+++ b/_topic_maps/_topic_map.yml
@@ -1568,6 +1568,8 @@ Topics:
     File: configuring-sriov-net-attach
   - Name: Configuring an SR-IOV InfiniBand network attachment
     File: configuring-sriov-ib-attach
+  - Name: Configuring an RDMA subsystem for SR-IOV
+    File: configuring-sriov-rdma-cni
   - Name: Adding a pod to an SR-IOV network
     File: add-pod
   - Name: Configuring interface-level network sysctl settings and all-multicast mode for SR-IOV networks
diff --git a/modules/nw-configuring-sriov-rdma-cni.adoc b/modules/nw-configuring-sriov-rdma-cni.adoc
new file mode 100644
index 000000000000..9ed6b8065ffd
--- /dev/null
+++ b/modules/nw-configuring-sriov-rdma-cni.adoc
@@ -0,0 +1,164 @@
+// Module included in the following assemblies:
+//
+// * networking/hardware_networks/configuring-sriov-rdma-cni.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="nw-sriov-configuring-sriov-rdma-cni_{context}"]
+= Configuring SR-IOV RDMA CNI
+
+Configure an RDMA CNI on SR-IOV.
+
+[NOTE]
+====
+This procedure applies only to Mellanox devices.
+====
+
+.Prerequisites
+
+* You have installed the OpenShift CLI (`oc`).
+* You have access to the cluster as a user with the `cluster-admin` role.
+* You have installed the SR-IOV Network Operator.
+
+.Procedure
+
+. Create an `SriovNetworkPoolConfig` CR and save it as `sriov-nw-pool.yaml`, as shown in the following example:
++
+.Example `SriovNetworkPoolConfig` CR
+[source,yaml]
+----
+apiVersion: sriovnetwork.openshift.io/v1
+kind: SriovNetworkPoolConfig
+metadata:
+  name: worker
+  namespace: openshift-sriov-network-operator
+spec:
+  maxUnavailable: 1
+  nodeSelector:
+    matchLabels:
+      node-role.kubernetes.io/worker: ""
+  rdmaMode: exclusive <1>
+----
+<1> Set RDMA network namespace mode to `exclusive`.
+
+. Create the `SriovNetworkPoolConfig` resource by running the following command:
++
+[source,terminal]
+----
+$ oc create -f sriov-nw-pool.yaml
+----
+
+. Create an `SriovNetworkNodePolicy` CR and save it as `sriov-node-policy.yaml`, as shown in the following example:
++
+.Example `SriovNetworkNodePolicy` CR
+[source,yaml]
+----
+apiVersion: sriovnetwork.openshift.io/v1
+kind: SriovNetworkNodePolicy
+metadata:
+  name: sriov-nic-pf1
+  namespace: openshift-sriov-network-operator
+spec:
+  deviceType: netdevice
+  isRdma: true <1>
+  nicSelector:
+    pfNames: ["ens3f0np0"]
+  nodeSelector:
+    node-role.kubernetes.io/worker: ""
+  numVfs: 4
+  priority: 99
+  resourceName: sriov_nic_pf1
+----
+<1> Activate RDMA mode.
+
+. Create the `SriovNetworkNodePolicy` resource by running the following command:
++
+[source,terminal]
+----
+$ oc create -f sriov-node-policy.yaml
+----
+
+. Create an `SriovNetwork` CR and save it as `sriov-network.yaml`, as shown in the following example:
++
+.Example `SriovNetwork` CR
+[source,yaml]
+----
+apiVersion: sriovnetwork.openshift.io/v1
+kind: SriovNetwork
+metadata:
+  name: sriov-nic-pf1
+  namespace: openshift-sriov-network-operator
+spec:
+  networkNamespace: sriov-tests
+  resourceName: sriov_nic_pf1
+  ipam: |-
+  metaPlugins: |
+    {
+      "type": "rdma" <1>
+    }
+----
+<1> Adds the RDMA CNI plugin.
+
+. Create the `SriovNetwork` resource by running the following command:
++
+[source,terminal]
+----
+$ oc create -f sriov-network.yaml
+----
+
+.Verification
+
+. Create a `Pod` CR and save it as `sriov-test-pod.yaml`, as shown in the following example:
++
+.Example runtime configuration
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sample-pod
+  annotations:
+    k8s.v1.cni.cncf.io/networks: |-
+      [
+        {
+          "name": "net1",
+          "mac": "20:04:0f:f1:88:01",
+          "ips": ["192.168.10.1/24", "2001::1/64"]
+        }
+      ]
+spec:
+  containers:
+  - name: sample-container
+    image: <image>
+    imagePullPolicy: IfNotPresent
+    command: ["sleep", "infinity"]
+----
+
+. Create the test pod by running the following command:
++
+[source,terminal]
+----
+$ oc create -f sriov-test-pod.yaml
+----
+
+. Log in to the test pod by running the following command:
++
+[source,terminal]
+----
+$ oc rsh -n sriov-tests sample-pod
+----
+
+. Verify that the path to the `hw_counters` directory exists by running the following command:
++
+[source,terminal]
+----
+$ ls /sys/bus/pci/devices/${PCIDEVICE_OPENSHIFT_IO_SRIOV_NIC_PF1}/infiniband/*/ports/1/hw_counters/
+----
++
+.Example output
+[source,terminal]
+----
+duplicate_request out_of_buffer req_cqe_flush_error resp_cqe_flush_error roce_adp_retrans roce_slow_restart_trans
+implied_nak_seq_err out_of_sequence req_remote_access_errors resp_local_length_error roce_adp_retrans_to rx_atomic_requests
+lifespan packet_seq_err req_remote_invalid_request resp_remote_access_errors roce_slow_restart rx_read_requests
+local_ack_timeout_err req_cqe_error resp_cqe_error rnr_nak_retry_err roce_slow_restart_cnps rx_write_requests
+----
\ No newline at end of file
diff --git a/networking/hardware_networks/configuring-sriov-rdma-cni.adoc b/networking/hardware_networks/configuring-sriov-rdma-cni.adoc
new file mode 100644
index 000000000000..3c2ae3dd8e12
--- /dev/null
+++ b/networking/hardware_networks/configuring-sriov-rdma-cni.adoc
@@ -0,0 +1,13 @@
+:_mod-docs-content-type: ASSEMBLY
+[id="configuring-sriov-rdma-cni"]
+= Configuring an RDMA subsystem for SR-IOV
+include::_attributes/common-attributes.adoc[]
+:context: configuring-sriov-rdma-cni
+
+toc::[]
+
+Remote Direct Memory Access (RDMA) allows direct memory access between two systems without involving the operating system of either system.
+You can configure an RDMA Container Network Interface (CNI) on Single Root I/O Virtualization (SR-IOV) to enable high-performance, low-latency communication between containers.
+When you combine RDMA with SR-IOV, you provide a mechanism to expose hardware counters of Mellanox Ethernet devices for use inside Data Plane Development Kit (DPDK) applications.
+
+include::modules/nw-configuring-sriov-rdma-cni.adoc[leveloffset=+1]
\ No newline at end of file
From 606b8301a00d9419cb61f752ffbd02d68035945a Mon Sep 17 00:00:00 2001
From: SNiemann15
Date: Thu, 23 Jan 2025 12:28:06 +0100
Subject: [PATCH 015/669] OCPBUGS48728 swap LPAR and KVM preferred OS reqs

---
 modules/preferred-installation-requirements-ibm-z.adoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/preferred-installation-requirements-ibm-z.adoc b/modules/preferred-installation-requirements-ibm-z.adoc
index 9a222546356c..884d3a076daf 100644
--- a/modules/preferred-installation-requirements-ibm-z.adoc
+++ b/modules/preferred-installation-requirements-ibm-z.adoc
@@ -47,13 +47,13 @@ When installing in a z/VM environment, you can also bridge HiperSockets with one
 
 |{product-title} control plane machines
 |Three guest virtual machines
-|Three guest virtual machines
 |Three LPARs
+|Three guest virtual machines
 
 |{product-title} compute machines
 |Six guest virtual machines
-|Six guest virtual machines
 |Six LPARs
+|Six guest virtual machines
 
 |Temporary {product-title} bootstrap machine
 |One machine
From 634bfa369d302ab67fdfd769085b5374248d64fd Mon Sep 17 00:00:00 2001
From: Lisa Pettyjohn
Date: Fri, 6 Dec 2024 14:25:47 -0500
Subject: [PATCH 016/669] OSDOCS#12427: Multiple vCenter support for vSphere
 CSI TP->GA

---
 ...ng-restricted-networks-installer-provisioned-vsphere.adoc | 3 ---
 ...talling-vsphere-installer-provisioned-customizations.adoc | 3 ---
 ...vsphere-installer-provisioned-network-customizations.adoc | 3 ---
 .../ipi/installing-vsphere-installer-provisioned.adoc        | 3 ---
 .../upi/installing-restricted-networks-vsphere.adoc          | 3 ---
 .../upi/installing-vsphere-network-customizations.adoc       | 3 ---
installing/installing_vsphere/upi/installing-vsphere.adoc | 3 --- .../persistent-storage-csi-vsphere.adoc | 5 +---- 8 files changed, 1 insertion(+), 25 deletions(-) diff --git a/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.adoc b/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.adoc index 92f3eb677de7..9e49435ebcfb 100644 --- a/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.adoc +++ b/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.adoc @@ -8,9 +8,6 @@ toc::[] In {product-title} {product-version}, you can install a cluster on VMware vSphere infrastructure in a restricted network by creating an internal mirror of the installation release content. -:FeatureName: Support for multiple vCenters -include::snippets/technology-preview.adoc[] - [id="prerequisites_installing-restricted-networks-installer-provisioned-vsphere"] == Prerequisites diff --git a/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.adoc b/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.adoc index 775b65c9420a..ae8e030400b8 100644 --- a/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.adoc +++ b/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.adoc @@ -10,9 +10,6 @@ toc::[] In {product-title} version {product-version}, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. To customize the installation, you modify parameters in the `install-config.yaml` file before you install the cluster. -:FeatureName: Support for multiple vCenters -include::snippets/technology-preview.adoc[] - [id="prerequisites_installing-vsphere-installer-provisioned-customizations"] == Prerequisites diff --git a/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc b/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc index baecaa816b18..63e66f645308 100644 --- a/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc +++ b/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc @@ -11,9 +11,6 @@ VMware vSphere instance by using installer-provisioned infrastructure with custo You must set most of the network configuration parameters during installation, and you can modify only `kubeProxy` configuration parameters in a running cluster. -:FeatureName: Support for multiple vCenters -include::snippets/technology-preview.adoc[] - [id="prerequisites_installing-vsphere-installer-provisioned-network-customizations"] == Prerequisites diff --git a/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.adoc b/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.adoc index 435cecd6a356..756bdc9c3855 100644 --- a/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.adoc +++ b/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.adoc @@ -9,9 +9,6 @@ toc::[] In {product-title} version {product-version}, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. 
-:FeatureName: Support for multiple vCenters -include::snippets/technology-preview.adoc[] - [id="prerequisites_installing-vsphere-installer-provisioned_{context}"] == Prerequisites diff --git a/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.adoc b/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.adoc index 4d82cb156909..c43b31174872 100644 --- a/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.adoc +++ b/installing/installing_vsphere/upi/installing-restricted-networks-vsphere.adoc @@ -9,9 +9,6 @@ toc::[] In {product-title} version {product-version}, you can install a cluster on VMware vSphere infrastructure that you provision in a restricted network. -:FeatureName: Support for multiple vCenters -include::snippets/technology-preview.adoc[] - [IMPORTANT] ==== The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of {product-title}. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. diff --git a/installing/installing_vsphere/upi/installing-vsphere-network-customizations.adoc b/installing/installing_vsphere/upi/installing-vsphere-network-customizations.adoc index cc9517615d2b..27eda9d0dc60 100644 --- a/installing/installing_vsphere/upi/installing-vsphere-network-customizations.adoc +++ b/installing/installing_vsphere/upi/installing-vsphere-network-customizations.adoc @@ -12,9 +12,6 @@ configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. -:FeatureName: Support for multiple vCenters -include::snippets/technology-preview.adoc[] - You must set most of the network configuration parameters during installation, and you can modify only `kubeProxy` configuration parameters in a running cluster. diff --git a/installing/installing_vsphere/upi/installing-vsphere.adoc b/installing/installing_vsphere/upi/installing-vsphere.adoc index 725c1538321a..4b970468672f 100644 --- a/installing/installing_vsphere/upi/installing-vsphere.adoc +++ b/installing/installing_vsphere/upi/installing-vsphere.adoc @@ -10,9 +10,6 @@ toc::[] In {product-title} version {product-version}, you can install a cluster on VMware vSphere infrastructure that you provision. -:FeatureName: Support for multiple vCenters -include::snippets/technology-preview.adoc[] - [IMPORTANT] ==== The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of {product-title}. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 
diff --git a/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc b/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc
index 4492ba386c58..f0997ea91fbb 100644
--- a/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc
@@ -16,7 +16,7 @@ To create CSI-provisioned persistent volumes (PVs) that mount to vSphere storage
 
 * *vSphere CSI Driver Operator*: The Operator provides a storage class, called `thin-csi`, that you can use to create persistent volumes claims (PVCs). The vSphere CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage. You can disable this default storage class if desired (see xref:../../storage/container_storage_interface/persistent-storage-csi-sc-manage.adoc#persistent-storage-csi-sc-manage[Managing the default storage class]).
 
-* *vSphere CSI driver*: The driver enables you to create and mount vSphere PVs. In {product-title} 4.17, the driver version is 3.2.0 The vSphere CSI driver supports all of the file systems supported by the underlying Red Hat Core operating system release, including XFS and Ext4. For more information about supported file systems, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_file_systems/overview-of-available-file-systems_managing-file-systems[Overview of available file systems].
+* *vSphere CSI driver*: The driver enables you to create and mount vSphere PVs. In {product-title} 4.18, the driver version is 3.3.1. The vSphere CSI driver supports all of the file systems supported by the underlying Red Hat Core operating system release, including XFS and Ext4. For more information about supported file systems, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_file_systems/overview-of-available-file-systems_managing-file-systems[Overview of available file systems].
 
 //Please update driver version as needed with each major OCP release starting with 4.13.
@@ -75,9 +75,6 @@ include::modules/persistent-storage-csi-vsphere-encryption-tag-based.adoc[levelo
 
 include::modules/persistent-storage-csi-vsphere-multi-vcenter-support-overview.adoc[leveloffset=+1]
 
-:FeatureName: Multiple vCenter support for vSphere CSI
-include::snippets/technology-preview.adoc[leveloffset=+1]
-
 include::modules/persistent-storage-csi-vsphere-multi-vcenter-support-procedure-install.adoc[leveloffset=+2]
 
 [role="_additional-resources"]
 .Additional resources
From 13b424ecfaf02d789202b1e1a6a86bbac0195f5c Mon Sep 17 00:00:00 2001
From: Ben Scott
Date: Mon, 6 Jan 2025 11:52:54 -0500
Subject: [PATCH 017/669] OSDOCS-12856 adding note that not all instances work
 in all zones

---
 modules/dynamic-provisioning-gce-definition.adoc   | 4 ++--
 modules/installation-gcp-tested-machine-types.adoc | 7 +++++++
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/modules/dynamic-provisioning-gce-definition.adoc b/modules/dynamic-provisioning-gce-definition.adoc
index 24b866333322..d8038be4992f 100644
--- a/modules/dynamic-provisioning-gce-definition.adoc
+++ b/modules/dynamic-provisioning-gce-definition.adoc
@@ -14,11 +14,11 @@ metadata:
   name: <1>
 provisioner: kubernetes.io/gce-pd
 parameters:
-  type: pd-standard <2>
+  type: pd-ssd <2>
   replication-type: none
 volumeBindingMode: WaitForFirstConsumer
 allowVolumeExpansion: true
 reclaimPolicy: Delete
 ----
 <1> Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
-<2> Select either `pd-standard` or `pd-ssd`. The default is `pd-standard`.
+<2> Select `pd-ssd`, `pd-standard`, or `hyperdisk-balanced`. The default is `pd-ssd`.
diff --git a/modules/installation-gcp-tested-machine-types.adoc b/modules/installation-gcp-tested-machine-types.adoc
index ed1951576f26..0da04bf2f752 100644
--- a/modules/installation-gcp-tested-machine-types.adoc
+++ b/modules/installation-gcp-tested-machine-types.adoc
@@ -15,6 +15,13 @@
 The following Google Cloud Platform instance types have been tested with
 {product-title}.
 
+[NOTE]
+====
+Not all instance types are available in all regions and zones. For a detailed breakdown of which instance types are available in which zones, see link:https://cloud.google.com/compute/docs/regions-zones#available[regions and zones] (Google documentation).
+
+Some instance types require the use of Hyperdisk storage. If you use one of these instance types, all of the nodes in your cluster must support Hyperdisk storage, and you must change the default storage class to use it. For more information, see link:https://cloud.google.com/compute/docs/disks/hyperdisks#machine-type-support[machine series support for Hyperdisk] (Google documentation). For instructions on modifying storage classes, see the "GCE PersistentDisk (gcePD) object definition" section in the Dynamic Provisioning page in _Storage_.
+==== + .Machine series [%collapsible] ==== From 10cb5306fdb351e87d993b097d9ceeb65a5a53e3 Mon Sep 17 00:00:00 2001 From: Brian Dooley Date: Wed, 22 Jan 2025 09:12:52 +0000 Subject: [PATCH 018/669] SRVCOM-3416 Adds 1.35 to _page_openshift.html.erb --- _templates/_page_openshift.html.erb | 1 + 1 file changed, 1 insertion(+) diff --git a/_templates/_page_openshift.html.erb b/_templates/_page_openshift.html.erb index 925cd507852a..53eb70b488e4 100644 --- a/_templates/_page_openshift.html.erb +++ b/_templates/_page_openshift.html.erb @@ -280,6 +280,7 @@ <%= distro %> + From c7d455ce594592f7ab2abcadaf08800cb749065a Mon Sep 17 00:00:00 2001 From: Talia Shwartzberg Date: Tue, 28 Jan 2025 10:42:45 +0200 Subject: [PATCH 123/669] HCIDOCS-586-OCP: OCI integration day 2 --- .../installing-oci-assisted-installer.adoc | 4 ++++ modules/installing-oci-adding-hosts-day-two.adoc | 11 +++++++++++ 2 files changed, 15 insertions(+) create mode 100644 modules/installing-oci-adding-hosts-day-two.adoc diff --git a/installing/installing_oci/installing-oci-assisted-installer.adoc b/installing/installing_oci/installing-oci-assisted-installer.adoc index 0a20edd02542..8794b83d5bb9 100644 --- a/installing/installing_oci/installing-oci-assisted-installer.adoc +++ b/installing/installing_oci/installing-oci-assisted-installer.adoc @@ -66,6 +66,10 @@ include::modules/complete-assisted-installer-oci-custom-manifests.adoc[leveloffs // Verifying a successful cluster installation on OCI include::modules/verifying-cluster-install-ai-oci.adoc[leveloffset=+1] +// Adding hosts to the cluster following the installation +include::modules/installing-oci-adding-hosts-day-two.adoc[leveloffset=+1] + + // Troubleshooting the installation of a cluster on OCI include::modules/installing-troubleshooting-assisted-installer-oci.adoc[leveloffset=+1] diff --git a/modules/installing-oci-adding-hosts-day-two.adoc b/modules/installing-oci-adding-hosts-day-two.adoc new file mode 100644 index 000000000000..3044b3b39c05 --- /dev/null +++ b/modules/installing-oci-adding-hosts-day-two.adoc @@ -0,0 +1,11 @@ +// Module included in the following assemblies: +// +// * installing/installing_oci/installing-oci-assisted-installer.adoc + +:_mod-docs-content-type: PROCEDURE +[id="installing-oci-adding-hosts-day-two_{context}"] += Adding hosts to the cluster following the installation + +After creating a cluster with the {ai-full}, you can use the {hybrid-console} to add new host nodes to the cluster and approve their certificate signing requests (CRSs). + +For details, see link:https://docs.oracle.com/en-us/iaas/Content/openshift-on-oci/adding-nodes.htm[Adding Nodes to a Cluster (Oracle documentation)]. 
From 03644a289386edb4953aaba8dd860cab818fadc7 Mon Sep 17 00:00:00 2001 From: shreyasiddhartha Date: Tue, 28 Jan 2025 12:44:29 +0530 Subject: [PATCH 124/669] OSSM-8570: 2.6.5 (At Stage): [DOC] Release Notes --- _attributes/common-attributes.adoc | 2 +- modules/ossm-release-2-4-14.adoc | 31 ++++++++++++++ modules/ossm-release-2-5-8.adoc | 31 ++++++++++++++ modules/ossm-release-2-6-0.adoc | 7 ---- modules/ossm-release-2-6-4.adoc | 2 - modules/ossm-release-2-6-5.adoc | 42 +++++++++++++++++++ .../v2x/servicemesh-release-notes.adoc | 6 +++ 7 files changed, 111 insertions(+), 10 deletions(-) create mode 100644 modules/ossm-release-2-4-14.adoc create mode 100644 modules/ossm-release-2-5-8.adoc create mode 100644 modules/ossm-release-2-6-5.adoc diff --git a/_attributes/common-attributes.adoc b/_attributes/common-attributes.adoc index caf27fbea995..fe4d672216eb 100644 --- a/_attributes/common-attributes.adoc +++ b/_attributes/common-attributes.adoc @@ -187,7 +187,7 @@ endif::[] :product-rosa: Red Hat OpenShift Service on AWS :SMProductName: Red Hat OpenShift Service Mesh :SMProductShortName: Service Mesh -:SMProductVersion: 2.6.4 +:SMProductVersion: 2.6.5 :MaistraVersion: 2.6 :KialiProduct: Kiali Operator provided by Red Hat :SMPlugin: OpenShift Service Mesh Console (OSSMC) plugin diff --git a/modules/ossm-release-2-4-14.adoc b/modules/ossm-release-2-4-14.adoc new file mode 100644 index 000000000000..5eadf276ff22 --- /dev/null +++ b/modules/ossm-release-2-4-14.adoc @@ -0,0 +1,31 @@ +//// +Module included in the following assemblies: +* service_mesh/v2x/servicemesh-release-notes.adoc +//// + +:_mod-docs-content-type: REFERENCE +[id="ossm-release-2-4-14_{context}"] += {SMProductName} version 2.4.14 + +This release of {SMProductName} is included with the {SMProductName} Operator 2.6.5 and is supported on {product-title} 4.14 and later. This release addresses Common Vulnerabilities and Exposures (CVEs). + +[id=ossm-release-2-4-14-components_{context}] +== Component updates + +|=== +|Component |Version + +|Istio +|1.16.7 + +|Envoy Proxy +|1.24.12 + +|Kiali Server +|1.65.19 +|=== + +[id="ossm-fixed-issues-2-4-14_{context}"] +== Fixed issues + +* https://issues.redhat.com/browse/OSSM-8608[OSSM-8608] Previously, terminating a Container Network Interface (CNI) pod during the installation phase while copying binaries could leave Istio-CNI temporary files on the node file system. Repeated occurrences could eventually fill up the node disk space. Now, while terminating a CNI pod during the installation phase, existing temporary files are deleted before copying the CNI binary, ensuring that only one temporary file per Istio version exists on the node file system. \ No newline at end of file diff --git a/modules/ossm-release-2-5-8.adoc b/modules/ossm-release-2-5-8.adoc new file mode 100644 index 000000000000..916f0a71f96c --- /dev/null +++ b/modules/ossm-release-2-5-8.adoc @@ -0,0 +1,31 @@ +//// +Module included in the following assemblies: +* service_mesh/v2x/servicemesh-release-notes.adoc +//// + +:_mod-docs-content-type: REFERENCE +[id="ossm-release-2-5-8_{context}"] += {SMProductName} version 2.5.8 + +This release of {SMProductName} is included with the {SMProductName} Operator 2.6.5 and is supported on {product-title} 4.14 and later. This release addresses Common Vulnerabilities and Exposures (CVEs). 
+ +[id=ossm-release-2-5-8-components_{context}] +== Component updates + +|=== +|Component |Version + +|Istio +|1.18.7 + +|Envoy Proxy +|1.26.8 + +|Kiali Server +|1.73.18 +|=== + +[id="ossm-fixed-issues-2-5-8_{context}"] +== Fixed issues + +* https://issues.redhat.com/browse/OSSM-8608[OSSM-8608] Previously, terminating a Container Network Interface (CNI) pod during the installation phase while copying binaries could leave Istio-CNI temporary files on the node file system. Repeated occurrences could eventually fill up the node disk space. Now, while terminating a CNI pod during the installation phase, existing temporary files are deleted before copying the CNI binary, ensuring that only one temporary file per Istio version exists on the node file system. \ No newline at end of file diff --git a/modules/ossm-release-2-6-0.adoc b/modules/ossm-release-2-6-0.adoc index 2689207c4d08..2cba06e2adc6 100644 --- a/modules/ossm-release-2-6-0.adoc +++ b/modules/ossm-release-2-6-0.adoc @@ -98,13 +98,6 @@ You can expose tracing data to the {TempoName} by appending a named element and You can create a {OTELName} instance in a mesh namespace and configure it to send tracing data to a tracing platform backend service. -//Still true for 2.6 -//Asked in forum-ocp-tracing channel 06/24/2024, verified 06/25/2024 -[NOTE] -==== -{TempoName} Stack is not supported on {ibm-z-title}. -==== - [id="jaeger-default-setting-change-ossm-2-6-0_{context}"] == {JaegerName} default setting change //also included in "Upgrading --> Upgrading 2.5 to 2.6" but added here for increased visibility. diff --git a/modules/ossm-release-2-6-4.adoc b/modules/ossm-release-2-6-4.adoc index 551f286c7602..00949da4022b 100644 --- a/modules/ossm-release-2-6-4.adoc +++ b/modules/ossm-release-2-6-4.adoc @@ -11,8 +11,6 @@ This release of {SMProductName} updates the {SMProductName} Operator version to This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on {product-title} 4.14 and later. -include::snippets/ossm-current-version-support-snippet.adoc[] - The most current version of the {KialiProduct} can be used with all supported versions of {SMProductName}. The version of {SMProductShortName} is specified by using the `ServiceMeshControlPlane` resource. The version of {SMProductShortName} automatically ensures a compatible version of Kiali. [id=ossm-release-2-6-4-components_{context}] diff --git a/modules/ossm-release-2-6-5.adoc b/modules/ossm-release-2-6-5.adoc new file mode 100644 index 000000000000..a3473c5a8bc7 --- /dev/null +++ b/modules/ossm-release-2-6-5.adoc @@ -0,0 +1,42 @@ +//// +Module included in the following assemblies: +* service_mesh/v2x/servicemesh-release-notes.adoc +//// + +:_mod-docs-content-type: REFERENCE +[id="ossm-release-2-6-5_{context}"] += {SMProductName} version 2.6.5 + +This release of {SMProductName} updates the {SMProductName} Operator version to 2.6.5, and includes the following `ServiceMeshControlPlane` resource version updates: 2.6.5, 2.5.8, and 2.4.14. + +This release addresses Common Vulnerabilities and Exposures (CVEs) and is supported on {product-title} 4.14 and later. + +include::snippets/ossm-current-version-support-snippet.adoc[] + +You can use the most current version of the {KialiProduct} with all supported versions of {SMProductName}. The version of {SMProductShortName} is specified by using the `ServiceMeshControlPlane` resource. The version of {SMProductShortName} automatically ensures a compatible version of Kiali. 
+ +[id=ossm-release-2-6-5-components_{context}] +== Component updates + +|=== +|Component |Version + +|Istio +|1.20.8 + +|Envoy Proxy +|1.28.7 + +|Kiali Server +|1.73.18 +|=== + +[id="ossm-new-features-2-6-5_{context}"] +== New features + +* {TempoName} Stack is now supported on {ibm-z-title}. + +[id="ossm-fixed-issues-2-6-5_{context}"] +== Fixed issues + +* https://issues.redhat.com/browse/OSSM-8608[OSSM-8608] Previously, terminating a Container Network Interface (CNI) pod during the installation phase while copying binaries could leave Istio-CNI temporary files on the node file system. Repeated occurrences could eventually fill up the node disk space. Now, while terminating a CNI pod during the installation phase, existing temporary files are deleted before copying the CNI binary, ensuring that only one temporary file per Istio version exists on the node file system. \ No newline at end of file diff --git a/service_mesh/v2x/servicemesh-release-notes.adoc b/service_mesh/v2x/servicemesh-release-notes.adoc index e279947afb1b..cf3b6a45827d 100644 --- a/service_mesh/v2x/servicemesh-release-notes.adoc +++ b/service_mesh/v2x/servicemesh-release-notes.adoc @@ -10,6 +10,12 @@ toc::[] include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1] +include::modules/ossm-release-2-6-5.adoc[leveloffset=+1] + +include::modules/ossm-release-2-5-8.adoc[leveloffset=+1] + +include::modules/ossm-release-2-4-14.adoc[leveloffset=+1] + include::modules/ossm-release-2-6-4.adoc[leveloffset=+1] include::modules/ossm-release-2-5-7.adoc[leveloffset=+1] From 07b05663119d6e21b4b3b43b6137b7a8bdd532d5 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Tue, 4 Feb 2025 15:12:12 +0000 Subject: [PATCH 125/669] TELCODOCS-2051 NUMA-aware scheduling: clarify the operator configuration --- modules/cnf-creating-nrop-cr.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/cnf-creating-nrop-cr.adoc b/modules/cnf-creating-nrop-cr.adoc index 598e7b4d6df6..d53143f378bc 100644 --- a/modules/cnf-creating-nrop-cr.adoc +++ b/modules/cnf-creating-nrop-cr.adoc @@ -33,7 +33,7 @@ spec: pools.operator.machineconfiguration.openshift.io/worker: "" <1> ---- + -<1> This should match the `MachineConfigPool` that you want to configure the NUMA Resources Operator on. For example, you might have created a `MachineConfigPool` named `worker-cnf` that designates a set of nodes expected to run telecommunications workloads. +<1> This must match the `MachineConfigPool` resource that you want to configure the NUMA Resources Operator on. For example, you might have created a `MachineConfigPool` resource named `worker-cnf` that designates a set of nodes expected to run telecommunications workloads. Each `NodeGroup` must match exactly one `MachineConfigPool`. Configurations where `NodeGroup` matches more than one `MachineConfigPool` are not supported. .. 
Create the `NUMAResourcesOperator` CR by running the following command: + From b3201ad508f3d8d8f26a3605e1f0844a315f0de1 Mon Sep 17 00:00:00 2001 From: Andrea Hoffer Date: Thu, 9 Jan 2025 13:07:59 -0500 Subject: [PATCH 126/669] OSDOCS#12867: Docs for hibernating a cluster --- _topic_maps/_topic_map.yml | 2 + backup_and_restore/hibernating-cluster.adoc | 41 +++++++ modules/hibernating-cluster-about.adoc | 20 ++++ modules/hibernating-cluster-hibernate.adoc | 97 ++++++++++++++++ modules/hibernating-cluster-resume.adoc | 118 ++++++++++++++++++++ 5 files changed, 278 insertions(+) create mode 100644 backup_and_restore/hibernating-cluster.adoc create mode 100644 modules/hibernating-cluster-about.adoc create mode 100644 modules/hibernating-cluster-hibernate.adoc create mode 100644 modules/hibernating-cluster-resume.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 63015798ed0e..0e08179bc7f4 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -3540,6 +3540,8 @@ Topics: File: graceful-cluster-shutdown - Name: Restarting a cluster gracefully File: graceful-cluster-restart +- Name: Hibernating a cluster + File: hibernating-cluster - Name: OADP Application backup and restore Dir: application_backup_and_restore Topics: diff --git a/backup_and_restore/hibernating-cluster.adoc b/backup_and_restore/hibernating-cluster.adoc new file mode 100644 index 000000000000..73a84eba3235 --- /dev/null +++ b/backup_and_restore/hibernating-cluster.adoc @@ -0,0 +1,41 @@ +:_mod-docs-content-type: ASSEMBLY +[id="hibernating-cluster"] += Hibernating an {product-title} cluster +include::_attributes/common-attributes.adoc[] +:context: hibernating-cluster + +toc::[] + +You can hibernate your {product-title} cluster for up to 90 days. + +// About hibernating a cluster +include::modules/hibernating-cluster-about.adoc[leveloffset=+1] + +[id="hibernating-cluster_prerequisites_{context}"] +== Prerequisites + +* Take an xref:../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backing-up-etcd-data_backup-etcd[etcd backup] prior to hibernating the cluster. ++ +[IMPORTANT] +==== +It is important to take an etcd backup before hibernating so that your cluster can be restored if you encounter any issues when resuming the cluster. + +For example, the following conditions can cause the resumed cluster to malfunction: + +* etcd data corruption during hibernation +* Node failure due to hardware +* Network connectivity issues + +If your cluster fails to recover, follow the steps to xref:../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore to a previous cluster state]. 
+====
+
+// Hibernating a cluster
+include::modules/hibernating-cluster-hibernate.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[Backing up etcd]
+
+// Resuming a hibernated cluster
+include::modules/hibernating-cluster-resume.adoc[leveloffset=+1]
diff --git a/modules/hibernating-cluster-about.adoc b/modules/hibernating-cluster-about.adoc
new file mode 100644
index 000000000000..7b88dcc63c33
--- /dev/null
+++ b/modules/hibernating-cluster-about.adoc
@@ -0,0 +1,20 @@
+// Module included in the following assemblies:
+//
+// * backup_and_restore/hibernating-cluster.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="hibernating-cluster-about_{context}"]
+= About cluster hibernation
+
+{product-title} clusters can be hibernated to save money on cloud hosting costs. You can hibernate your {product-title} cluster for up to 90 days and expect it to resume successfully.
+
+You must wait at least 24 hours after cluster installation before hibernating your cluster to allow for the first certificate rotation.
+
+[IMPORTANT]
+====
+If you must hibernate your cluster before the 24 hour certificate rotation, use the following procedure instead: link:https://www.redhat.com/en/blog/enabling-openshift-4-clusters-to-stop-and-resume-cluster-vms[Enabling OpenShift 4 Clusters to Stop and Resume Cluster VMs].
+==== 
+
+When hibernating a cluster, you must hibernate all cluster nodes. It is not supported to suspend only certain nodes.
+
+After resuming, it can take up to 45 minutes for the cluster to become ready.
diff --git a/modules/hibernating-cluster-hibernate.adoc b/modules/hibernating-cluster-hibernate.adoc
new file mode 100644
index 000000000000..cc1a7eb97f38
--- /dev/null
+++ b/modules/hibernating-cluster-hibernate.adoc
@@ -0,0 +1,97 @@
+// Module included in the following assemblies:
+//
+// * backup_and_restore/hibernating-cluster.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hibernating-cluster-hibernate_{context}"]
+= Hibernating a cluster
+
+You can hibernate a cluster for up to 90 days. The cluster can recover if certificates expire while the cluster is in hibernation.
+
+.Prerequisites
+
+* The cluster has been running for at least 24 hours to allow the first certificate rotation to complete.
++
+[IMPORTANT]
+====
+If you must hibernate your cluster before the 24 hour certificate rotation, use the following procedure instead: link:https://www.redhat.com/en/blog/enabling-openshift-4-clusters-to-stop-and-resume-cluster-vms[Enabling OpenShift 4 Clusters to Stop and Resume Cluster VMs].
+====
+
+* You have taken an etcd backup.
+
+* You have access to the cluster as a user with the `cluster-admin` role.
+
+.Procedure
+
+. Confirm that your cluster has been installed for at least 24 hours.
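++
+For example, you can check when the cluster was created by running the following command:
++
+[source,terminal]
+----
+$ oc get clusterversion version -o jsonpath='{.metadata.creationTimestamp}'
+----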
+
+. Ensure that all nodes are in a good state by running the following command:
++
+[source,terminal]
+----
+$ oc get nodes
+----
++
+.Example output
+[source,terminal]
+----
+NAME                                       STATUS   ROLES                  AGE   VERSION
+ci-ln-812tb4k-72292-8bcj7-master-0         Ready    control-plane,master   32m   v1.31.3
+ci-ln-812tb4k-72292-8bcj7-master-1         Ready    control-plane,master   32m   v1.31.3
+ci-ln-812tb4k-72292-8bcj7-master-2         Ready    control-plane,master   32m   v1.31.3
+ci-ln-812tb4k-72292-8bcj7-worker-a-zhdvk   Ready    worker                 19m   v1.31.3
+ci-ln-812tb4k-72292-8bcj7-worker-b-9hrmv   Ready    worker                 19m   v1.31.3
+ci-ln-812tb4k-72292-8bcj7-worker-c-q8mw2   Ready    worker                 19m   v1.31.3
+----
++
+All nodes should show `Ready` in the `STATUS` column.
+
+. Ensure that all cluster Operators are in a good state by running the following command:
++
+[source,terminal]
+----
+$ oc get clusteroperators
+----
++
+.Example output
+[source,terminal]
+----
+NAME                       VERSION    AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
+authentication             4.18.0-0   True        False         False      51m
+baremetal                  4.18.0-0   True        False         False      72m
+cloud-controller-manager   4.18.0-0   True        False         False      75m
+cloud-credential           4.18.0-0   True        False         False      77m
+cluster-api                4.18.0-0   True        False         False      42m
+cluster-autoscaler         4.18.0-0   True        False         False      72m
+config-operator            4.18.0-0   True        False         False      72m
+console                    4.18.0-0   True        False         False      55m
+...
+----
++
+All cluster Operators should show `AVAILABLE`=`True`, `PROGRESSING`=`False`, and `DEGRADED`=`False`.
+
+. Ensure that all machine config pools are in a good state by running the following command:
++
+[source,terminal]
+----
+$ oc get mcp
+----
++
+.Example output
+[source,terminal]
+----
+NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
+master   rendered-master-87871f187930e67233c837e1d07f49c7   True      False      False      3              3                   3                     0                      96m
+worker   rendered-worker-3c4c459dc5d90017983d7e72928b8aed   True      False      False      3              3                   3                     0                      96m
+----
++
+All machine config pools should show `UPDATING`=`False` and `DEGRADED`=`False`.
+
+. Stop the cluster virtual machines:
++
+Use the tools native to your cluster's cloud environment to shut down the cluster's virtual machines.
++
+[IMPORTANT]
+====
+If you use a bastion virtual machine, do not shut down this virtual machine.
+====
diff --git a/modules/hibernating-cluster-resume.adoc b/modules/hibernating-cluster-resume.adoc
new file mode 100644
index 000000000000..8d490adcc3fe
--- /dev/null
+++ b/modules/hibernating-cluster-resume.adoc
@@ -0,0 +1,118 @@
+// Module included in the following assemblies:
+//
+// * backup_and_restore/hibernating-cluster.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hibernating-cluster-resume_{context}"]
+= Resuming a hibernated cluster
+
+When you resume a hibernated cluster within 90 days, you might have to approve certificate signing requests (CSRs) for the nodes to become ready.
+
+It can take around 45 minutes for the cluster to resume, depending on the size of your cluster.
+
+.Prerequisites
+
+* You hibernated your cluster less than 90 days ago.
+* You have access to the cluster as a user with the `cluster-admin` role.
+
+.Procedure
+
+. Within 90 days of cluster hibernation, resume the cluster virtual machines:
++
+Use the tools native to your cluster's cloud environment to resume the cluster's virtual machines. See the tip at the end of this procedure for an example.
+
+. Wait about 5 minutes, depending on the number of nodes in your cluster.
+
+. Approve CSRs for the nodes:
+
+.. 
Check that there is a CSR for each node in the `NotReady` state:
++
+[source,terminal]
+----
+$ oc get csr
+----
++
+.Example output
+[source,terminal]
+----
+NAME        AGE   SIGNERNAME                                    REQUESTOR                                                                   REQUESTEDDURATION   CONDITION
+csr-4dwsd   37m   kubernetes.io/kube-apiserver-client           system:node:ci-ln-812tb4k-72292-8bcj7-worker-c-q8mw2                        24h                 Pending
+csr-4vrbr   49m   kubernetes.io/kube-apiserver-client           system:node:ci-ln-812tb4k-72292-8bcj7-master-1                              24h                 Pending
+csr-4wk5x   51m   kubernetes.io/kubelet-serving                 system:node:ci-ln-812tb4k-72292-8bcj7-master-1                                                  Pending
+csr-84vb6   51m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper                       Pending
+----
+
+.. Approve each valid CSR by running the following command:
++
+[source,terminal]
+----
+$ oc adm certificate approve <csr_name>
+----
+
+.. Verify that all necessary CSRs were approved by running the following command:
++
+[source,terminal]
+----
+$ oc get csr
+----
++
+.Example output
+[source,terminal]
+----
+NAME        AGE   SIGNERNAME                                    REQUESTOR                                                                   REQUESTEDDURATION   CONDITION
+csr-4dwsd   37m   kubernetes.io/kube-apiserver-client           system:node:ci-ln-812tb4k-72292-8bcj7-worker-c-q8mw2                        24h                 Approved,Issued
+csr-4vrbr   49m   kubernetes.io/kube-apiserver-client           system:node:ci-ln-812tb4k-72292-8bcj7-master-1                              24h                 Approved,Issued
+csr-4wk5x   51m   kubernetes.io/kubelet-serving                 system:node:ci-ln-812tb4k-72292-8bcj7-master-1                                                  Approved,Issued
+csr-84vb6   51m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper                      Approved,Issued
+----
++
+CSRs should show `Approved,Issued` in the `CONDITION` column.
+
+. Verify that all nodes now show as ready by running the following command:
++
+[source,terminal]
+----
+$ oc get nodes
+----
++
+.Example output
+[source,terminal]
+----
+NAME                                       STATUS   ROLES                  AGE   VERSION
+ci-ln-812tb4k-72292-8bcj7-master-0         Ready    control-plane,master   32m   v1.31.3
+ci-ln-812tb4k-72292-8bcj7-master-1         Ready    control-plane,master   32m   v1.31.3
+ci-ln-812tb4k-72292-8bcj7-master-2         Ready    control-plane,master   32m   v1.31.3
+ci-ln-812tb4k-72292-8bcj7-worker-a-zhdvk   Ready    worker                 19m   v1.31.3
+ci-ln-812tb4k-72292-8bcj7-worker-b-9hrmv   Ready    worker                 19m   v1.31.3
+ci-ln-812tb4k-72292-8bcj7-worker-c-q8mw2   Ready    worker                 19m   v1.31.3
+----
++
+All nodes should show `Ready` in the `STATUS` column. It might take a few minutes for all nodes to become ready after approving the CSRs.
+
+. Wait for cluster Operators to restart to load the new certificates.
++
+This might take 5 or 10 minutes.
+
+. Verify that all cluster Operators are in a good state by running the following command:
++
+[source,terminal]
+----
+$ oc get clusteroperators
+----
++
+.Example output
+[source,terminal]
+----
+NAME                       VERSION    AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
+authentication             4.18.0-0   True        False         False      51m
+baremetal                  4.18.0-0   True        False         False      72m
+cloud-controller-manager   4.18.0-0   True        False         False      75m
+cloud-credential           4.18.0-0   True        False         False      77m
+cluster-api                4.18.0-0   True        False         False      42m
+cluster-autoscaler         4.18.0-0   True        False         False      72m
+config-operator            4.18.0-0   True        False         False      72m
+console                    4.18.0-0   True        False         False      55m
+...
+----
++
+All cluster Operators should show `AVAILABLE`=`True`, `PROGRESSING`=`False`, and `DEGRADED`=`False`.
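+
+[TIP]
+====
+The exact commands to stop and resume the virtual machines depend on your cloud provider. As a minimal sketch, assuming a cluster on Amazon Web Services (AWS) whose instances carry the default `kubernetes.io/cluster/<infra_id>: owned` resource tag, you might start all of the cluster instances by running the following command:
+
+[source,terminal]
+----
+$ aws ec2 start-instances --instance-ids $(aws ec2 describe-instances \
+    --filters "Name=tag:kubernetes.io/cluster/<infra_id>,Values=owned" \
+    --query "Reservations[].Instances[].InstanceId" --output text) <1>
+----
+<1> Replace `<infra_id>` with the infrastructure ID for your cluster. This sketch assumes an AWS cluster; adapt it to the tooling for your cloud environment.
+====
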
From 5ea9892986e6779ccb459954ff52f57fd453d00a Mon Sep 17 00:00:00 2001 From: Apurva Bhide Date: Tue, 10 Dec 2024 01:09:02 +0530 Subject: [PATCH 127/669] Adding structure for OADP and 3scale user story adjusted leveloffset Add steps for all modules xref and basic suggestions from the PR Peer review suggestions - part 1 Trying to fix xref issue Adding xref in assembly Removed xref for now Implementing QE suggestions Fixing callout issues Minor fixes to replace deploymentconfig with deployment Peer review and minor improvements Minor fix --- _topic_maps/_topic_map.yml | 5 + .../oadp-scheduling-backups-doc.adoc | 2 +- .../oadp-3scale/_attributes | 1 + ...up-and-restoring-3scale-by-using-oadp.adoc | 43 ++++ .../oadp-3scale/images | 1 + .../oadp-3scale/modules | 1 + .../oadp-3scale/snippets | 1 + modules/backing-up-the-3scale-operator.adoc | 123 ++++++++++++ ...backing-up-the-backend-redis-database.adoc | 91 +++++++++ modules/backing-up-the-mysql-database.adoc | 144 ++++++++++++++ ...ating-the-data-protection-application.adoc | 65 +++++++ .../restoring-the-backend-redis-database.adoc | 84 ++++++++ modules/restoring-the-mysql-database.adoc | 183 ++++++++++++++++++ .../restoring-the-secrets-and-apimanager.adoc | 174 +++++++++++++++++ ...up-the-3scale-operator-and-deployment.adoc | 60 ++++++ 15 files changed, 977 insertions(+), 1 deletion(-) create mode 120000 backup_and_restore/application_backup_and_restore/oadp-3scale/_attributes create mode 100644 backup_and_restore/application_backup_and_restore/oadp-3scale/backing-up-and-restoring-3scale-by-using-oadp.adoc create mode 120000 backup_and_restore/application_backup_and_restore/oadp-3scale/images create mode 120000 backup_and_restore/application_backup_and_restore/oadp-3scale/modules create mode 120000 backup_and_restore/application_backup_and_restore/oadp-3scale/snippets create mode 100644 modules/backing-up-the-3scale-operator.adoc create mode 100644 modules/backing-up-the-backend-redis-database.adoc create mode 100644 modules/backing-up-the-mysql-database.adoc create mode 100644 modules/creating-the-data-protection-application.adoc create mode 100644 modules/restoring-the-backend-redis-database.adoc create mode 100644 modules/restoring-the-mysql-database.adoc create mode 100644 modules/restoring-the-secrets-and-apimanager.adoc create mode 100644 modules/scaling-up-the-3scale-operator-and-deployment.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 0e08179bc7f4..bcf10252c48a 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -3634,6 +3634,11 @@ Topics: Topics: - Name: Backing up applications on AWS STS using OADP File: oadp-aws-sts + - Name: OADP and 3scale + Dir: oadp-3scale + Topics: + - Name: Backing up and restoring 3scale by using OADP + File: backing-up-and-restoring-3scale-by-using-oadp - Name: OADP Data Mover Dir: installing Topics: diff --git a/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/oadp-scheduling-backups-doc.adoc b/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/oadp-scheduling-backups-doc.adoc index f37336ee6a4e..135e8f5f7a3d 100644 --- a/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/oadp-scheduling-backups-doc.adoc +++ b/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/oadp-scheduling-backups-doc.adoc @@ -83,4 +83,4 @@ Enter the minutes value between quotation marks (`" "`). 
 [source,terminal]
 ----
 $ oc get schedule -n openshift-adp -o jsonpath='{.status.phase}'
-----
+----
\ No newline at end of file
diff --git a/backup_and_restore/application_backup_and_restore/oadp-3scale/_attributes b/backup_and_restore/application_backup_and_restore/oadp-3scale/_attributes
new file mode 120000
index 000000000000..bf7c2529fdb4
--- /dev/null
+++ b/backup_and_restore/application_backup_and_restore/oadp-3scale/_attributes
@@ -0,0 +1 @@
+../../../_attributes/
\ No newline at end of file
diff --git a/backup_and_restore/application_backup_and_restore/oadp-3scale/backing-up-and-restoring-3scale-by-using-oadp.adoc b/backup_and_restore/application_backup_and_restore/oadp-3scale/backing-up-and-restoring-3scale-by-using-oadp.adoc
new file mode 100644
index 000000000000..7381a8a42e7a
--- /dev/null
+++ b/backup_and_restore/application_backup_and_restore/oadp-3scale/backing-up-and-restoring-3scale-by-using-oadp.adoc
@@ -0,0 +1,43 @@
+:_mod-docs-content-type: ASSEMBLY
+[id="backing-up-and-restoring-3scale-by-using-oadp_{context}"]
+= Backing up and restoring 3scale by using OADP
+include::_attributes/common-attributes.adoc[]
+:context: backing-up-and-restoring-3scale-by-using-oadp
+
+toc::[]
+
+With Red Hat 3scale API Management (APIM), you can manage your APIs for internal or external users. Share, secure, distribute, control, and monetize your APIs on an infrastructure platform built with performance, customer control, and future growth in mind.
+You can deploy 3scale components on-premises, in the cloud, as a managed service, or in any combination based on your requirements.
+
+[NOTE]
+====
+In this example, the non-service-affecting approach is used to back up and restore 3scale on-cluster storage by using the {oadp-first} Operator.
+Additionally, ensure that you are restoring 3scale on the same cluster where it was backed up. If you want to restore 3scale on a different cluster, ensure that both clusters are using the same custom domain.
+====
+
+.Prerequisites
+
+* You installed and configured Red Hat 3scale. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/installing_red_hat_3scale_api_management[Red Hat 3scale API Management].
+
+include::modules/creating-the-data-protection-application.adoc[leveloffset=+1]
+[role="_additional-resources"]
+.Additional resources
+* xref:../../../backup_and_restore/application_backup_and_restore/installing/installing-oadp-aws.adoc#oadp-installing-dpa_installing-oadp-aws[Installing the Data Protection Application]
+
+include::modules/backing-up-the-3scale-operator.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+* xref:../../../backup_and_restore/application_backup_and_restore/backing_up_and_restoring/oadp-creating-backup-cr.adoc#oadp-creating-backup-cr-doc[Creating a Backup CR]
+
+include::modules/backing-up-the-mysql-database.adoc[leveloffset=+1]
+
+include::modules/backing-up-the-backend-redis-database.adoc[leveloffset=+1]
+
+include::modules/restoring-the-secrets-and-apimanager.adoc[leveloffset=+1]
+
+include::modules/restoring-the-mysql-database.adoc[leveloffset=+1]
+
+include::modules/restoring-the-backend-redis-database.adoc[leveloffset=+1]
+
+include::modules/scaling-up-the-3scale-operator-and-deployment.adoc[leveloffset=+1]
\ No newline at end of file
diff --git a/backup_and_restore/application_backup_and_restore/oadp-3scale/images b/backup_and_restore/application_backup_and_restore/oadp-3scale/images
new file mode 120000
index 000000000000..4399cbb3c0f3
--- /dev/null
+++ b/backup_and_restore/application_backup_and_restore/oadp-3scale/images
@@ -0,0 +1 @@
+../../../images/
\ No newline at end of file
diff --git a/backup_and_restore/application_backup_and_restore/oadp-3scale/modules b/backup_and_restore/application_backup_and_restore/oadp-3scale/modules
new file mode 120000
index 000000000000..7e8b50bee77a
--- /dev/null
+++ b/backup_and_restore/application_backup_and_restore/oadp-3scale/modules
@@ -0,0 +1 @@
+../../../modules/
\ No newline at end of file
diff --git a/backup_and_restore/application_backup_and_restore/oadp-3scale/snippets b/backup_and_restore/application_backup_and_restore/oadp-3scale/snippets
new file mode 120000
index 000000000000..ce62fd7c41e2
--- /dev/null
+++ b/backup_and_restore/application_backup_and_restore/oadp-3scale/snippets
@@ -0,0 +1 @@
+../../../snippets/
\ No newline at end of file
diff --git a/modules/backing-up-the-3scale-operator.adoc b/modules/backing-up-the-3scale-operator.adoc
new file mode 100644
index 000000000000..2360281b88d5
--- /dev/null
+++ b/modules/backing-up-the-3scale-operator.adoc
@@ -0,0 +1,123 @@
+:_mod-docs-content-type: PROCEDURE
+
+//included in backing-up-and-restoring-3scale-by-using-oadp.adoc assembly
+
+[id="backing-up-the-3scale-operator_{context}"]
+= Backing up the 3scale Operator
+
+You can back up the Operator resources, and the Secret and APIManager custom resources (CRs). For more information, see "Creating a Backup CR".
+
+.Prerequisites
+
+* You created the Data Protection Application (DPA).
+
+.Procedure
+
+. Back up the Operator resources, such as `operatorgroup`, `namespaces`, and `subscriptions`, by creating a YAML file with the following configuration:
++
+.Example `backup.yaml` file
++
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Backup
+metadata:
+  name: operator-install-backup
+  namespace: openshift-adp
+spec:
+  csiSnapshotTimeout: 10m0s
+  defaultVolumesToFsBackup: false
+  includedNamespaces:
+  - threescale <1>
+  includedResources:
+  - operatorgroups
+  - subscriptions
+  - namespaces
+  itemOperationTimeout: 1h0m0s
+  snapshotMoveData: false
+  ttl: 720h0m0s
+----
+<1> Namespace where the 3scale Operator is installed.
+
+[NOTE]
+====
+You can also back up and restore the `ReplicationController`, `Deployment`, and `Pod` objects to ensure that all manually set environments are backed up and restored. This does not affect the flow of restoration.
+====
+
+. Create the backup CR by running the following command:
++
+[source,terminal]
+----
+$ oc create -f backup.yaml
+----
+
+. Back up the `Secret` resources by creating a YAML file with the following configuration:
++
+.Example `backup-secret.yaml` file
++
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Backup
+metadata:
+  name: operator-resources-secrets
+  namespace: openshift-adp
+spec:
+  csiSnapshotTimeout: 10m0s
+  defaultVolumesToFsBackup: false
+  includedNamespaces:
+  - threescale
+  includedResources:
+  - secrets
+  itemOperationTimeout: 1h0m0s
+  labelSelector:
+    matchLabels:
+      app: 3scale-api-management
+  snapshotMoveData: false
+  snapshotVolumes: false
+  ttl: 720h0m0s
+----
+
+. Create the backup CR for the secrets by running the following command:
++
+[source,terminal]
+----
+$ oc create -f backup-secret.yaml
+----
+
+. Back up the `APIManager` CR by creating a YAML file with the following configuration:
++
+.Example `backup-apimanager.yaml` file
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Backup
+metadata:
+  name: operator-resources-apim
+  namespace: openshift-adp
+spec:
+  csiSnapshotTimeout: 10m0s
+  defaultVolumesToFsBackup: false
+  includedNamespaces:
+  - threescale
+  includedResources:
+  - apimanagers
+  itemOperationTimeout: 1h0m0s
+  snapshotMoveData: false
+  snapshotVolumes: false
+  storageLocation: ts-dpa-1
+  ttl: 720h0m0s
+  volumeSnapshotLocations:
+  - ts-dpa-1
+----
+
+. Create the backup CR for the APIManager by running the following command:
++
+[source,terminal]
+----
+$ oc create -f backup-apimanager.yaml
+----
+
+.Next steps
+
+* Back up the `mysql` database.
\ No newline at end of file
diff --git a/modules/backing-up-the-backend-redis-database.adoc b/modules/backing-up-the-backend-redis-database.adoc
new file mode 100644
index 000000000000..5848d9f81e9d
--- /dev/null
+++ b/modules/backing-up-the-backend-redis-database.adoc
@@ -0,0 +1,91 @@
+:_mod-docs-content-type: PROCEDURE
+
+//included in backing-up-and-restoring-3scale-by-using-oadp.adoc assembly
+
+[id="backing-up-the-backend-redis-database_{context}"]
+= Backing up the back-end Redis database
+
+You can back up the Redis database by adding the required annotations and by listing which resources to back up using the `includedResources` parameter.
+
+.Prerequisites
+
+* You backed up the 3scale Operator.
+* You backed up the `mysql` database.
+* The Redis queues have been drained before performing the backup.
+
+.Procedure
+
+. Edit the annotations on the `backend-redis` deployment by running the following command:
++
+[source,terminal]
+----
+$ oc edit deployment backend-redis -n threescale
+----
+
+. Add the following annotations:
++
+[source,yaml]
+----
+annotations:
+  post.hook.backup.velero.io/command: >-
+    ["/bin/bash", "-c", "redis-cli CONFIG SET auto-aof-rewrite-percentage
+    100"]
+  pre.hook.backup.velero.io/command: >-
+    ["/bin/bash", "-c", "redis-cli CONFIG SET auto-aof-rewrite-percentage
+    0"]
+----
+
+. Create a YAML file with the following configuration to back up the Redis database:
++
+.Example `redis-backup.yaml` file
++
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Backup
+metadata:
+  name: redis-backup
+  namespace: openshift-adp
+spec:
+  csiSnapshotTimeout: 10m0s
+  defaultVolumesToFsBackup: true
+  includedNamespaces:
+  - threescale
+  includedResources:
+  - deployment
+  - pods
+  - replicationcontrollers
+  - persistentvolumes
+  - persistentvolumeclaims
+  itemOperationTimeout: 1h0m0s
+  labelSelector:
+    matchLabels:
+      app: 3scale-api-management
+      threescale_component: backend
+      threescale_component_element: redis
+  snapshotMoveData: false
+  snapshotVolumes: false
+  ttl: 720h0m0s
+----
+
+. Back up the Redis database by running the following command:
++
+[source,terminal]
+----
+$ oc create -f redis-backup.yaml
+----
+
+.Verification
+
+* Verify that the Redis backup is completed by running the following command:
++
+[source,terminal]
+----
+$ oc get backups.velero.io
+----
+
+.Next steps
+
+* Restore the Secrets and APIManager CRs.
\ No newline at end of file
diff --git a/modules/backing-up-the-mysql-database.adoc b/modules/backing-up-the-mysql-database.adoc
new file mode 100644
index 000000000000..e7470f470f38
--- /dev/null
+++ b/modules/backing-up-the-mysql-database.adoc
@@ -0,0 +1,144 @@
+:_mod-docs-content-type: PROCEDURE
+
+//included in backing-up-and-restoring-3scale-by-using-oadp.adoc assembly
+
+[id="backing-up-the-mysql-database_{context}"]
+= Backing up the mysql database
+
+You can back up the `mysql` database by creating and attaching a persistent volume claim (PVC) to include the dumped data in the specified path.
+
+.Prerequisites
+
+* You have backed up the 3scale Operator.
+
+.Procedure
+
+. Create a YAML file with the following configuration for adding an additional PVC:
++
+.Example `ts_pvc.yaml` file
+[source,yaml]
+----
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: example-claim
+  namespace: threescale
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
+  storageClassName: gp3-csi
+  volumeMode: Filesystem
+----
+
+. Create the additional PVC by running the following command:
++
+[source,terminal]
+----
+$ oc create -f ts_pvc.yaml
+----
+
+. Attach the PVC to the system database pod by editing the system database deployment to use the `mysql` dump:
++
+[source,terminal]
+----
+$ oc edit deployment system-mysql -n threescale
+----
++
+[source,yaml]
+----
+    volumeMounts:
+    - name: example-claim
+      mountPath: /var/lib/mysqldump/data
+    - name: mysql-storage
+      mountPath: /var/lib/mysql/data
+    - name: mysql-extra-conf
+      mountPath: /etc/my-extra.d
+    - name: mysql-main-conf
+      mountPath: /etc/my-extra
+    ...
+    serviceAccount: amp
+    volumes:
+    - name: example-claim
+      persistentVolumeClaim:
+        claimName: example-claim <1>
+    ...
+----
+<1> The PVC that contains the dumped data.
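+
+. Wait until the `system-mysql` pods are redeployed with the new volume mount by running the following command:
++
+[source,terminal]
+----
+$ oc rollout status deployment/system-mysql -n threescale
+----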
+
+. Create a YAML file with the following configuration to back up the `mysql` database:
++
+.Example `mysql.yaml` file
++
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Backup
+metadata:
+  name: mysql-backup
+  namespace: openshift-adp
+spec:
+  csiSnapshotTimeout: 10m0s
+  defaultVolumesToFsBackup: true
+  hooks:
+    resources:
+    - name: dumpdb
+      pre:
+      - exec:
+          command:
+          - /bin/sh
+          - -c
+          - mysqldump -u $MYSQL_USER --password=$MYSQL_PASSWORD system --no-tablespaces
+            > /var/lib/mysqldump/data/dump.sql <1>
+          container: system-mysql
+          onError: Fail
+          timeout: 5m
+  includedNamespaces:
+  - threescale
+  includedResources: <2>
+  - deployment
+  - pods
+  - replicationcontrollers
+  - persistentvolumeclaims
+  - persistentvolumes
+  itemOperationTimeout: 1h0m0s
+  labelSelector:
+    matchLabels:
+      app: 3scale-api-management
+      threescale_component_element: mysql
+  snapshotMoveData: false
+  ttl: 720h0m0s
+----
+<1> A directory where the data is backed up.
+<2> Resources to back up.
+
+. Back up the `mysql` database by running the following command:
++
+[source,terminal]
+----
+$ oc create -f mysql.yaml
+----
+
+.Verification
+
+* Verify that the `mysql` backup is completed by running the following command:
++
+[source,terminal]
+----
+$ oc get backups.velero.io mysql-backup
+----
++
+.Example output
++
+[source,terminal]
+----
+NAME                 STATUS      CREATED   NAMESPACE    POD                    VOLUME          UPLOADER TYPE   STORAGE LOCATION   AGE
+mysql-backup-4g7qn   Completed   30s       threescale   system-mysql-2-9pr44   example-claim   kopia           ts-dpa-1           30s
+mysql-backup-smh85   Completed   23s       threescale   system-mysql-2-9pr44   mysql-storage   kopia           ts-dpa-1           30s
+----
+
+.Next steps
+
+* Back up the back-end Redis database.
\ No newline at end of file
diff --git a/modules/creating-the-data-protection-application.adoc b/modules/creating-the-data-protection-application.adoc
new file mode 100644
index 000000000000..f9c353576f7a
--- /dev/null
+++ b/modules/creating-the-data-protection-application.adoc
@@ -0,0 +1,65 @@
+:_mod-docs-content-type: PROCEDURE
+
+//included in backing-up-and-restoring-3scale-by-using-oadp.adoc assembly
+
+[id="creating-the-data-protection-application_{context}"]
+= Creating the Data Protection Application
+
+You can create a Data Protection Application (DPA) custom resource (CR) for 3scale. For more information on DPA, see "Installing the Data Protection Application".
+
+.Procedure
+
+. Create a YAML file with the following configuration:
++
+.Example `dpa.yaml` file
++
+[source,yaml]
+----
+apiVersion: oadp.openshift.io/v1alpha1
+kind: DataProtectionApplication
+metadata:
+  name: <dpa_sample>
+  namespace: openshift-adp
+spec:
+  configuration:
+    velero:
+      defaultPlugins:
+      - openshift
+      - aws
+      - csi
+      resourceTimeout: 10m
+    nodeAgent:
+      enable: true
+      uploaderType: kopia
+  backupLocations:
+  - name: default
+    velero:
+      provider: aws
+      default: true
+      objectStorage:
+        bucket: <bucket_name> <1>
+        prefix: <prefix> <2>
+      config:
+        region: <region> <3>
+        profile: "default"
+        s3ForcePathStyle: "true"
+        s3Url: <s3_url> <4>
+      credential:
+        key: cloud
+        name: cloud-credentials
+----
+<1> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
+<2> Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
+<3> Specify a region for backup storage location.
+<4> Specify the URL of the object store that you are using to store backups.
+
+. Create the DPA CR by running the following command:
++
+[source,terminal]
+----
+$ oc create -f dpa.yaml
+----
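+
+.Verification
+
+* Verify that the backup storage location defined in the DPA is available by running the following command:
++
+[source,terminal]
+----
+$ oc get backupstoragelocations.velero.io -n openshift-adp
+----
++
+The backup storage location is ready to use when the `PHASE` column shows `Available`.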
+
+.Next steps
+
+* Back up the 3scale Operator.
\ No newline at end of file
diff --git a/modules/restoring-the-backend-redis-database.adoc b/modules/restoring-the-backend-redis-database.adoc
new file mode 100644
index 000000000000..623d7272d0be
--- /dev/null
+++ b/modules/restoring-the-backend-redis-database.adoc
@@ -0,0 +1,84 @@
+:_mod-docs-content-type: PROCEDURE
+
+//included in backing-up-and-restoring-3scale-by-using-oadp.adoc assembly
+
+[id="restoring-the-backend-redis-database_{context}"]
+= Restoring the back-end Redis database
+
+You can restore the back-end Redis database by deleting the deployment and specifying which resources you do not want to restore.
+
+.Prerequisites
+
+* You restored the Secret and APIManager custom resources.
+* You restored the `mysql` database.
+
+.Procedure
+
+. Delete the `backend-redis` deployment by running the following command:
++
+[source,terminal]
+----
+$ oc delete deployment backend-redis -n threescale
+----
++
+.Example output:
++
+[source,terminal]
+----
+Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+
+
+deployment.apps.openshift.io "backend-redis" deleted
+----
+
+. Create a YAML file with the following configuration to restore the Redis database:
++
+.Example `restore-backend.yaml` file
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Restore
+metadata:
+  name: restore-backend
+  namespace: openshift-adp
+spec:
+  backupName: redis-backup
+  excludedResources:
+  - nodes
+  - events
+  - events.events.k8s.io
+  - backups.velero.io
+  - restores.velero.io
+  - resticrepositories.velero.io
+  - csinodes.storage.k8s.io
+  - volumeattachments.storage.k8s.io
+  - backuprepositories.velero.io
+  itemOperationTimeout: 1h0m0s
+  restorePVs: true
+----
+
+. Restore the Redis database by running the following command:
++
+[source,terminal]
+----
+$ oc create -f restore-backend.yaml
+----
+
+.Verification
+
+* Verify that the `PodVolumeRestore` restore is completed by running the following command:
++
+[source,terminal]
+----
+$ oc get podvolumerestores.velero.io -n openshift-adp
+----
++
+.Example output:
++
+[source,terminal]
+----
+NAME                    NAMESPACE    POD                     UPLOADER TYPE   VOLUME                  STATUS      TOTALBYTES   BYTESDONE   AGE
+restore-backend-jmrwx   threescale   backend-redis-1-bsfmv   kopia           backend-redis-storage   Completed   76123        76123       21m
+----
+
+.Next steps
+
+* Scale up the 3scale Operator and deployment.
\ No newline at end of file
diff --git a/modules/restoring-the-mysql-database.adoc b/modules/restoring-the-mysql-database.adoc
new file mode 100644
index 000000000000..7dabb625656b
--- /dev/null
+++ b/modules/restoring-the-mysql-database.adoc
@@ -0,0 +1,183 @@
+:_mod-docs-content-type: PROCEDURE
+
+//included in backing-up-and-restoring-3scale-by-using-oadp.adoc assembly
+
+[id="restoring-the-mysql-database_{context}"]
+= Restoring the mysql database
+
+Restoring the `mysql` database re-creates the following resources:
+
+* The `Pod`, `ReplicationController`, and `Deployment` objects.
+* The additional persistent volumes (PVs) and associated persistent volume claims (PVCs).
+* The `mysql` dump, which the `example-claim` PVC contains.
+
+[WARNING]
+====
+Do not delete the default PV and PVC associated with the database. If you do, your backups are deleted.
+====
+
+.Prerequisites
+
+* You restored the Secret and APIManager custom resources (CRs).
+
+.Procedure
+
+. Scale down the 3scale Operator by running the following command:
++
+[source,terminal]
+----
+$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale
+----
++
+.Example output:
+[source,terminal]
+----
+deployment.apps/threescale-operator-controller-manager-v2 scaled
+----
+
+. Create the following `scaledowndeployment.sh` script to scale down the 3scale deployments:
++
+[source,terminal]
+----
+$ vi ./scaledowndeployment.sh
+----
++
+.Example `scaledowndeployment.sh` script
+[source,terminal]
+----
+for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do
+    oc scale deployment/$deployment --replicas=0 -n threescale
+done
+----
+
+. Scale down all the 3scale deployment components by running the following script:
++
+[source,terminal]
+----
+$ ./scaledowndeployment.sh
+----
++
+.Example output:
+[source,terminal]
+----
+deployment.apps.openshift.io/apicast-production scaled
+deployment.apps.openshift.io/apicast-staging scaled
+deployment.apps.openshift.io/backend-cron scaled
+deployment.apps.openshift.io/backend-listener scaled
+deployment.apps.openshift.io/backend-redis scaled
+deployment.apps.openshift.io/backend-worker scaled
+deployment.apps.openshift.io/system-app scaled
+deployment.apps.openshift.io/system-memcache scaled
+deployment.apps.openshift.io/system-mysql scaled
+deployment.apps.openshift.io/system-redis scaled
+deployment.apps.openshift.io/system-searchd scaled
+deployment.apps.openshift.io/system-sidekiq scaled
+deployment.apps.openshift.io/zync scaled
+deployment.apps.openshift.io/zync-database scaled
+deployment.apps.openshift.io/zync-que scaled
+----
+
+. Delete the `system-mysql` `Deployment` object by running the following command:
++
+[source,terminal]
+----
+$ oc delete deployment system-mysql -n threescale
+----
++
+.Example output:
+[source,terminal]
+----
+Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+
+deployment.apps.openshift.io "system-mysql" deleted
+----
+
+. Create the following YAML file to restore the `mysql` database:
++
+.Example `restore-mysql.yaml` file
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Restore
+metadata:
+  name: restore-mysql
+  namespace: openshift-adp
+spec:
+  backupName: mysql-backup
+  excludedResources:
+  - nodes
+  - events
+  - events.events.k8s.io
+  - backups.velero.io
+  - restores.velero.io
+  - csinodes.storage.k8s.io
+  - volumeattachments.storage.k8s.io
+  - backuprepositories.velero.io
+  - resticrepositories.velero.io
+  hooks:
+    resources:
+    - name: restoreDB
+      postHooks:
+      - exec:
+          command:
+          - /bin/sh
+          - '-c'
+          - >
+            sleep 30
+
+            mysql -h 127.0.0.1 -D system -u root
+            --password=$MYSQL_ROOT_PASSWORD <
+            /var/lib/mysqldump/data/dump.sql <1>
+          container: system-mysql
+          execTimeout: 80s
+          onError: Fail
+          waitTimeout: 5m
+  itemOperationTimeout: 1h0m0s
+  restorePVs: true
+----
+<1> A path where the data is restored from.
+
+. Restore the `mysql` database by running the following command:
++
+[source,terminal]
+----
+$ oc create -f restore-mysql.yaml
+----
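+
+. Optional: Monitor the restore operation until it finishes by running the following command:
++
+[source,terminal]
+----
+$ oc get restores.velero.io -n openshift-adp restore-mysql -o jsonpath='{.status.phase}'
+----
++
+The restore operation is finished when the phase shows `Completed`.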
+
+.Verification
+
+. Verify that the `PodVolumeRestore` restore is completed by running the following command:
++
+[source,terminal]
+----
+$ oc get podvolumerestores.velero.io -n openshift-adp
+----
++
+.Example output:
+[source,terminal]
+----
+NAME                  NAMESPACE    POD                    UPLOADER TYPE   VOLUME          STATUS      TOTALBYTES   BYTESDONE   AGE
+restore-mysql-rbzvm   threescale   system-mysql-2-kjkhl   kopia           mysql-storage   Completed   771879108    771879108   40m
+restore-mysql-z7x7l   threescale   system-mysql-2-kjkhl   kopia           example-claim   Completed   380415       380415      40m
+----
+
+. Verify that the additional PVC has been restored by running the following command:
++
+[source,terminal]
+----
+$ oc get pvc -n threescale
+----
++
+.Example output:
+[source,terminal]
+----
+NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
+backend-redis-storage   Bound    pvc-3dca410d-3b9f-49d4-aebf-75f47152e09d   1Gi        RWO            gp3-csi                                68m
+example-claim           Bound    pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54   1Gi        RWO            gp3-csi                                57m
+mysql-storage           Bound    pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896   1Gi        RWO            gp3-csi                                68m
+system-redis-storage    Bound    pvc-04dadafd-8a3e-4d00-8381-6041800a24fc   1Gi        RWO            gp3-csi                                68m
+system-searchd          Bound    pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9   1Gi        RWO            gp3-csi                                68m
+----
+
+.Next steps
+
+* Restore the back-end Redis database.
\ No newline at end of file
diff --git a/modules/restoring-the-secrets-and-apimanager.adoc b/modules/restoring-the-secrets-and-apimanager.adoc
new file mode 100644
index 000000000000..a67eef2fa470
--- /dev/null
+++ b/modules/restoring-the-secrets-and-apimanager.adoc
@@ -0,0 +1,174 @@
+:_mod-docs-content-type: PROCEDURE
+
+//included in backing-up-and-restoring-3scale-by-using-oadp.adoc assembly
+
+[id="restoring-the-secrets-and-apimanager_{context}"]
+= Restoring the secrets and APIManager
+
+You can restore the Secrets and APIManager by using the following procedure.
+
+.Prerequisites
+
+* You backed up the 3scale Operator.
+* You backed up the `mysql` and Redis databases.
+* You are restoring the database on the same cluster where it was backed up.
++
+If it is on a different cluster, install and configure {oadp-short} with `nodeAgent` enabled on the destination cluster as it was on the source cluster.
+
+.Procedure
+
+. Delete the 3scale Operator custom resources along with the `threescale` namespace by running the following command:
++
+[source,terminal]
+----
+$ oc delete project threescale
+----
++
+.Example output
++
+[source,terminal]
+----
+"threescale" project deleted successfully
+----
+
+. Create a YAML file with the following configuration to restore the 3scale Operator:
++
+.Example `restore.yaml` file
++
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Restore
+metadata:
+  name: operator-installation-restore
+  namespace: openshift-adp
+spec:
+  backupName: operator-install-backup
+  excludedResources:
+  - nodes
+  - events
+  - events.events.k8s.io
+  - backups.velero.io
+  - restores.velero.io
+  - resticrepositories.velero.io
+  - csinodes.storage.k8s.io
+  - volumeattachments.storage.k8s.io
+  - backuprepositories.velero.io
+  itemOperationTimeout: 4h0m0s
+----
+
+. Restore the 3scale Operator by running the following command:
++
+[source,terminal]
+----
+$ oc create -f restore.yaml
+----
+
+. Manually create the `s3-credentials` Secret object by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f - <<EOF
+apiVersion: v1
+kind: Secret
+metadata:
+  name: s3-credentials
+  namespace: threescale
+stringData:
+  AWS_ACCESS_KEY_ID: <access_key_id> <1>
+  AWS_SECRET_ACCESS_KEY: <secret_access_key> <2>
+  AWS_BUCKET: <bucket_name> <3>
+  AWS_REGION: <region> <4>
+type: Opaque
+EOF
+----
+<1> Replace `<access_key_id>` with your AWS credentials ID.
+<2> Replace `<secret_access_key>` with your AWS credentials KEY.
+<3> Replace `<bucket_name>` with your target bucket name.
+<4> Replace `<region>` with the AWS region of your bucket.
+
+. Scale down the 3scale Operator by running the following command:
++
+[source,terminal]
+----
+$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale
+----
+
+. Create a YAML file with the following configuration to restore the Secrets:
++
+.Example `restore-secret.yaml` file
++
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Restore
+metadata:
+  name: operator-resources-secrets
+  namespace: openshift-adp
+spec:
+  backupName: operator-resources-secrets
+  excludedResources:
+  - nodes
+  - events
+  - events.events.k8s.io
+  - backups.velero.io
+  - restores.velero.io
+  - resticrepositories.velero.io
+  - csinodes.storage.k8s.io
+  - volumeattachments.storage.k8s.io
+  - backuprepositories.velero.io
+  itemOperationTimeout: 4h0m0s
+----
+
+. Restore the Secrets by running the following command:
++
+[source,terminal]
+----
+$ oc create -f restore-secret.yaml
+----
+
+. Create a YAML file with the following configuration to restore APIManager:
++
+.Example `restore-apimanager.yaml` file
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Restore
+metadata:
+  name: operator-resources-apim
+  namespace: openshift-adp
+spec:
+  backupName: operator-resources-apim
+  excludedResources: <1>
+  - nodes
+  - events
+  - events.events.k8s.io
+  - backups.velero.io
+  - restores.velero.io
+  - resticrepositories.velero.io
+  - csinodes.storage.k8s.io
+  - volumeattachments.storage.k8s.io
+  - backuprepositories.velero.io
+  itemOperationTimeout: 4h0m0s
+----
+<1> The resources that you do not want to restore.
+
+. Restore the APIManager by running the following command:
++
+[source,terminal]
+----
+$ oc create -f restore-apimanager.yaml
+----
+
+. Scale up the 3scale Operator by running the following command:
++
+[source,terminal]
+----
+$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale
+----
+
+.Next steps
+
+* Restore the `mysql` database.
\ No newline at end of file
diff --git a/modules/scaling-up-the-3scale-operator-and-deployment.adoc b/modules/scaling-up-the-3scale-operator-and-deployment.adoc
new file mode 100644
index 000000000000..3184fd2be9d3
--- /dev/null
+++ b/modules/scaling-up-the-3scale-operator-and-deployment.adoc
@@ -0,0 +1,60 @@
+:_mod-docs-content-type: PROCEDURE
+
+//included in backing-up-and-restoring-3scale-by-using-oadp.adoc assembly
+
+[id="scaling-up-the-3scale-operator-and-deployment_{context}"]
+= Scaling up the 3scale Operator and deployment
+
+You can scale up the 3scale Operator and any deployment that was manually scaled down. After a few minutes, the 3scale installation should be fully functional, and its state should match the backed-up state.
+
+.Prerequisites
+
+* Ensure that there are no scaled-up deployments or extra pods running.
+Some `system-mysql` or `backend-redis` pods might still be running detached from deployments after restoration; you can remove them after the restoration is successful.
+
+.Procedure
+
+. Scale up the 3scale Operator by running the following command:
++
+[source,terminal]
+----
+$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale
+----
+
+. Ensure that the 3scale Operator was deployed by running the following command:
++
+[source,terminal]
+----
+$ oc get deployment -n threescale
+----
+
+. Scale up the deployments by executing the following `scaledeployment.sh` script:
++
+[source,terminal]
+----
+$ ./scaledeployment.sh
+----
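++
+[NOTE]
+====
+This procedure does not show the contents of the `scaledeployment.sh` script. As a sketch, the script can mirror the `scaledowndeployment.sh` script that you created when restoring the `mysql` database, with `--replicas=1` instead of `--replicas=0`:
+
+[source,terminal]
+----
+for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do
+    oc scale deployment/$deployment --replicas=1 -n threescale
+done
+----
+====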
+
+. Get the `3scale-admin` route to log in to the 3scale UI by running the following command:
++
+[source,terminal]
+----
+$ oc get routes -n threescale
+----
++
+.Example output
+[source,terminal]
+----
+NAME                         HOST/PORT                                                               PATH   SERVICES             PORT      TERMINATION     WILDCARD
+backend                      backend-3scale.apps.custom-cluster-name.openshift.com                          backend-listener     http      edge/Allow      None
+zync-3scale-api-b4l4d        api-3scale-apicast-production.apps.custom-cluster-name.openshift.com          apicast-production   gateway   edge/Redirect   None
+zync-3scale-api-b6sns        api-3scale-apicast-staging.apps.custom-cluster-name.openshift.com             apicast-staging      gateway   edge/Redirect   None
+zync-3scale-master-7sc4j     master.apps.custom-cluster-name.openshift.com                                  system-master        http      edge/Redirect   None
+zync-3scale-provider-7r2nm   3scale-admin.apps.custom-cluster-name.openshift.com                            system-provider      http      edge/Redirect   None
+zync-3scale-provider-mjxlb   3scale.apps.custom-cluster-name.openshift.com                                  system-developer     http      edge/Redirect   None
+----
++
+In this example, `3scale-admin.apps.custom-cluster-name.openshift.com` is the 3scale-admin URL.
+
+. Use the URL from this output to log in to the 3scale admin portal as an administrator. You can verify that the existing data is available before trying to create a backup.
\ No newline at end of file

From 67656f1ec381f67de60316564d6c2a426b1b8fe5 Mon Sep 17 00:00:00 2001
From: Laura Hinson
Date: Tue, 4 Feb 2025 13:37:18 -0500
Subject: [PATCH 128/669] [OCPBUGS-44218]: Fixing command in HCP backup docs

---
 modules/hosted-cluster-etcd-backup-restore-on-premise.adoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/hosted-cluster-etcd-backup-restore-on-premise.adoc b/modules/hosted-cluster-etcd-backup-restore-on-premise.adoc
index 7d5e6f894f7e..ab45832dd2f4 100644
--- a/modules/hosted-cluster-etcd-backup-restore-on-premise.adoc
+++ b/modules/hosted-cluster-etcd-backup-restore-on-premise.adoc
@@ -274,5 +274,5 @@ $ oc scale deployment -n ${CONTROL_PLANE_NAMESPACE} --replicas=3 kube-apiserver
 +
 [source,terminal]
 ----
-$ oc patch -n ${CLUSTER_NAMESPACE} hostedclusters/${CLUSTER_NAME} -p '{"spec":{"pausedUntil":""}}' --type=merge
-----
+$ oc patch -n ${HOSTED_CLUSTER_NAMESPACE} hostedclusters/${CLUSTER_NAME} -p '{"spec":{"pausedUntil":""}}' --type=merge
+----
\ No newline at end of file

From f157f867c5a628b1e89bd5973b385252037d490c Mon Sep 17 00:00:00 2001
From: dfitzmau
Date: Tue, 28 Jan 2025 14:57:05 +0000
Subject: [PATCH 129/669] OSDOCS-13230: Documenting uninstalling the K8S
 NMState Operator

---
 .../k8s-nmstate-deploying-nmstate-CLI.adoc    |  2 +-
 modules/k8s-nmstate-uninstall-operator.adoc   | 90 +++++++++++++++++++
 ...mstate-about-the-k8s-nmstate-operator.adoc |  5 +-
 operators/understanding/olm/olm-workflow.adoc |  1 +
 4 files changed, 96 insertions(+), 2 deletions(-)
 create mode 100644 modules/k8s-nmstate-uninstall-operator.adoc

diff --git a/modules/k8s-nmstate-deploying-nmstate-CLI.adoc b/modules/k8s-nmstate-deploying-nmstate-CLI.adoc
index 6d867522460a..23c77fcabe9b 100644
--- a/modules/k8s-nmstate-deploying-nmstate-CLI.adoc
+++ b/modules/k8s-nmstate-deploying-nmstate-CLI.adoc
@@ -4,7 +4,7 @@
 
 :_mod-docs-content-type: PROCEDURE
 [id="installing-the-kubernetes-nmstate-operator-CLI_{context}"]
-= Installing the Kubernetes NMState Operator using the CLI
+= Installing the Kubernetes NMState Operator by using the CLI
 
 You can install the Kubernetes NMState Operator by using the OpenShift CLI (`oc`). After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.
diff --git a/modules/k8s-nmstate-uninstall-operator.adoc b/modules/k8s-nmstate-uninstall-operator.adoc
new file mode 100644
index 000000000000..f281b9ac2302
--- /dev/null
+++ b/modules/k8s-nmstate-uninstall-operator.adoc
@@ -0,0 +1,90 @@
+// Module included in the following assemblies:
+//
+// networking/k8s_nmstate/k8s-nmstate-about-the-kubernetes-nmstate-operator.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="k8s-nmstate-uninstall-operator_{context}"]
+= Uninstalling the Kubernetes NMState Operator
+
+You can use the {olm-first} to uninstall the Kubernetes NMState Operator, but by design {olm} does not delete any associated custom resource definitions (CRDs), custom resources (CRs), or API Services.
+
+Before you delete the `Subscription` resource that {olm} uses to manage the Kubernetes NMState Operator, identify which Kubernetes NMState Operator resources to delete. This identification ensures that you can delete resources without impacting your running cluster.
+
+If you need to reinstall the Kubernetes NMState Operator, see "Installing the Kubernetes NMState Operator by using the CLI" or "Installing the Kubernetes NMState Operator by using the web console".
+
+.Prerequisites
+
+* You have installed the {oc-first}.
+* You are logged in as a user with `cluster-admin` privileges.
+
+.Procedure
+
+. Unsubscribe the Kubernetes NMState Operator by deleting its `Subscription` resource. Run the following command:
++
+[source,terminal]
+----
+$ oc delete --namespace openshift-nmstate subscription kubernetes-nmstate-operator
+----
+
+. Find the `ClusterServiceVersion` (CSV) resource that is associated with the Kubernetes NMState Operator:
++
+[source,terminal]
+----
+$ oc get --namespace openshift-nmstate clusterserviceversion
+----
++
+.Example output that lists a CSV resource
+[source,terminal]
+----
+NAME                                  DISPLAY                       VERSION   REPLACES   PHASE
+kubernetes-nmstate-operator.v4.18.0   Kubernetes NMState Operator   4.18.0               Succeeded
+----
+
+. Delete the CSV resource. After you delete the CSV resource, {olm} deletes certain resources, such as `RBAC`, that it created for the Operator.
++
+[source,terminal]
+----
+$ oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.18.0
+----
+
+. Delete the `nmstate` CR and any associated `Deployment` resources by running the following commands:
++
+[source,terminal]
+----
+$ oc -n openshift-nmstate delete nmstate nmstate
+----
++
+[source,terminal]
+----
+$ oc delete --all deployments --namespace=openshift-nmstate
+----
+
+. Delete all the custom resource definitions (CRDs), such as `nmstates`, that exist in the `nmstate.io` API group by running the following commands:
++
+[source,terminal]
+----
+$ oc delete crd nmstates.nmstate.io
+----
++
+[source,terminal]
+----
+$ oc delete crd nodenetworkconfigurationenactments.nmstate.io
+----
++
+[source,terminal]
+----
+$ oc delete crd nodenetworkstates.nmstate.io
+----
++
+[source,terminal]
+----
+$ oc delete crd nodenetworkconfigurationpolicies.nmstate.io
+----
+
+. Delete the namespace:
++
+[source,terminal]
+----
+$ oc delete namespace openshift-nmstate
+----

From 029cdae76ccad77810d18b714e5f487436c51441 Mon Sep 17 00:00:00 2001
From: subhtk
Date: Tue, 21 Jan 2025 22:01:54 +0530
Subject: [PATCH 130/669] Removed the acme.cert-manager.io/http01-ingress-class
 annotation

---
 modules/cert-manager-acme-http01.adoc | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/modules/cert-manager-acme-http01.adoc b/modules/cert-manager-acme-http01.adoc
index 338a02a922dc..6b4ac5eea09b 100644
--- a/modules/cert-manager-acme-http01.adoc
+++ b/modules/cert-manager-acme-http01.adoc
@@ -89,22 +89,21 @@ metadata:
   namespace: my-ingress-namespace <2>
   annotations:
     cert-manager.io/cluster-issuer: letsencrypt-staging <3>
-    acme.cert-manager.io/http01-ingress-class: openshift-default <4>
 spec:
-  ingressClassName: openshift-default <5>
+  ingressClassName: openshift-default <4>
   tls:
   - hosts:
-    - <hostname> <6>
-    secretName: sample-tls <7>
+    - <hostname> <5>
+    secretName: sample-tls <6>
   rules:
-  - host: <hostname> <8>
+  - host: <hostname> <7>
     http:
       paths:
       - path: /
        pathType: Prefix
        backend:
          service:
-           name: sample-workload <9>
+           name: sample-workload <8>
           port:
             number: 80
----
<1> Specify the name of the Ingress.
<2> Specify the namespace that you created for the Ingress.
<3> Specify the cluster issuer that you created.
<4> Specify the Ingress class.
-<5> Specify the Ingress class.
-<6> Replace `<hostname>` with the Subject Alternative Name to be associated with the certificate. This name is used to add DNS names to the certificate.
-<7> Specify the secret to store the created certificate in.
-<8> Replace `<hostname>` with the hostname. You can use the `<host_name>.<cluster_ingress_domain>` syntax to take advantage of the `*.<cluster_ingress_domain>` wildcard DNS record and serving certificate for the cluster. For example, you might use `apps.<cluster_base_domain>`. 
Otherwise, you must ensure that a DNS record exists for the chosen hostname.
-<9> Specify the name of the service to expose. This example uses a service named `sample-workload`.
+<5> Replace `<hostname>` with the Subject Alternative Name (SAN) to be associated with the certificate. This name is used to add DNS names to the certificate.
+<6> Specify the secret that stores the certificate.
+<7> Replace `<hostname>` with the hostname. You can use the `<host_name>.<cluster_ingress_domain>` syntax to take advantage of the `*.<cluster_ingress_domain>` wildcard DNS record and serving certificate for the cluster. For example, you might use `apps.<cluster_base_domain>`. Otherwise, you must ensure that a DNS record exists for the chosen hostname.
+<8> Specify the name of the service to expose. This example uses a service named `sample-workload`.
 
 .. Create the `Ingress` object by running the following command:
 +

From 7e240b56d6ef10d2c3205058c5cd33c83fcfe57e Mon Sep 17 00:00:00 2001
From: Laura Hinson
Date: Tue, 22 Oct 2024 14:44:22 -0400
Subject: [PATCH 131/669] [OCPBUGS-37638]: Add troubleshooting docs for HCP on
 bare metal

---
 .../hcp-troubleshooting.adoc           | 12 ++++++
 modules/hcp-ts-bm-nodes-not-added.adoc | 39 +++++++++++++++++++
 2 files changed, 51 insertions(+)
 create mode 100644 modules/hcp-ts-bm-nodes-not-added.adoc

diff --git a/hosted_control_planes/hcp-troubleshooting.adoc b/hosted_control_planes/hcp-troubleshooting.adoc
index 4a6d405a7370..7b04d8387016 100644
--- a/hosted_control_planes/hcp-troubleshooting.adoc
+++ b/hosted_control_planes/hcp-troubleshooting.adoc
@@ -48,6 +48,18 @@ include::modules/hcp-ts-non-bm.adoc[leveloffset=+2]
 
 * link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/clusters/cluster_mce_overview#remove-managed-cluster[Removing a cluster from management]
 
+[id="hcp-ts-bm"]
+== Troubleshooting hosted clusters on bare metal
+
+The following information applies to troubleshooting {hcp-short} on bare metal.
+
+include::modules/hcp-ts-bm-nodes-not-added.adoc[leveloffset=+2]
+
+[role="_additional-resources"]
+.Additional resources
+
+* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/clusters/index#on-prem-creating-your-cluster-with-the-cli-pull-secret[Add the pull secret to the namespace]
+
 include::modules/hosted-restart-hcp-components.adoc[leveloffset=+1]
 include::modules/hosted-control-planes-pause-reconciliation.adoc[leveloffset=+1]
 include::modules/scale-down-data-plane.adoc[leveloffset=+1]
diff --git a/modules/hcp-ts-bm-nodes-not-added.adoc b/modules/hcp-ts-bm-nodes-not-added.adoc
new file mode 100644
index 000000000000..cd956654297f
--- /dev/null
+++ b/modules/hcp-ts-bm-nodes-not-added.adoc
@@ -0,0 +1,39 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-troubleshooting.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-ts-bm-nodes-not-added_{context}"]
+= Nodes fail to be added to {hcp} on bare metal
+
+When you scale up a {hcp} cluster with nodes that were provisioned by using Assisted Installer, the host fails to pull the ignition with a URL that contains port 22642. That URL is invalid for {hcp} and indicates that an issue exists with the cluster.
+
+.Procedure
+
+. To determine the issue, review the assisted-service logs:
++
+[source,terminal]
+----
+$ oc logs -n multicluster-engine <assisted_service_pod_name> <1>
+----
++
+<1> Specify the Assisted Service pod name.
+
+. 
In the logs, find errors that resemble these examples: ++ +[source,terminal] +---- +error="failed to get pull secret for update: invalid pull secret data in secret pull-secret" +---- ++ +[source,terminal] +---- +pull secret must contain auth for \"registry.redhat.io\" +---- + +. To fix this issue, see "Add the pull secret to the namespace" in the {mce} documentation. ++ +[NOTE] +==== +To use {hcp}, you must have {mce-short} installed, either as a standalone operator or as part of {rh-rhacm-title}. Because the operator has a close association with {rh-rhacm-title}, the documentation for the operator is published within that product's documentation. Even if you do not use {rh-rhacm-title}, the parts of its documentation that cover {mce-short} are relevant to {hcp}. +==== \ No newline at end of file From 50f16fdb90b66178eac089815a97e5a7af90e2be Mon Sep 17 00:00:00 2001 From: Audrey Spaulding Date: Tue, 7 Jan 2025 16:29:48 -0500 Subject: [PATCH 132/669] three merge conflicts fixed error fix for TOC change fix xref fix xref fix xref added to path another try xref fix fixed syntax removing unneeded NOTE --- modules/virt-creating-vm-from-template.adoc | 35 ++++++++++--------- modules/virt-customizing-vm-template-web.adoc | 4 +-- modules/virt-storage-wizard-fields-web.adoc | 3 +- modules/virt-vm-storage-volume-types.adoc | 3 +- .../virt-creating-vms-from-templates.adoc | 16 ++++----- 5 files changed, 31 insertions(+), 30 deletions(-) diff --git a/modules/virt-creating-vm-from-template.adoc b/modules/virt-creating-vm-from-template.adoc index f46546ae4d8e..fcf461f90117 100644 --- a/modules/virt-creating-vm-from-template.adoc +++ b/modules/virt-creating-vm-from-template.adoc @@ -6,30 +6,33 @@ [id="virt-creating-vm-from-template_{context}"] = Creating a VM from a template -You can create a virtual machine (VM) from a template with an available boot source by using the {product-title} web console. +You can create a virtual machine (VM) from a template with an available boot source by using the {product-title} web console. You can customize template or VM parameters, such as data sources, Cloud-init, or SSH keys, before you start the VM. -Optional: You can customize template or VM parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. +You can choose between two views in the web console to create the VM: + +* A virtualization-focused view, which provides a concise list of virtualization-related options at the top of the view +* A general view, which provides access to the various web console options, including *Virtualization* .Procedure -. Navigate to *Virtualization* -> *Catalog* in the web console. -. Click *Boot source available* to filter templates with boot sources. +. From the {product-title} web console, choose your view: +** For a virtualization-focused view, select *Administrator* -> *Virtualization* -> *Catalog*. + -The catalog displays the default templates. Click *All Items* to view all available templates for your filters. - +** For a general view, navigate to *Virtualization* -> *Catalog*. +. Click the *Template catalog* tab. +. Click the *Boot source available* checkbox to filter templates with boot sources. The catalog displays the default templates. +. Click *All templates* to view the available templates for your filters. +** To focus on particular templates, enter the keyword in the `Filter by keyword` field. +** Choose a template project from the *All projects* dropdown menu, or view all projects. . Click a template tile to view its details. -. 
From 44a71451b12d6e6926599d586c55ba37a3a0ad89 Mon Sep 17 00:00:00 2001
From: Maysa Macedo
Date: Mon, 3 Feb 2025 11:55:23 -0300
Subject: [PATCH 133/669] Add note on NM behavior with additional networks

When using additional networks on a dual-stack cluster, the same
NetworkManager connection would be enforced on all the additional
networks. This commit informs the users about that behavior and
recommends an alternative if different connections are desired.

Co-authored-by: Max Bridges
---
 modules/install-osp-deploy-dualstack.adoc | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/modules/install-osp-deploy-dualstack.adoc b/modules/install-osp-deploy-dualstack.adoc
index 9a718a9f8a18..a92b1b813a72 100644
--- a/modules/install-osp-deploy-dualstack.adoc
+++ b/modules/install-osp-deploy-dualstack.adoc
@@ -164,4 +164,10 @@ ipv6.addr-gen-mode=0
 ----
 
 After you create and edit the file, reboot the installation host.
 ====
+
+[NOTE]
+====
+The `ip=dhcp,dhcp6` kernel argument, which is set on all of the nodes, results in a single NetworkManager connection profile that is activated on multiple interfaces simultaneously.
+Because of this behavior, any additional network has the same connection enforced with an identical UUID. If you need an interface-specific configuration, create a new connection profile for that interface so that the default connection is no longer enforced on it.
+====
\ No newline at end of file
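The note added above recommends creating a new connection profile for any interface that needs its own configuration. A minimal `nmcli` sketch follows; the interface name, connection name, and addresses are all hypothetical placeholders:

[source,terminal]
----
# Create a dedicated profile for one interface (names and addresses are examples)
$ nmcli connection add type ethernet ifname enp2s0 con-name enp2s0-dualstack \
    ipv4.method manual ipv4.addresses 192.0.2.10/24 \
    ipv6.method manual ipv6.addresses 2001:db8::10/64

# Activate the dedicated profile on that interface
$ nmcli connection up enp2s0-dualstack
----

Once a dedicated profile is active on the interface, the shared default connection no longer manages it, which is the alternative that the note describes.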
From 70d75073c48af65caecb79b0125ec8de74dcf28d Mon Sep 17 00:00:00 2001
From: cbippley
Date: Wed, 5 Feb 2025 14:37:12 -0500
Subject: [PATCH 134/669] OCPBUGS-13065 removing incorrect output line

---
 modules/getting-started-cli-creating-secret.adoc | 1 -
 1 file changed, 1 deletion(-)

diff --git a/modules/getting-started-cli-creating-secret.adoc b/modules/getting-started-cli-creating-secret.adoc
index a8660cc3a6cf..6b6e27c0c7a5 100644
--- a/modules/getting-started-cli-creating-secret.adoc
+++ b/modules/getting-started-cli-creating-secret.adoc
@@ -72,6 +72,5 @@ $ oc rollout status deployment mongodb-nationalparks
 +
 [source,terminal]
 ----
-deployment "nationalparks" successfully rolled out
 deployment "mongodb-nationalparks" successfully rolled out
 ----
From 68dbdb72d81a21ea81bd0902340345208ac7792c Mon Sep 17 00:00:00 2001
From: Max Leonov
Date: Tue, 4 Feb 2025 17:55:51 +0100
Subject: [PATCH 135/669] OBSDOCS-1670: Add TOC links to OTel Collector component-listing pages

---
 .../otel-collector-connectors.adoc |  7 +++++++
 .../otel-collector-exporters.adoc  | 10 ++++++++++
 .../otel-collector-extensions.adoc | 11 +++++++++++
 .../otel-collector-processors.adoc | 15 +++++++++++++++
 .../otel-collector-receivers.adoc  | 17 +++++++++++++++++
 5 files changed, 60 insertions(+)

diff --git a/observability/otel/otel-collector/otel-collector-connectors.adoc b/observability/otel/otel-collector/otel-collector-connectors.adoc
index 713766c250a7..545305f3ee89 100644
--- a/observability/otel/otel-collector/otel-collector-connectors.adoc
+++ b/observability/otel/otel-collector/otel-collector-connectors.adoc
@@ -8,6 +8,13 @@ toc::[]
 
 A connector connects two pipelines. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data.
 
+Currently, the following General Availability and Technology Preview connectors are available for the {OTELShortName}:
+
+- xref:../../../observability/otel/otel-collector/otel-collector-connectors.adoc#count-connector_otel-collector-connectors[Count Connector]
+- xref:../../../observability/otel/otel-collector/otel-collector-connectors.adoc#routing-connector_otel-collector-connectors[Routing Connector]
+- xref:../../../observability/otel/otel-collector/otel-collector-connectors.adoc#forward-connector_otel-collector-connectors[Forward Connector]
+- xref:../../../observability/otel/otel-collector/otel-collector-connectors.adoc#spanmetrics-connector_otel-collector-connectors[Spanmetrics Connector]
+
 [id="count-connector_{context}"]
 == Count Connector
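To make the connector concept above concrete, here is a hedged configuration sketch that is not taken from the patched files: a Count Connector acts as an exporter in a traces pipeline and as a receiver in a metrics pipeline. The `otlp` and `prometheus` components are assumptions and would be defined elsewhere in the same configuration:

[source,yaml]
----
# Illustrative only: a connector is listed as an exporter in one
# pipeline and as a receiver in another.
connectors:
  count: {}

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [count]
    metrics:
      receivers: [count]
      exporters: [prometheus]
----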
diff --git a/observability/otel/otel-collector/otel-collector-exporters.adoc b/observability/otel/otel-collector/otel-collector-exporters.adoc
index 9bfcfbc907c6..76bd455151a8 100644
--- a/observability/otel/otel-collector/otel-collector-exporters.adoc
+++ b/observability/otel/otel-collector/otel-collector-exporters.adoc
@@ -8,6 +8,16 @@ toc::[]
 
 Exporters send data to one or more back ends or destinations. An exporter can be push or pull based. By default, no exporters are configured. One or more exporters must be configured. Exporters can support one or more data sources. Exporters might be used with their default settings, but many exporters require configuration to specify at least the destination and security settings.
 
+Currently, the following General Availability and Technology Preview exporters are available for the {OTELShortName}:
+
+- xref:../../../observability/otel/otel-collector/otel-collector-exporters.adoc#otlp-exporter_otel-collector-exporters[OTLP Exporter]
+- xref:../../../observability/otel/otel-collector/otel-collector-exporters.adoc#otlp-http-exporter_otel-collector-exporters[OTLP HTTP Exporter]
+- xref:../../../observability/otel/otel-collector/otel-collector-exporters.adoc#debug-exporter_otel-collector-exporters[Debug Exporter]
+- xref:../../../observability/otel/otel-collector/otel-collector-exporters.adoc#load-balancing-exporter_otel-collector-exporters[Load Balancing Exporter]
+- xref:../../../observability/otel/otel-collector/otel-collector-exporters.adoc#prometheus-exporter_otel-collector-exporters[Prometheus Exporter]
+- xref:../../../observability/otel/otel-collector/otel-collector-exporters.adoc#prometheus-remote-write-exporter_otel-collector-exporters[Prometheus Remote Write Exporter]
+- xref:../../../observability/otel/otel-collector/otel-collector-exporters.adoc#kafka-exporter_otel-collector-exporters[Kafka Exporter]
+
 [id="otlp-exporter_{context}"]
 == OTLP Exporter
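As a brief illustration of the point that most exporters need at least a destination and security settings, the following hedged sketch configures the OTLP Exporter; the endpoint and CA file are hypothetical values, not part of the patch:

[source,yaml]
----
exporters:
  otlp:
    endpoint: otel-backend.example.com:4317  # hypothetical gRPC destination
    tls:
      ca_file: ca.pem                        # hypothetical CA certificate path

service:
  pipelines:
    traces:
      exporters: [otlp]
----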
diff --git a/observability/otel/otel-collector/otel-collector-extensions.adoc b/observability/otel/otel-collector/otel-collector-extensions.adoc
index 07c71497d4d0..0e76ed3de4c9 100644
--- a/observability/otel/otel-collector/otel-collector-extensions.adoc
+++ b/observability/otel/otel-collector/otel-collector-extensions.adoc
@@ -8,6 +8,17 @@ toc::[]
 
 Extensions add capabilities to the Collector. For example, authentication can be added to the receivers and exporters automatically.
 
+Currently, the following General Availability and Technology Preview extensions are available for the {OTELShortName}:
+
+- xref:../../../observability/otel/otel-collector/otel-collector-extensions.adoc#bearertokenauth-extension_otel-collector-extensions[BearerTokenAuth Extension]
+- xref:../../../observability/otel/otel-collector/otel-collector-extensions.adoc#oauth2client-extension_otel-collector-extensions[OAuth2Client Extension]
+- xref:../../../observability/otel/otel-collector/otel-collector-extensions.adoc#filestorage-extension_otel-collector-extensions[File Storage Extension]
+- xref:../../../observability/otel/otel-collector/otel-collector-extensions.adoc#oidcauth-extension_otel-collector-extensions[OIDC Auth Extension]
+- xref:../../../observability/otel/otel-collector/otel-collector-extensions.adoc#jaegerremotesampling-extension_otel-collector-extensions[Jaeger Remote Sampling Extension]
+- xref:../../../observability/otel/otel-collector/otel-collector-extensions.adoc#pprof-extension_otel-collector-extensions[Performance Profiler Extension]
+- xref:../../../observability/otel/otel-collector/otel-collector-extensions.adoc#healthcheck-extension_otel-collector-extensions[Health Check Extension]
+- xref:../../../observability/otel/otel-collector/otel-collector-extensions.adoc#zpages-extension_otel-collector-extensions[zPages Extension]
+
 [id="bearertokenauth-extension_{context}"]
 == BearerTokenAuth Extension
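For illustration only, the following sketch wires the Health Check Extension into the Collector service; the port shown is the extension's conventional default, but treat the endpoint as an assumption:

[source,yaml]
----
extensions:
  health_check:
    endpoint: 0.0.0.0:13133  # conventional default health-check port

service:
  extensions: [health_check]  # extensions are enabled here, outside the pipelines
----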
diff --git a/observability/otel/otel-collector/otel-collector-processors.adoc b/observability/otel/otel-collector/otel-collector-processors.adoc
index 29bb641b18bf..36e80aa2ba03 100644
--- a/observability/otel/otel-collector/otel-collector-processors.adoc
+++ b/observability/otel/otel-collector/otel-collector-processors.adoc
@@ -8,6 +8,21 @@ toc::[]
 
 Processors process the data between when it is received and when it is exported. Processors are optional. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters.
 
+Currently, the following General Availability and Technology Preview processors are available for the {OTELShortName}:
+
+- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#batch-processor_otel-collector-processors[Batch Processor]
+- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#memorylimiter-processor_otel-collector-processors[Memory Limiter Processor]
+- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#resource-detection-processor_otel-collector-processors[Resource Detection Processor]
+- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#attributes-processor_otel-collector-processors[Attributes Processor]
+- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#resource-processor_otel-collector-processors[Resource Processor]
+- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#span-processor_otel-collector-processors[Span Processor]
+- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#kubernetes-attributes-processor_otel-collector-processors[Kubernetes Attributes Processor]
+- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#filter-processor_otel-collector-processors[Filter Processor]
+- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#routing-processor_otel-collector-processors[Routing Processor]
+- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#cumulativetodelta-processor_otel-collector-processors[Cumulative-to-Delta Processor]
+- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#groupbyattrsprocessor-processor_otel-collector-processors[Group-by-Attributes Processor]
+- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#transform-processor_otel-collector-processors[Transform Processor]
+
 [id="batch-processor_{context}"]
 == Batch Processor
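Because the introduction stresses that processor order matters, here is a hedged sketch that places the Memory Limiter Processor before the Batch Processor in a traces pipeline, which is the usual ordering; all values are illustrative, and the `otlp` components are assumed to be configured elsewhere:

[source,yaml]
----
processors:
  memory_limiter:
    check_interval: 1s         # how often memory usage is checked
    limit_percentage: 50       # soft memory ceiling for the Collector
    spike_limit_percentage: 30 # headroom reserved for spikes
  batch:
    timeout: 5s                # flush batches at least every 5 seconds

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]  # memory_limiter runs first
      exporters: [otlp]
----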
diff --git a/observability/otel/otel-collector/otel-collector-receivers.adoc b/observability/otel/otel-collector/otel-collector-receivers.adoc
index 26e78ecaa152..176fcab4bf90 100644
--- a/observability/otel/otel-collector/otel-collector-receivers.adoc
+++ b/observability/otel/otel-collector/otel-collector-receivers.adoc
@@ -8,6 +8,23 @@ toc::[]
 
 Receivers get data into the Collector. A receiver can be push or pull based. Generally, a receiver accepts data in a specified format, translates it into the internal format, and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources.
 
+Currently, the following General Availability and Technology Preview receivers are available for the {OTELShortName}:
+
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#otlp-receiver_otel-collector-receivers[OTLP Receiver]
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#jaeger-receiver_otel-collector-receivers[Jaeger Receiver]
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#hostmetrics-receiver_otel-collector-receivers[Host Metrics Receiver]
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#k8sobjectsreceiver-receiver_otel-collector-receivers[Kubernetes Objects Receiver]
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#kubeletstats-receiver_otel-collector-receivers[Kubelet Stats Receiver]
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#prometheus-receiver_otel-collector-receivers[Prometheus Receiver]
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#otlpjsonfile-receiver_otel-collector-receivers[OTLP JSON File Receiver]
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#zipkin-receiver_otel-collector-receivers[Zipkin Receiver]
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#kafka-receiver_otel-collector-receivers[Kafka Receiver]
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#k8scluster-receiver_otel-collector-receivers[Kubernetes Cluster Receiver]
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#opencensus-receiver_otel-collector-receivers[OpenCensus Receiver]
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#filelog-receiver_otel-collector-receivers[Filelog Receiver]
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#journald-receiver_otel-collector-receivers[Journald Receiver]
+- xref:../../../observability/otel/otel-collector/otel-collector-receivers.adoc#kubernetesevents-receiver_otel-collector-receivers[Kubernetes Events Receiver]
+
 [id="otlp-receiver_{context}"]
 == OTLP Receiver
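As a small illustration of receiver configuration, the following hedged sketch enables the OTLP Receiver on its conventional gRPC and HTTP ports; treat the endpoints as assumptions rather than required values:

[source,yaml]
----
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317  # conventional OTLP gRPC port
      http:
        endpoint: 0.0.0.0:4318  # conventional OTLP HTTP port

service:
  pipelines:
    traces:
      receivers: [otlp]
----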
From 92574782bd626f236159e79d04e482087b0c614c Mon Sep 17 00:00:00 2001
From: Brendan Daly
Date: Tue, 4 Feb 2025 19:41:35 +0000
Subject: [PATCH 136/669] OSDOCS-11792: adding new parameter

---
 modules/installation-configuration-parameters.adoc | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/modules/installation-configuration-parameters.adoc b/modules/installation-configuration-parameters.adoc
index 219db00a877d..4c89056cc088 100644
--- a/modules/installation-configuration-parameters.adoc
+++ b/modules/installation-configuration-parameters.adoc
@@ -3501,6 +3501,12 @@ For more information on usage, see "Configuring a failure domain" in "Installing
 |The password for the Prism Central user name.
 |String
 
+|platform:
+  nutanix:
+    preloadedOSImageName:
+|Instead of creating and uploading a {op-system} image object for each {product-title} cluster, this parameter uses the named, preloaded {op-system} image object from the private cloud or the public cloud.
+|String
+
 |platform:
   nutanix:
     prismCentral:
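To show where the new parameter sits in practice, here is a hedged `install-config.yaml` fragment; the image object name is a hypothetical placeholder, and only the field path comes from the patch above:

[source,yaml]
----
platform:
  nutanix:
    preloadedOSImageName: rhcos-preloaded  # hypothetical preloaded image object name
----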
From 6f2e90b1ab5f15a8b7bc97ab641162a1fe54b117 Mon Sep 17 00:00:00 2001
From: Andrea Hoffer
Date: Wed, 5 Feb 2025 08:52:51 -0500
Subject: [PATCH 137/669] OCPBUGS#41513: Updating secrets store image to RHEL 9

---
 modules/gathering-data-specific-features.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/gathering-data-specific-features.adoc b/modules/gathering-data-specific-features.adoc
index feb0e99f780a..e301c1b8734c 100644
--- a/modules/gathering-data-specific-features.adoc
+++ b/modules/gathering-data-specific-features.adoc
@@ -71,7 +71,7 @@ endif::openshift-rosa,openshift-dedicated[]
 |`registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v`
 |Data collection for {gitops-title}.
 
-|`registry.redhat.io/openshift4/ose-secrets-store-csi-mustgather-rhel8:v`
+|`registry.redhat.io/openshift4/ose-secrets-store-csi-mustgather-rhel9:v`
 |Data collection for the {secrets-store-operator}.
 
 ifndef::openshift-rosa,openshift-dedicated[]
From a490e3f6d00f1d4d3b39d8fcd23208533302472e Mon Sep 17 00:00:00 2001
From: dfitzmau
Date: Thu, 6 Feb 2025 09:36:46 +0000
Subject: [PATCH 138/669] OCPBUGS-46510-label

---
 images/510-OpenShift-arch-012025.png          | Bin 202366 -> 0 bytes
 images/525-OpenShift-arch-012025.png          | Bin 0 -> 199480 bytes
 .../architecture-platform-introduction.adoc   |   2 +-
 3 files changed, 1 insertion(+), 1 deletion(-)
 delete mode 100644 images/510-OpenShift-arch-012025.png
 create mode 100644 images/525-OpenShift-arch-012025.png

diff --git a/images/510-OpenShift-arch-012025.png b/images/510-OpenShift-arch-012025.png
deleted file mode 100644
index a65596159a63bbefee04698b07d86dfcf64d413d..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 202366
[base85-encoded binary image data omitted]
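As a final hedged aside on the must-gather image updated in [PATCH 137/669] above, a typical invocation looks like the following; the `<ocp_version>` tag is a hypothetical placeholder:

[source,terminal]
----
# Run must-gather with the Secrets Store CSI data-collection image
$ oc adm must-gather --image=registry.redhat.io/openshift4/ose-secrets-store-csi-mustgather-rhel9:v<ocp_version>
----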

zikq98*%!INBywV60)ooM(ja%$TAeSYJD4B9&QSx?DPc7FbR_%;^WR0%s$PZ9@JplzG`yzy{{Y3;mx+_U~BVGoW?)B*r%b<6nJ6fb2vy= zgfC=9+;I&GA^|CAeb(EUn5)an%dlgYnr6Xm>3}2>nHmp@d0;KWJ?eyz2LeBiMNJUd zz;%R2va+@oF~E?lJBFQeZFN-xb{=F;gbV}Omq3Pq3|iFV13K0{%U^N9&0?9bfDkbi z%(EPuodv!CF{Hi;O1V01FDun^w8W4N8CD=-N&&Jf;2X1bg@c!)BBW)Bk9{XVjC-Ad z;Vjq~eU(oa52BOWDA&sZ$q3~8kau-aHUXsxc&?zTAqQQR8yq{R<1R5P2LqL8VwuI| z*?;8fc2bg-obY_M86`Hnr!{j?!MNRU-{M%QOHx!>n3eas#C+N{$dqBojf^I(a{kQU1Tno@dFPio3Zu;QQ>7trt3S;bex1DV2Te3T#K**Zs=OZ>yF}p zNT_u}*Jf(WIrJdslK@|V#pK>yP~|j?PYN&D6U=dFoc-u^>2A@HjxBlF=(5-Q#2a|d zDsMsh`DN6xh854{CO?yp-^-E0eYYP1@z*Mu!uh;64UVSv?z*7XXPRWd$kXnyg*rHG z^1;v>Fe8q>q0Fu)t_?G`3lH+HQ30R zjDIlx@mjg=@17zt!$=#{FNtOkFhe^TS|2sf1x52Z1#TM1!{i3(pyExTHgr|iTPF|z zv&WWL(7zk2K>1M%)*lI!^kq2-@f8@&lyR%yrLbwsSgM^&O?@)C7Sa^fSFTNh5^E5n zbAmemSAQ9vi;1Fvx#0T2)q_^_jeEBn!z}ktJ<{v(EEN#fUR{0>B$NEDHWpD0w zpWTyy3iXXHRlzGmM8Xe$2x*Frj5kh8d{f+`9Ch5rWwqff^M5&5(7u+Wqk|soVGJuO z)}T%cJ+Ab^CWH$^7LzDD*6XL(d##?vq74GI>=x7A76 zb!%-<2RV++)crZQV~dM{$>c)R>=ogqChM;mM}8dQ>-Z=jp{B^P z!(YaY;Prl}6#i};%m&R6ul#X*z~A#t4n2Q&;prEBE9C-*kH|-oQ8s;@c42${#ASKN zQ)8mAqdD@}yT-vjQ$^SBDz8YR@h%Upy(tw$CHFiquVuW0@fQ1+L=-i_^g_4N?X1)}en6cb(8uPv zueVOxk`yv^XKYMOegds{vcb2$KJu-lk%nn+XH_hwaY=<6(hxf5*|r}T{Z2lfd+esa z&el4v6CWvLwwFJ;@wF$lg|BRX?e#*wPE^&ALKy2F&R94to_6C4+h3c)k9}U>U!nfs zcCstof)D=cEp^#m=WZXuJ<$;E@0Zd!17UfJn`aLQM@w zs5ThA5ycoZ1kxfyy{Bp519Chs89G9hxwqZ6kngtp{hgV7v~jaeMbB^iDAaaUuN z?H=Z623MgmrS4FcotnCziLUfGy(;bHTI+Zf8CaCTF78Jo(Cj)Quw%aQ#NLU`qhNGF z)3N@7y@}znk$8#0e(3W$Ur#@-UHi%69>)U=POX-My`7zd3O@}sslKBekBW7hrSIIt z{8k@LTs>BIaF-9&%{Zg8^z`QY)DM337K`-(H@+M6`oUsK+H0JFR1@@u4B*8?x2KLl zC5ZxR0@64R6qE*>5+gK%j^R*#u7mhiS#o65f7<1Ep9Xm|yjmsX?{uXn5zI zPQ!`&hK7cb`?r>|Nb|G3exS4`yL(^HQoU7+} z=D4`ocIjfLpDmx4pd>HFw(`{~g7XBJLXbfD@T0#!0-6Tk zOY-_+Q}`WCuy6d<&kvv&;w@L32|X5Gu&4mkW*R^Nr2YhR1$LWdRq+sZmlw}ssi8zb z`T#H(dKIjUz;>aRGkq1g>H*CQYIBxiHPI0^601|JV@)e*N7oJHb%VAB3yhOHu`Gbz zDd!o;_&_jj8aRi_g-(w&0O1MW~R0=Z8 zS^?xExqNxEe)~p*C>Z1-?lBNGklOGkp(ql0Z&s;dW{9@+(~tS=Xf9uF_~>wVwJSj< zX3r;z=kjvd9jDz+A(PuFOng_uT3<}}=y#t6|KErn#?a)n6(?jXZ)~RVRUgevQ!mxh zGoNWBsBp_m^NE@kY_>WsL94)K-4eKssw1Rv{LE9dJr=LOsf=3eE8dMKdL|;$DZ#k< zC{|eLaDxN@YE&?*dh9t8y3fjF-ESnxIAai z6{*x^0gMKgK}T=10Sny@c#TxO>r)M20T=_g74aWj(eBs@E9E~mX9>n90ByBMu$EN= zvLrjap&09wA#HzIN#v%$bxo4=tMBS9K%;o8XDS2w0L#?<)b{{m7Cbpr*xE2D>gn&z z5t;^Q%tYb?C(qoc`E(Oo%ty)vZlOj45wwSLbO+d}mg6>l!Q;gTS(_^u+a=$EPTR2d zLOZ-1?d_?bEdVH8eY8t^}p+;M8L&}3s$*^+tr1FX7q-Y|i) zf#XJpXzA8~0gCBmG15vB%5RCR-UHtvr6O}$Kt(?Tith-BvR2Wy{}tp3CN?;K9zV|V zLDO+8D0U|o^W1xnOolUXg(RbVMt3bZ(^Wq1`9?x|-PpRMQe?#ueQ%R8TXSI)s-1*{ zoj6?^HzMmq<(7dR(%U|O^Xkf=*^N&Orp|fqkTPj4%4NRN)Km(O#~smaY1Gzg2>b}K zYHj~3hMGLl+nZ5cYA;sJ#A+OB!*t5tWXYoQg$`Ob7*hNCR^7FS5mKNln33pWckYW>IS!TtOBf8jswg8Gt>Y4U6jv zJARZA-G1RmXh2#o4Xb}pbKcb+8anE~d zEg#(8c6`t?%{Mu3h8sn3M{Y36e&g9azjFm++*N*~&X4N#j7`v4f^$0K{<-5LZY5}+ zT`yw+d)eftEcoZn=@nyK5X+h6HVlV#w<+lnPn5@CL0_&xN%rvY4NAcxa{*I);(%_3 zx2zOjAsov%wnA%9V2u||w@osG0wcKk)m_BtAG*u{oFxJ{OlP!Ba}$aTHU28iqD+8y zVgG6%e}8d@NZ0o`OwP;G)9$k~c!8yWIWRcD!0uQ36#RYbsH2cFfM0Mohc9{jYf4ip zD;+T6>(8hK*-HR0f71MSD=4|A|lZCW0_R<0F*W0IDgK zX#0!phAzgc84lY-om0V)@APR7mDp6nU_ALMY0tb2=?hK%1eXxN5 zk4Ktc*m*eFF=>m9QCl1=3xXUTI^4*F6yK=DjNu&j9haW{?e%q?rBH!S2|hCVIXt{~ z_L@w{*$?C(*%#B$fTQ!d%QaghIs*EABTU3gwm@Xsknyktf$ z6n)CKZt%2f=@t5A!Tc=edq1zk8TaQxZQX@M4zM*eiTX{J!$a`dK+Vp%Wo~N>O>4Jz z3s&T7NWVbsF0f{IT&Dk^IcezNE*l%0@T_u~{aqOBjw)TNrPNu1VykwNeis|KUQT1B zIWpUFDCLo^4V(oe?#?ainwZEyGl*Q!kq2yeC@%h8hOK_eVz;jcQinLjbeV?mw**&& z;g3&EfgP(=EJ}H@va$=YDgo^I)2B}r;715%Ue!H4cL@Z#=y`ej3+;^NI+MPr7UlHc 
zg}`yQ)*J?8K!64_YP-ek&zCM=u7PIB_(*cTOpaw+SF`f5(CF;!Y}m3ObcD4R`?7m5i6c(BKvbOLx>cD<1Ij`yy{^#Gv z04ucgXm0@pzBuhAHYQNRNm*KcZ?J++nKrn?IBpNyNJertgdaGEgIYQlTIeV>r_C2s zR8+EM`42yy2jj&Wa*0f}GT;^XDKRNzsJ?=wEg8k-3zmEA4qJwm%$=LqoV>gw$cqqO4{)-_e*Jp$r}&;CP8T4q93Ub7dkXzu zQGzO$HI~?J;(j>|mr)-M6I$`v&~%4jh`2hzbxMKw2ady#WNFCS3{_UXgf8jtyH7UkV(-CKE3#Vl`1b8>PBv&DMRm&J($emH zS|(zr4$TI{h!Gl1e$)c`)<$1slaQ_&)E^GN3LzTmz@JcX1gD9J{5fmZ!AD%K+|drm zLc8eKfZmNm6O(ihY-b@2$9*&KeNI?2u$dWn9bn$|4w@KOnjZYMn)l#0fPLA{ufMNv zwU*MPJ>N(Lu#GyXW(w^$Z-e3<+(_i$+z?Mg$jr0BH(+jIa#9#gigDuN;?KNt? z`-?Dm+IC*`c6S#o)@*=9qQtbmh(HCvBDw*%&ria?kCVuh=TnH3)KLVKTo=g6&)x?! z;%`^|_=T@601sezV7#6~B*`BsHe!ji zu{=tACy4`6yhPa3y$?*C9oo^vF&V}dT%eTeb@Jvfhg4q^Bg zs3j#qbmoZ*>bFna>Nx8&tQt$7P&{TEY#klXY_*!CK|(;?6b?T`^VpNS0^~XwQZ<|H z^9SEw|Mu+9u5lH554_eJ7b@Sr1!t;p09b(vPUkMuddz8@PC1~ zqk~bOLoIT$R&eWSQTUZJ0ov@4__c1xR5gmWT{zXY*x`B`L9( zcSF=WoL{4g-mha{z(R+s^7+9-NB&%0$?4uxSPw5RP6P1v;9ugyNDnI9Q=t`vGt;dk zJ|>52aDDxGL^kuu;0l*y*iYd41fH%sv#s5fOn^)e2;fjKhMwwKys~{f>=%-{R5?sQ zNa7{r=?a(?lg-Jy@yQ1wC~`xE7l-qP!%H)};_lG+A6;Z!p+bR!nEB%B1Fj4esf92; zWnS68Ph}|NQt(ad7of7?rY54<{CLNX=o76s$qR$-WEh4;bafWe@A2_hwzgVTe7=+g z;3anfT0GuqjQs*-o!#`n&;-pb%y(`Gp(j$xy_fPu{iz1Z<~k58ZKwfH!^zru7aKMH zY6zdGuB*x0uwS67ha1YUcPDV`m*+xe+F&9byl!&*xkFReUMD2@)U~$iNw|&bvXLut zx^OjxGR+M1FxFz><3X%0@Ys*SL^5RE^Q}?SUlnSTb^vnOmFNm7=iBv9 zsDd91IKF(G#vDJKiaCoo?tW2AV_6|0e+c+F^Oj11{wa=R$uVdoX`^PzV;X0!m)@8?;snb(2}hn3~cGJ=){In`xOSK7KK| zd%Z@yJyAgj9BPbyro6=3LmRE7w5DlwYwgwgPWC~O;kncA-vUV&i|xFt7mMUCaK^dF z^C==Sb?yA6+lKr{R^$Gq!)fY~doV4nBSlofk7~f=Cn*U@%dUm4shJgr0iVO(&D`?i zP?7gIP!Qsu#x*m`RxPp&kZ~tC$tE!|T1+9Gxtuv}bHs$=bu>qs2T&(zZTqED4Po3( zQrhl1$c=@a!iMuKb4>O^BSrJc6OAsnJSri3o}L%SYi4Fco`nS4@&_u28E=fopomJ} z1~8hrNNTriLyk=iLKE(+pScg|49)~y^Uxe+aUABRZjjJ~huvneDoi-PoabuQ~| zG{19q-wfRT$XT-AtE@p1iE9ADdFU`Jb2zP^G#)K(*tkVfD%(Jr)DRiJ;?w-FVZ*h7dmh) zEL5Cnjn)sXK(E#?0O&~z5L0b|8Q{Lq04E1}N*v?f z&zKI4h>`nHOc3?-o5-!)qscnQhtNjLTi{MG*tV&UU?hTrli8q#d)slBbrJ?`!k@}j zmkECoK41?|&%;Hw#D+zHpvZ-^-Lg;b4+w=}^#^ros|#si*)r`L-9g>e@15*orTes^ z7C|AyY66BhkD)$b-f9V~@mveiC8xOn6ravGGY&O<7n`6V#xNH11)ajRJy*)!{;U}p zR550|{El5)S@l7?OfBjH)unnU(0z=cHz*C^2zBW0!$!A)b6fsvW5p{p{{;QkubH-3 z!uxrVR<;FgD8R%~rlyutRpW{9kUj!xDP!1?PPPl2rOEbUQ$g3PY?3|m`aWqjcMatw z2vkKqRaNAtr=2%h4e4wey9K6%O;7ry^VKBmN*g5oV zxUwT4XPWVZ>0AQ5to{}DOhFyj@^8EQtq{&+!kK4SvYXkIpocm8J2sAO!y-7BRJ#h+8iGO8cz0EC0 zOzgNj27}pUs;EESEKm%kXAvZH1&Cp?D9&{O1wg0)Q(919*?U|lMCjIk#8}oGr=E}< zs%yjQs;Fwa9II6c3p4pWi+@eKxVXuf@q26Kr`*V-gmn{O*%-n{gzN6yT(aU)5qxe^ zF<vjCeN{f#F&%!N^wG9Ra$q|TKieE0f{7Ig8o2%wjq4j1XZsuKbe9|_)lFuu-q=4Si(j6_A(<)tk!FFFn zkbW}p-H^D=6&WkpwzJ4Qp$VTj0#;mvJC@58j;hlMT z(S6w)TW_HP7Z{Ml;*MqF)O@=No2}fdU!5f;J6 ze)92^jK$nY$sS%g=*J&^a#taiABFg_Ub!eamowef&8nNhKUC=KTS3_iDx{q7OGM}K zv--A=W^(xdX*gbS!l9;9SmwVqQ}%YS>uH^ZM8B631(WC$}MxhX!xii_zxy`7MeG&0R zTpTH{`F6Nk=>)1;mBlFLl9aWbA4R!3OJ}y1W^`=B`*+XWQp@2i!VVp}*Fk@of5YN; z5VaejrB5j_PvY19H4qqOI@bJ&%)B&s9?r+8>Ay<}g3%01O(^k1O@#bOQ~z^mDX??kFfXfiuzH=z%l;=TO$hWu%K>AhNQMxd6m+LuXOLp zHv2)YnNJK`NX}(7)IaOeCo3PkCBJxawLUZk1{Xj3ZrdPw1xeJHBwzsmnFA}^Rb2YC zGwjN=cL$J9=A6mlmimWDndf}H*^o=iz)(MM&4b^d0OC;9B- z@69dY`F5S{H#B4Lyz&}LGo-Ec3g@lU^df5NkrxCEC8ML^gL}o-LT(Y)OiC=?ZJb$C|i5`q+Kj&z;|ySsqc-YH5SQ-Z4vZF&5@|b z8HJUud&~F+i$@|e&6^E(QGvSh&eyR?AMYNqunx1A zWLx)~Z$hkumD|#vIU; zaZbqg;7|XyW~1*ZvwT!!XT1@0Y{8Lx^wPKs`w!}EA{J@#KBTpT%P!JcKQaQ;X_}?5 z`+N=#ur~>33{f1L)&PGHGML&In|jdKmT(?@=g)}-tnU;ance4dqRIIgzqG+1u~-OJ zn^x&ti-M^F5Ubwrm)B+AaBm)fJWQ;@cL71gSLl3o$5&iBv&69Uq=O+6WWEji!06E2 zDD>1aGoKiW=z9J|9_v|i4MxE!1^rsz`BOI_t2*LGUAnmv-v|6`gMHGG#RGOI!cbF9 
z>aJTc#ZNygn(^moBQ*p9{vOVUz^5fRCnn2qmY0>-ko$`|AJK+0^K9}+F3XXL&@Z=rFl2FWM+-7SOjWXz}UJE~H-q^q=KTvb`a*dxtD z^PQciW%4fvZ0reu))SO1gXVAbOONtZcj4CKmzGw-L0U=-&!^BIjsk?w>=+E|!!m&i z{UpFtNs_x>mcI^)Ys~E>NLj}Y`Z-EB%+DZ9hc0^mT&=Z-2Q1Mypl=6;m7Up5l)+5e zMb;n!UyLxnM@`_%qE z{l5;yynRZmI%&Oefb4LyY>&YHvzhfuU+wn>61aK^t|r$;es#OS_TjUHcDZ|Qxu<_+ zN0tK~JZ?(OITP=QJDZe8ie;YBMdrAxYsaVAOn!0(EzC6UwA!ksI18V@d`ldTjJ4nm z!iH$~RHQxb=6Xto9Ui z+2$G81eKd*XG2fv6b3unJOdPK^JFK!w{{q&B%r71?cqNXiIx9TRKv=6lHC5vjRsZX z7W&DvzFKxYGP|QQTSAQca(BRUcOaN ztsj1I2ulY^YMaOoZHp!io~J*<14r1_Siy#1Pj#~e9UkGm?c@hRI)#OWku?Hvy{_L_ zd0L_-tY+AAn%l+(D%ZL;q1IXrHnxeXBczoDJGGA!pzF_oT3KCN-M$sl8qDzGG^>e; zAN9QoxHtfUo;|&77k1TMT4*g$_ z%HNZfynjOKL`kwEV83-|0px6bb&eYivphWauC(0+@&mLrkQOCU&Ovt)oJl>EM(gYO zV6Je$oP`Tl?843f<)u~T`fwS+>s(xJuu-m=*C|Nt`9hnTOo2P-3=KgPXWh?Mzf8g7 z4~$`~W=BI5;=xQYtunGx{n`7kgY$$x;U|j*BT)u}!% zG<>&v?u^o~eE`y&c4#T4O?LfioO#c@sY%tqzR9W2?rCXJV(Ulc;xE2Z`hyXL+>*%` z$EQ(SF+O#@2btk{B@SA-J3oIMI^diYU!(B$;P?Li-++P3({~(XEhfq7Qu5DgEdlWQ zw5-eDtX=%#w>!UJB4{}RA>bfWZ9MY$@(EmHz}_|3{jYVn-cahaB|C-{PRGE8ItEte zj7tdcK_9+<@3RD~Z_<5UorZ_IULnn`vyy91Z?J^8pN9<@Xu;|NO*qw70O|+t&Ut_qB7Uf+qaF^H{hYc3LKnd92n7 zJkSE6-!Etxy}?F~kMM)AI#NKrJ+|C#3#h7_pv2`ON_|N~Nm(^@PCaKQrHE9B4`8H^ z|5-!VDdJpLgrdZkM{~U)k>J$Jd{KY{+!!`z0jEZ8BB|2BeiiBI4=2xbe0_#sIWTP| z1=xSxU**t_yLe+0GqsSsu1*ql(aZ^+E+2iLgke4%XW760-6@Hr{1jdgvx7D8VeLwq z7>O3qJ^j2dv9z?b9qDp>(E7%|A2NSeZ&#KN3d|qF!&i^f6n3)I+eC}Wptkh=&mjih z>g!j|SUCFrRV|kgTK^t}PTnC+U|jX9j&z}3h!Eif0u+VnnmLk8<6?~hTk7@g8@>)O zVnTDh+^R$&{EkULgZ^HwuJmC_A#;C@S=k-C(L@DxU`P6j?dP{p((FgW_7=jGeHpS; zBvF-oWf~NXZyd-$pdfgait4Jb=LS6fJI0+m_)!ZRUr108jk|#Lsk1aobEm_3rdw>% zY?fuXS$sTN$YL*`)@dv{MXp&de&)C+*WX1tAmzpI>x^#Z8GQt||AYA<>pbG2|2?&h z)X5MxMl>~)dOlGlXHX=yzBiLN2jKG9eYtk*xpDQCO zYK@11q5U)Sy?b7Ba+vJ)1&o>1p>FG#ON$)`zmHvRvcMMaqgeUe(N*m&%5rEPuzwQ& z^>?3zjTZ7t4s6|{(%~dr2oVSudN)`1X`=N@^VtS(xu4%eMvAKXIZ_Ip7G`d=$)z~o zKziVAA5N+{710ilI1>R_g9J8&xUG;e1Wc#EWwSK-Ofgb=`ia*7pEiJ-klQEB{RbJU z>ys%(xgNg0HIU~*(D0|)$Mj_Mf|!&N_y`p14_NKDC_M@ahyZ_`PT}P+GdKSMq{`~* z>N>6zk0&|^X8q^&OB#J~qJUF6oo?m4{Tkg`<5xv&1Zto1;aiJ3qq4ApuX5KpFYrG&erYkiQCd6~%UcU2Q&< zAn&eK%{@99^1AuU5?xm<5LJEa*sB{6JJYks1DonE$eF)Odb^pmskdW$*qCalU^ctu z*7ZQ8|GAw%=3Es~;?veSuu;7PKN`hsXGVH|8tt~$C8QoPeF~zy3|o77B1b{E5m-ez_naDA`F22%O5vXtoK}Y zh{0w$UfFkm?ywj$7zM@8kE$C{HQ`3Un3^9fnKp%dr1G=TSiAohm`z}ykp2%31B%sz zMm*$JlLV_xMSuUtfJDp`5SN3QUUCtj%jX*7l=JYBld>QV*jU9A zQ&(cL+v`*2L;ZQ;fB*jukZt~bL?G{=gC61k_oI~Ie?HhWD;J3I*RRW5raa8vZT@d6 z%KmXJb1L`uKZQt#0;;z0iyUce>Vto^ezJH5-;0xMTV10!rJ(dQddDi zEmgH;ARaC{|8w~>FZM1IixH_%8i@WwA@!&A#5n*W4&gH!bk_u_T>-|IOL?H_=xA;} zD}7Q|np(H+XN&Fr&eM3=7Y6PUP?^8qD;3^Uwz7K_itJT^yDf7w27dq~8qRfC4}q0` z&+|Q?wYyrs?n6_`%+yk=T35aEtC2(Q0%VF8$W3wKKBrYjbk1OYf|It^5*ysC&Tl!H zxp~4Di0!^^WT315Ti@Iym}w0cM>-vDeg{LZ*QqNYtR~cmD;i30lK16nlfb-`S$SLX z{u~F{97*1JZo_vmWxJW$Pz_~6+6S}M^+vuD0H&}wk{tp>S5roQd`Qv0_I#6qQ}KoaCDP>olq*Wpc+^@0CIiXnN&EDpJePtZ()tFk)VIa&zfE~V&bavsYmT$*o442Gq$w$>mPbJ|zT z!(X>*K#132{^mKilZc2x&pT|j)yyVDPF0mjh0A!XI=c#}F@aU|^aN3PPd4{WqBZwP zS@vS+S-42+n3Nb^BvT1L%$*8e{#7YY!>Zo4$=lU|0A_A5s-~wXy$&X~87kE6;`HOdO0AhJ^gquqM^L ztgry#Jrb%*Z+?dI$*9#zhl>|^g-EuZIdh8LxZ4z3^0(y5uip?{tot%4DwUzcV?qit zuJI_nT;GO1J5Yet27A`_Y?HA@>kg%k*9rJDnfZNQQwUx#zqoglcvJsFg(J~^`-u-sM~2yb@eOl1Y|lgpqfid9Hm zD)s?%a*sBZ4k|#Pv-75#3?yWJuW;=V;TQ@qwq4Vm)HAJi%gwiHeXk&*mV+#fb7jRC zag7RsAOjugnCe2YQD{D?o!*?Q%u7>{901bckGNOp_97@SvCxOBbQ*cp)@npL>J@%( zO@dA;N_fuvvgC^1G858G1Zo!;u{M3v3u+NOUcnqjo|Ds+EmD>W!UhD`{2I;W16u5N zTbO)3t=sqw&G0nqlhWXK6O7K$;v8KzOEkCleBL|)>rH}3Wd2gi#ho;TgGz-&Q7AEu z%#TV;$|v%S1%7echKr#Y!w!7V4uCd9Kua!^Y0qzDF_loP)PVon!v=0%1L#N2?Z{16W&Xq4$F6v*1H}0=L-jEJxou-^XQgav 
z=yc^Ci4ra8Zhdcn+5$mop|v`;9I$l;4-U4pH=P6^kJ5F01xNFliROk$!;!*RsH`6Q zlfQ%P^ux(hYn@G38&*_4!L?P~pEVB*4BW9kAfLahukXlkLc_$wKP`&V&rj)1`)3O6 zCX6fmQZ%pCJEThtF{(;vv5v-oOT|$_qjE?Iq$hIUZN-2{?vt2!1Ie*r)rgFEYE*wZ z`Zknx6~cv&lj_Uitb$yK7rwPE%1^&a@-8ke@d9{mAkQ)lkM(00{tx|lF5D3Y0HTC* zFF2q84NI|0;{q<3>0b36BzSzZ(3L%0{F$y?nd-kRxN%D^<3x*cUS-t{O1}J~mRhSc zCFp?zUQs%z>#8&gfuJ1C89VA6>kkbiTJg}@{+f_?AH6>BI?!L2!Kf1ros6>xSwwgs zQ(sI4ca|}7Ub~-S&p6_R-8>1BAsm$xW>+bSnde8WhIR1bZX) znM5m=OBF4vY;akBBxoBj`+-!j`FoL&T;5}&$s$Bj782@A* z4&_I{^L&XxF}(b}n;={5C>iTND%LV|erPE`QX~ghqWP*4c5`u3Ut)+!l3Ur89vtpZ zoO0aUeIgyhQwz?ikvj1w^D{loSw_r}3t?q969)>_vkw`WEMw?TVEV7)C%`*{=HYIW%Dd~Y5& zp6lo$k5pa>3E>#9QUiy%c1KI5K^JWwO1(23{c%dyeZ5MyEQRZ_R|jJ?`)&5y(a@Rk z0+7FOPh?0Om}P~0QLg8~AK>!_VEcHehv-Py6wgJi zMGlS75}KX)OimMwGSr5nJJ^`ep=gy;-Vv8~!tw$X+~q04jx*cipAjt`e+TR&jLaa; z*PIaICYpnvzaCoP=9-$73vQ|DYCD&+d*|(&WQt$)ZjQTSt?KpgY&)iepcc+8 z3)42K52lTYw6?3w*oE=mn>^MN)bR{@fZ#SMvEsImyMMfX3I&CGcN0`O50@nyiGHr= zlqG@ji;eJUjbRhKu+k@s$+y+cG$cEmS$`})PC`I83vjR61-PzH0>3&P7L`DlJFyDES`}{U1eTq zd4MO+d>s+3>i9g*RcvLhGKWQJj}+VN_m4#s0L00&FO1j>E}0y^P6d%fzK~39opdyx zWyt4kCUjmy%InMFMh6-fP?7BJLtO8?>;dU%LGBOIG_A(2lFGaO%$8EQ1){+GD50{N%Vb5&CbuKx=1t-=kDDDDopmRZ8s#pxvVe-n z{0cOEy@VNypaCFZ^oAm=&Fn8}I8wpatI|{(mNm76yr>Apticku$C(4CM@vI$_3k~R ztSBvqsW_)lzGDwyS>zGLLP2w_6yRdxWr*Jla1-7)Z`P{WwO)Q-d;f{erMJWb0l@*2 z_(sy>!ow`op$C8=?4DaJSH>WR&p^j~x(_hO+qZ)L@I#QOQDMdG|9n_0mC1IcYR*(i zE*{b$8y#_>s_BQYfTE-m{K7l@WcD$SHmKOr2X8N|Q2P>WQGmZO)K zqn*YW87V&lb~UduQwSQGo2EZa=mib9VAyQ&)=tHtSf4icU5}TRq!+8*E($aJWF2O1 z6hDYM>8Um^i5%GLV^gnoHOd^gd*==Xg83c9RL{Gxt#N4f45Ut?&YZPIiq>J`-gZ4X z$(-yCRDD!(ciw|l0eP};8Zuvp#_5E5ygLhS9b>I`o0g1Sz0&)t*Fjq&DVE(Tao>7! zTj1X5)mEe3{eN>M=$hVZA(RB7gZMm04(H3;Lg;;vo< z5wO@j*EF>%#>;fCZ8z=Ip7^f~wzuzdOCRpIAg~kAXZ68X!&8*#2Kx-Mcd1==?_akvLVZu!COino!Y&hm4Jfeq^RB3HTecY~xjaSy5aRu}|?^-ES*4 z(c)E=t>J>0Tj+?f-u9hO(=&OuLdE2b?dE1|r`NKqD~(jfvA?9NnxBietFS0uE;+1@RD+*1vEx`Ff>%%f1y0%=RZY6A#FQ( z{{^E!{>hm0tSw)vzI|DVTnW3|i3^Rq|3pLMYhfd9e-m$Hhg^)ezf&bUyf}(QB_Io4 zOs5q$S2$VO)7takOMADBJ2fBbd$9!tU1q}LCUhJ9y&`^%&E96hThE#CO_;F{O)zD| zEC217VtK0{Dq+O-PM&7}GF#-)(mm|XHEb7dEJdAkWGY$GU9?_6VdvU{X-08|+&gpM zZdCn+C=L_JVUBHc>d zz)+qW=T#u&$aC2{EVX*)cDxH&ny)T!T~mnnNt|2A1dA$s~cti z=k;wv8S#2+bFb88{sATa@3gG}=N=&g4K~}1$mqmAe5+@um|2e~5}2eYCcS~H;tyfw z;mx?F1mdI@*Xsp!FJHeBoRwvsK@D2cBj#OQ+>%FqFUY*`!UrTLgxvKcF&>-O78bgK z<~@&yM-ga}10nm<``asXd44NKghy>`4@&mC$bz;tAB9M_uqNfWJbl8l!S6IqN5w5U!Z@wVVx>Q(MIgE5A$d5@coe3 z7F*Tzj2C|Yz2!jYc_%*aHEWCPi6bE7J}g9TxliBHF3!^)L)NE@q{c znu?pW9nt)u6bc6ePHXd83L!L&Fle_kz3sR(xFslBd}uk+|I-%g$#fm6$mjO1QcLHwJSd}K#wvkA6~bZEIXq3FlvP|W zLHza=Vs)~a_|5N>jC!Z@k2R11-0^~FMDRqcyQDRXq{lWtY#~Uc$$`P; z-op@7q18wTZ7qM02ZwaVbz0;03>h2$dM907-I1*{pI)W}9O{^?*+4*aM!;^vywDu; zmY*#u>=O0k`+7G(63}2`;^xT#vdM3gp}lF^C-pDT4lfTZ!@H7K`%c7$>?#V0qhsQ` zv?U?#S#7F#S++LEd4RII%F^0;^7N^mQd!>9;~tEJHJTcF0e!4df3Q`$3=E0xs|y3b zZ_ctf@Z&bO1cR=mB$LQ!O--#-^kyfu3HtrCgRGP<`Scb;)vi(N!>#2ZX*&a^SCv)g z&S%QB3rsqcWmmvh^{3(D0mTX{@$}r6e}fxLn;wT5UIn(<>n&JDy!>Q-Ouj_0vr77! 
zS?;B~8*InImo7HVe!1}nzrOZ^E;(PCHOqP2CAdt|^Kk^L&Ox!GH6s01)B~B2RIRR& z+k*vKU7DA#L^iGtC%s8rTxy=zJj`hkHV+9tQM0GsaLO?E3x0`*xkyMR(0!^*7G0K3S94pI$e;@?{x&p?@~PY5IThZ#2-~$HWnR*W~(9 z*$cVoyTJN|&BdN-p&XYz3L>C#`a{Qf?JQST=CqtPsx|KA zCia+|@#I7?gY%b{zD#!f?IlzP%uGyo!VSBao8xuIO`lx*MfYzrfx-Gf6GpAd;F^r8 z>brL!C}Dh7COHgn!p~uer|%a~WtZY={n|xe{d-3e|NEW2lfEHzl6KLz{4QQ*q<-eE z^}{mG>;4Vrjq44`3ZXS)^~qZ6f3e?PK}F(!+oCN_lQY!8Mm~8ecSD#ZtnfD_(E&-; zWtHliIXQb*9{$~@r#;q~zi8Lgjqe(caN)Rce1sPs9rgX|HwicO{I@iuCL;X?lTwCD zY55oD^GZ$s>qY&Z@C)strzpan&W7J~g#Y```oZcx@wuc3Tn#M#{>rTM*LVN^Q-@HW zzbpUuk0+;6PWO-`8UMEz z2@iDjlR1LN$Injq)f$?I^F3CML=2d3D^1?#1Et^{{baZS2#e({7j(5J% zoAMt1FpzCvK4SCF`?Yq17H2WBL@Q3YstWQ8S#E>FkvMt7wm(AYYp6s{xoWO2v7T1N z#mFQ1x%LaP@|d{HaTydHtuGQgl$V7(PHb7?NKa_;+o@Qa?N$`iLN`^HFhyQx)zjR4JH;M6EFUsG=w8g?+3>wFvfe<(8|vxx?zcs=AV$xd**Zi1c$G*~ z^L2hex~GV&O3pe;p#1Uka)HgFqUE60NuRP8GqLphAz49h*A5@fUz)QtA8{88&kHiR zhZSvdE2WKgFrkn2a~U4)Us9M}zS^}P66HtQyOQw?HdAw?j-JK6l9M=VdU7pyst$Yc z1I3JGVNvG+Z$OOK9nsSyeID8|+=UD}D<=E$2v7OezQ?BsGckm(j7xQy2pci(sPJ^s zz%Me%wKt8uI);|9&(L{#JenC(o=(-`+(Xh$n)vJ)&B=HX+6wtu%4@i3Q^~NmcQ9>< z(H56^P}s~tl>;{=WiXm!g7H}g z2haGt!HqlZK#?vKx#z{rCA~3=( zIv80r$Gm74XBQEj?9Whzu0k6j)I7@R2+hqY9(pUFOrCU(sD}kXjt;#ej59w?(qJ;v z6JK!ZU|JT@xRo#?NBZs6vaZ4lYAJ(WvXt5A zUqIP8-c!q#KJ0}(DBEjnY>aR|AJzF~)3mZcBv{qW=iWEn?8{3pipl@;7krU7o<4fi z)$&i$C$;|AlWvhWGTr6WsOrswk`OrsjLFuUY_$wgIdg;WPMyt0N^6C!M?6BfM`5Ga zEpPb-|2+fC=Tj)D3IbVORW^C1mi3P-yju$`~9O~(`>d_-labx80j={>XjNrhd%^NcODVX9)EI8z1#{`Ggx_N6)RdK@ z!A^}X=lA6u%ouj%_hNI?!@Sq`km6<#EeU*Pk-%nzdrrc9*piii)Zr)mga{y1nVZPD$Nd z9cXy7TOR#zx1sg2kzv5&p@OTP2CvMVllG5+RZ9QtK7^v zw}50Cq7GklIPQAxj#v(fEyhN=v(;Az-R2pZ;X_%MN}?Sy^xYDT+DryJuG~(EOK7@S zg<_SmKASJD&{nwG;@m9T+2!odPczY~nb|8TR?|-vV=Z#DNQ0X?kz8@6w}So8RZ`i+}U2qi-7@MJ&;f zDQ`O(HFdkXlp*Og_x?Kaus?hfg_)vA3}awckDhtlPrgXUGrb8-f*02&lHDFI4|m~o zvE?}^LF=ttBR8v}I>Q1c;m;G5xsKM8?cc!>XygbZ{Y+#ohsR?1JZ<*xGnAFZE+aJL zB`%`Hv2<8CR?O2P z_mYUfR$kGvP)uh#seG7;?ZIUEwF`1|igByuuPiLDlbftI<*TvFPl=Cd?M$qQ@MW&( zn`@#(gIThsg1YcDJyRY3BKggo7wEVw#@2#S3 zm8Luch1lMm(|uw)8%K&-T~7Qu+BBVMhB5Nfxf@u@I|o1h2C#O0vT_bb#9U0&xxl1* z$N%pjdKq7k^Dwn_FqYDs7@z%fUzxl|no@SVVUXJ7xiOntQh{u%lg{liFP*m>?O)JG zgLL_F@|hoN-U{vS6w>jYnX*fu5@%sgN@Nr6t()A@VwXR^MARIk>L2n7Z!o<7<7Lp^sg(lAUhWG%%yU`%q zTYI~v!+AQvF@gKoHw+9>K<9lj$s4}9y4~=a`kHlX$!F`voU;{+%JN}r5{X;k%F*lD zOCeo%x$Acu67(pVM(MjW-enEErduILd1nHQ3j9 z8cA@vrO~J>NbtZ`C~e~7`xZTtBVQ~?%Zyir($AN3rOodwRl2k9s%SkO9LRx|97sG_3E8?TYqh7{N+d5 zo)U>o@SpOHLS?0wq=97oW+$gtCT(kX?5}BQ!9+iY6B4i@cH?GA+6$*iBv796+Y}Tm z*&mq#pBrV%{xmt4)YkmLh`X}*pIEHOKKEWf+d4wi-hz^-AI+50vimei4AmYNGDcF9 zNfFFVOSo)}%av1V#Vp7MIjnS_o}}O?ItblS^eKL}5;=f(uAetxL`GWP>}#ziB~fIn zNm#FwZ7mk!7%rzF`wcJK=(sDT`LKr@R`#UcIC2lTW}Kh!G0BPGX0Sq>yqujMTc>Dr z8#^0wNGkYO2?llQldYIue+*(R z(%n2v8hxMZmITAr+TEatQaZWcw{?yS4d%m6O3-&8j!_s(|o_Gh(}GE@m-q2 z(l8Qyw=8DuUb$kutZvZrc6-;g%obE6!DkaZ&*k~q!2$WJJAmM_VPZS>rqrSC_j2JUHF zxj3*9xkuy-@Se|XB$REpR{-8w1 zV}8hlC@06nOZ>W^>uu;Rvl(ozuMfX4vaK6Aa1#nyqpefV*MClv{B~0MhchsU_z&fJ zff%x@<mr42A$%@1vD53+=)l7g!1L-;>t?%N*dwQP@4;Syz`X?m_e(0-I~BuF zS7GK`D;z|7WTin8Br8*W)QQHk`HjGNUSKhHmkf(h;9z31 zi%`SAuE)zic$A{lgPrSB_&3;jYT8=&j$(VvZ8H_o-tS~dJvF;5l~DfegKuBD_MEv3y08=@seUDjeAm!(RXisN3Y>qn6GtL zU5Pu|q4#bio$IbxyfA8K$!2ARd8Zr6Zg78Fo@{^YU&lr~?*?XiHQLkjYT-(BHPtB6poRaO*@6)GMvd!GIVkwrddug_>D_kg9o}Qfz zZvFC1$>0QFuXKYYetr2BrVrRL@z{?c!T%uS)!q!U{k@}<{Jse5W~4gfX|jkNNfo)Uu2 zW%Sfh@A7WL#)=&`)LV??pCd}nVgvJioB0k8E*+;1Gxoj4?h46G&n=tVcw_uUt^wmM zA9qjs4z7yiiJZS=+{@sJ%hi6c{kFG_G``w=Q-uX8J5RcEH#`7%D^|BLUIdX2cB-J> zQjiOPx#n0Sao5fla>3C@e%D1r;;mN{F!uNc+lOX- z#;YC|e|>#%1;+A^h~JL7w`c!QFN^PEi&~$GUMnrD7+J2c8$uVq*!DOxZe)(!TZtpA 
z7tDd^xERrSoEi0jzLfo~rKwp81sm2sw+(gOLvUD*qN|7KpV6IS!b=jDuS}I?H`BvA zTkf1Wc}Ydr{%xSYvdm#R_2}%@KIwBV8lbvZo!<^4a(D7-J@)<7Hk-*Zp_#1$vLG$M z4F{8(<$CVQ%9Jl&ydbc@0o_sp5DkKYjz{ew#Njg-^W|K&mM01%UrlgF#aoG0`vri( zAC3M@Z1~`k`6~Ctl;Io}%M0ZO3pMDuToiR`8fh+R*l%upMADu4TnN_FyN+TBAl}RU zHF^Hx)4iehu(K4(Rj03{yMKg6ZldvViH~&Rk=b!aGpmFdYW=j{6GyFAnggK76Q1S~ zf<_>==xwN>b%t4-BjQ4Jc51uj*m5zKZsjEyW*r8+o0rJIFiD*c=S3v8vpNk$mIf+J zrIERg`|^cEkBysz*7FccFZ<{Fi*$B@fnv3b)(YQLF~oN~F^?Xl7z8R*R4!)v`)e)t z)BoO6P_4*PNKr|aqS6%2I% zPuox>f*06Rl7M(~c7qkxEhR~`K+$qNGCKqFGc5ZM)(vr-Ab2mX^M_iw|2_k)&J1_4 z^-x2DZSE%|Qo&A%3DEqz5}Z^-){MG|*Eh{F?9Q8a`@<5eWUI*}-|Z~L9vx4tOiuFk z$450YR%erZwT~LHXx!(87R`B^=}tNc2k&ezYdcpmXp%LzuAEk^JX5-oCV%o(?gOQV zVr-G@g=`ay;_m(=`&&fa6~_t1Tbo&`21Oc^l|f9!hU487s;a7Bjz^BiX9~SBWsA$K z;{=)#|CU!7zo=J%2Ok0KM#6S8{@EP&?tKESmtP=wa&UM^zt!5_ZU}NmApS$3S32%AYDW9gL}Fv|+z zArVynf_rWy0umLFe5(3oYI^GDxVZOze(7UdBHuQ@Yd26zLYLHNkMj*)6ZbaU;=re{ zMBmU{MxttV?UUXWerKTQS+INsF%`WGauJn5wR$!}IYq0-=dc~NUOCrAoH(b({#s~? zj8RrcVvJc456Dt|gRal$eP!ghecf=}q(v=CaDcL|WGEdX6ilc?ktTEyZ3$I3QOtu4!A>OCERRjNfR9THR!$ z>9!HsUE+HWtSwkDS&a-MU*Z-OYqr-f{^NukKg7MB`>4P@JC{EyiBAlNmwO`*h`t)MIH2jiY4`iPxF^89$ z)DYZNk|3sOQM&nr2KjcLZ`p* zWAP%^f%NCSu&nZjzQJ$$is>OVFxj2**`ODS))prJT zxoxhifByE}&tTGiQLTc4hI-+fuw?|WrasfIluU*TD|URaPN4OQmb(ia%Uye$i(Gqu z{fOfb%8?g7ACb%6ZjZxqf4k`@+}EGG=nKHK&wERU?!6|r^T|jbIA$)paBuR`ALIA3 zk_SsYFZf-gpD2N8aWKul&yvG5pI(thYiH*-h@p%=KSmNe=YbJ*vKW&yYcuUX*dryX z<(HzPYRYENxa44yazubKZCHIdY`LQYG+8@F6$^O5bhA#BSKOI*-+Z&Kd>&w~_yf1*df*O`(MRT1eojlL^Ut#VM2 zRv#Z#Ebk^bk0WYF%AO?wljq6PRHLy8U?qSqi*q-c5apE_85se=KM#==YF4FUcLePW zCVq8m?UbG@OWJ4|CEVOU4lhQRHIIwMb6#PWk_xuT54niWt=WELSi9XS8{o7*8zc8a zcb!f1_-HZ79>2AgYF@&iD3s8h5grCJ#N9_06f`Yla+IZt?BLq0cLNJ2pWLq&A- z{7dN^VJNPsMm3A7V%NGf!sQ>Oq@*QX=x(XJg1Dk8QN>Y+DzdNy*4sB(OrT0YG5cXCRUd1#zob+&*)`& z5d7C^R`0y{0OZ7mpF{IhJ%}t2nKIP6VJxS;TS?lLMveQ4$9~d z91X53)QUXVye^Q`%TY2MGE}6gu;MXMeXuj8dayR4u`+Kb?)Y%=MUff*=kO~5!j6HX zqjWNre7^qXxz+QdZrj!2vq9APaCnIdgXZribi2wMGFNWjtoJ1Yq1_N$E=lxEz%+O( z3aWybol-RjEQo1DmhytS{&EXHO5w@;a%&X`3C-Q;7#KX&Lp@g3ww8wM)k7h9V2VJ; zEZ9TE4Z9bTz(s?}(ycA@qzdv$4Vu&bRi?_W-GK43Y;f-?^n6TCuh&1IJD8~J^^#dk zJj({+r2cH3*4*|eLT#Vm&4#m?d#m?|ld}$^gTuH&jYb=`&zW~$0!O=+{FpGa0Kyct zuiH|k7mN8S8g|!4be(VAECSJy?Q8Qwdq*EXX%y}1*V?&mS6@*;v!J;mlp^7Uy1D-j$m&A8A|g#h&7ufcO-ynkN#ZP|MX?pBxF zUHdgC2+by+VJ#tMSiN}mt84+)4y8PHx`Y1kwB6(Cw~rVkWi~Rx!R}KJ5wrqU8h`8|6J+LFs4p`%PPg|#)V-SxkE7h$VDfB$mo01~DVP+9rj?z#kOPWnYu z9rRZ(UHZEFAD<*4Hq%murw95zi%?*a^iU72G9Tbfhuwa>u&_YjLMG@#&dkn6ER^Bb z*LQ)uoIsQa6S_vOoCJ?Z_zDaBs?_xs1KLem+S*_w=_EblfspY<^^p&?ZpEdnzBW*Q zvvL~;m565&X)O_7;^RAjtneuv4u?Y*pe5XTu=^A5FA~%nDLIFQECzoDN%Z>=v^RkY z@`Mc#aZ2pYg!70-{a0RoBCW_mx&um5TFzsR7NeyU1TBHJQG2vD-d50h+#>EX42cnF zLI3@AiHThh-9M#cynNYC_SC`XsMd&Kc&kHC^EVrj9}t%nz#}JmY^k3(abl?$>2_d` z{+AL@^`dHWS{@9x3ah5FPmpXZ4uIU&4hcLDh)oXu)F%P`v&#AD)U-4n;NKis2dRUX zpncH(CA%d83#5o|-n_}1rg8rE*W!dC zI}467<>RY1X$v$Bs1m^Eb@&DAIqtEo4LqkleP$eM+(0PVL=X!Exi#W@w83JVzh446 z5urZLbS#j%D~RBaSN-?tkUOO`AT1sc5_o&=ckAWl@bCnq5rXXZM&WNBbb*kh=eqD= zc`>%jO7N27lv>%G9jfD_0|VXfga;+M=!U%HHc@)$PDuV0Qj>1Od4NrH3jrq1I3Tl! 
z;2q<>egXLSQ{hy9@beQ*INfYkWCsc+{W;q05H46qOF&%oQ@{QVaCQn;(uU!$0Hxa% z1+fQKR-8a9OVCDe>LtWM3~o!@us5(=hQo!Xpp6SSha@kAy+k0Thb|h>F&{Aj6E%Ce)2w$Fi824pZW?suLoGk<4NFoor%)*ym{x24=_YRq;sFa zb4Py^oSIHWpV=(Dq+k2)6rHf$Ng&IFV6+(&W7SKn=s|$*%tTf$is|<46uEJ)zn7K8 zi%ab21`Lpth#&97l@$ihD=tYOD<#8rB|XgDBn}o{i0!oW8pWE}s<6j|_L<$8b>IPv za&G0@MbQRmllydu`dLEvPqx^RBk^OKsX=`rPj#_S>SLVc2*v}#m%^voHD|u77u*H5 zO+%1T1YD^ zdC<6^>^Cm!uU-1mg?!9QT>9;;Ga|jNMAyAxp8|8WOk9}z?VWY6@+4vHCne(~LUuNc zgkHtLjI8=CKitA!Zm`moyLT0k3mY^>sp{$-x@gK9qM=V zTsxc9gP4(Un)X*JLKP%-8>}dEhb-n`uc=2dARr5xzo4JD&eNql(+1=$bFlGt<(5El z3#yERMPnY@V;19;F%fL41o^<#!)?quJn$d`2{7^cfGBVE;CgJQ@wgjK{D%tLfQ}nS z^5f9EU@izN0~rsGc|W@^Iej2dwL+={YK$cJfSwo#gHP6&=T^zwzh65V^*ol3WzAd% zi1L9YcPc_vrxwgGXkG9aHj%-V1Pw$3(CGz|fpMzil?vQ&;5G_{BBK-f1VqT7CENo1 z?;!3B0R$n8Vq%I~$-~_u9uzJexiVF;+$l1|{0r9& z2DKTw9hi{2D=As6Ww*YyW;AyY5lf;w!?Vbnt#@g982d~|GOR6CiF&o5pgZwK@(cBO=KYaMmWja{UA{ig_IXe1H#V04U`1auWRT$<5!?O;K4Z@{!-T<6H zyiY`sVJtr3iR~h&k#KIBhxq$X!1XSi1$gBd;Yd>RK>Ed{Nqh{r0YNIl8-i@q{)%V? z3+PSKx4~`?0ikZjI)NaOA}lD}kP|tDFLOUUXl-k2Fc9Ef4+@PI+I;304Q*jmT`{j{ zaF6#8&bv|RnSGg2&`$w#JQ*%rb}`cFbKqH&nplm9CAqjlLA5*IVu&X{KcB9+qO|l9 z9i7~pH~$b56NBjnX70B`7I)d$+69&$13Y#&ytNqF`q}^pfPZLLMEd)m^(DLZ5JWw4 zRPy=R`lw<_8CBk0g($0bTonK+v3!LJ+50FG)AbN)mj>Z60SouW_)Zc0aVZb zg~pliTvwE{PbUl)8ZOigS*Y!^!3g44;81enw)fhJ{n0^2v{G(Oqx;-h!sU+^yr#F# zSREdH2}IGy3(tq%NU9H}(WZaKduD6J;5gBXXx_p6BD5F}9W=W-;YNJ{xmU5wo%EG2 z5s{NBz#^AHKZ)8qXUuJS5sMBm^c%`c)%*E7##DY9pf_a~;uBw0k1N1ol!&S*&)6Of z|BNh+eyO~MPm^6&(QDvfaHvR@^j?my zy;)R>GNc|kYWGSEViAnWe9%``W-h2C&s^X%dl}s3xwB+RF9%kBUocmkaWgN{Ah6cM zThtg7cYimvuN%=-*l4|CF?MiI2~be`UUi9PeWgqF)8|>&w*CCIc&2 zEMzq&;WY7CkJABxG${j{q$IMX#CSR1fE2Yh5NeT6nF0>K3;@QRFk8w*lM2@e}b zybr5V7=){^*2OFv2)Sed3lik+7mi*Ob;-k zj6=mnM#lG?`v-^$tp*a!txL-WPk3sWowq*5{Nt%vRj04{Momg5=3WeJSe`ki)GA^* z`exB0&tles#aLkvXPWe)&E4n70Hl|9w&hsvv)#2gH1V73J;b__UPnV)g1BI>Au9~% zj0_iz^%vx3-$Oz*VCI~mTHb3^y@M#fl-c^!an*CbIFXSe++coytaasynuZKr9?M@R zVfOR<{dE;f%Touu*viSd9O#7#9j8}-kH?|mXXwDsJT z;ol%U7zNXl%6YaNZY&j*4EWafr!P!R)V?PK&m5Y?fV6PO$Pq-{y^ePM2<{aMD3P^n zS2A3>KFUcZe`j)ek?8t|1q6Tz0QexvL)5q+rm&Dtz+$ilqNHe1XHH&|uEOW3FB$Oo z^H$uvG-O7us@{Jcp0b4F!}|IDEJ-+_?Xg#%(QGw5`SUyLVolu;COk6sM2#0CoI~eOJLH0)m?Iu^wbf z!$~KLUDBY&I%w37#`l{$QB%hgj=X75zoJk+vTk>N@q_P>LD$C_bUQEiRL}lUkq0S} z&bS1hY5(Uhy{*96>pvL%@}n#M=J>&cy~wy-MelmH2FkyPElLR*none%o?Ll>uil?B zNWQjT7tUun9!SeC1$%HNT|TkKAaAwDZQ8UaEn%!NWN`i;21b!j@UDt#*o;>t48lX| zz@ic)wO^DIrhxh-f^+=w1VgF_xAR5)xz#V$pEAwQS^9KAZhYIR~zB5_F{q zQDQMgxgR;b0eBCw{iOmv%i)c8%@i3O03g%@srmBoxcmBZQ1p3T?|b2ydpx+GU*Hq% z?CphJH~u+8PN%mv)ktvUpkKoc9vGa0#_8Ozue;=83<*ZSnsIr!h^^#*?#~orH7p&v z-fsesfirfmYoEs45)|}98dHrfjIxTog9|jIvg9;ZkU{XqCEI_RpG}X~7CfybN@`s9 zoH-QKt$*ESbkL5*Pft0Pid>6UtA6oP$IF_Q!N+GcJ5GQH-?N{m#twy?FVs42q}GfW zCi_w_Q&uY)KeyD^+C(H!8gjiha-FyqY*lwkvBym;Qn9(;9WmLSI2SW8T$n2p=WiSh`lPE0y%f zM774tWb|3_)S;#L{vCR;H~;kl@ZkH-6}y)lA2M5`L_WhQNbQ1|8YWv7+)2*cw{E?M zv~u_=F1Kus7Ua_(fw&Zx=Gu`KkolWHGBE0l@hHyfHZ?V^gh0y);jd*0m=t2Iks84L z`hkHch<&E#=49mM$-#_@4Gj&YN~HPuoq_m#8qYEF=POsf zo=nEs=NAyZ`zM|i+jkB(61WYh?pXWRn-@=9vj21YdVjW^Yy7u*Wd2ngF;h-OtH9yM zMvJ?UOg;0=rW_9t-}ZZ#@itQ>t7}J=apXDLzJ0lB2RDs=Fj;sX=#Xt(h1k2d_Pvq>*L`K`$zIAJX}gcgVqS-oX88Uv~8|0 z%yq^gX2Kp1R(nZ8#T2|6!9RgOxz=4QVE~YV0xNM%DN4atIUdo*NO!~sV2ZsC1IB|p zn6dhZ>^x}`cu(ael-tozNb46;pP&Yj>)IHT?yBPEk>jI+282()WX0FTn1u&ieF4dS zOELv+2+@A+u1@C>W9)8!mM&hjuNF>qgNs?yQ*7(_?h(UQQ^kq>K zU6k0l0WwyvA!TLPDD)?%7wHB82?Mk!#OUl)q*+AsSHY&DdH1h+Z;)G_V5U|y3gzMpT`dV;Dm#>l+7d*01Er7mobvMU28 zw=A*=jHNdr4oPX)xs09tqfyRJ^o&V(VF@E4J-fJw4TTwr43>0DXQnUw?`yq#-pm{< zQnIwl7B_#>zd+|qxf?};i2D<+T5(?V1ZY0&7S|=gG+fwYhO#Lr6lSXCc7Y(G!Sw)S 
zSXeFOc$_1ztitdh3CKM%zt`QZsDqfjIwAnsG(nE?#EJ7((((87-&uN`HGv$k3HIO| zDOyig#m}i&5kVd8nO9J36A_Ysu+T-ppIX}5Sl2Tg>gwvE`OT?H zM(u*e-axEuc2B?VBp_lcOGkgLf!(Ite28sf1X<9tLNLM~{;@qn|&|THYOVlrO%3$mzmGHoDd5W7o_BQh3 za?8a)tRjKpmJH$dK@bl4Hl2urKLIZ?*I~j!JV154k2!uy=ar_mGCSJ{yG!EJXl`w7 zT~SK-l;C0jwYZ=+GM@bSVqJYb@plm5TaE^C_16*pPIn3JAApTA5^N%`7b(mpKv`*y?77JwG( zaJ%7s>fqQD6l^>X_h7#0CqQ$W@dYE?(kTXQ0uhHIw6Ius90hIvCr!5veIO=Ntn1k( z0wzY7j)G`scfF*DtbF^XU$mr`ro8~lD6T!)aRG=JDV?1!%B5)?9e8lmC%TX8pN|Fk zXu~`;qBp}yj>6DbfylBMneMA*FOCz{eKcdHq))m7dqoQR@>9Oot!$$ElPdY%bbB{5 zxP3%R?0@|9=@KnpH;0B&UTImG_HXIbiHQkP`EyqD9l_{ z|H<_f0<-v|5&Y_^&E7EQM(ne_M8H4>S9pY`A<5@h)@VYyIu0HKZA$b(Yh80@4*bdXGiy>!ccK@WBgR| z_PBRlqfy27HMO~2s&ojE|MXcmKw`7&IK-q;T;=;obpk5@Wj#A2k+_TT5`_*k3LD8=mm;0f+$fCm?Aij$xtFtEIr)}x<-Ts z0V0rVv?*mH)WaV=;;il=Rd{fUePSRB;}2<7RQa33jqd{>^+l>4bFGqh<&MCQ!w4$N z;jl}L?SYN!tzB&=C-Sg5b&@q}A(QRd@^1*#xij5IRUa7-$-Zq~r$8K>C@AR6 zfS8jOu+(}CZ3RfL9Zt=o;TnE=tK0mud~8(cTO9nSy!ZXYA1s$|wuxb;;Kbd-_X*Oh zwRk|_vR>P8JROC^_J~x9XJn-BE^ZeUyG!R7JA1lPdOzn9Wc!~-NBKWd7cEGX2FC-w zOaug>?m$}rAnp{`uagpbYXA_?LZk!u4`(-5uUL6hbRm&}Ka!AVw*^{%~M`mTCh>n?^k;cfZ!hIdbHHR=7$iCDQ^ zcPcJP(fIlAPl_#%-dn~U|7+Pu@wC`Tj#%%lQiU?N824Rgauh?(T0N)B|RB zf0F*vA&% z)=qx-om2noQ~`j)4Sk)#dl#-7ItW)hv97<{x$D_u7ReD`kF3tVDZ+Z$nvoGiMWk?V zx4TPYrYTd47W)elN;EEDag2IFroSp9wD&u;%flifWXD`L6IKQmiWz2NGu5I@R}6cv#Tr&i~Vvy7m%wJFDD>ke0g}Vg*=l5z< zI>r6}?*MUus@A_>sh=Wa-&ItICcdg%8irc8xBrK_6sS=7`+pZIVGF7Px$8V=?gMx&$uyYo zOgRQ9LZ5g5C85uAo1d+vyB_pV6JobwQOM-B$+n@c6Y>Ciw=#gVot>9sqU=*_P zhKMdKA;HqTVlLKvex!s88V4OjeB_`-aeo_U0<{AWPrK+1%4r0Y6Us&)vTLQDqQMGa z03hv101uk`sc2|gKt8M#tUyquCP8Tm3|*eXf)WH3ySuyY`^zQq5+1R&@6YJgs@}M& zZf3>`Ca_4ndb3}%Y#}YKjXyn08~##3Z0#gO;-Cur({c7aH!;@ zQWQ~$7Ype>Cmtx?zeFnVBd=jxbE!$k;J2D#uWk(%D3x+8*2Kq3hWS2P+05FgQ=W&QBhm zz4_r16Sc)PEhWrNfA@ypzdvewC18K&NeX-Yq4B)v4oon$4Gom(ms7uw{A95db#DH> zvHYW#jD7;~;0L5a!kT`g(N$WIk5{sA&SKW>AXTOvPTKrB%u^kY-i+(E`G8 zY9MY1+{2)>oeu(KAj+1pZ6^{ZXe}RAX+2!Xf^^59OG``R5=Vn-8bL1&>ZkyvHH}p` z6ARf)5H$u~O$PniGKLD5{llYu+|p>-OT_Z@bh!Ds13-W#=jMFDgg{#tZmIVMD70yn z+7K%!D3}f8WUs5g%<6lJ;0l%rPgT-=SS=y+SZHj z{HK8BI9&)V z#qTO4{uFM71<0cFL3Ijesm0zlac)vwgOgk zKE!e@iFa5S z1r#&V)bb2+`=Ww^NFg^>Y+Ej^s9;NV#A|?{I&`fl>I`J5hC&+-N=277BJf4B%!~Y) z_2v^-u3YKuET=ktE@(A60q719pV|LM)mK1Oxo%yf*dTg9rR9L2A|>5m2Q4C@ND4?x zH`_*}Bm|LC0TB@pkd#m<2|++W8tLxNf4=A5^L^ic$2jAR;XUsC?)O>GiaF<6>(`~0 zBHT*p3-et+;rigTK3js&J*tpHQk5&CJlkvx#0%nid*!Ij7p!ep6z@mi*E)5_qIJU$9;K5Qkvp={`HYD zH$y|iw~7&u3)W{>jDQB6N>(2g&m+b2maU1*MMb3bHN9AQvRp(2ChS3P09AlxdGpNv zIAzv{&g4n8GWzo^$5yD$?qZ$sGvC*nvv5+WOCQe9PAN%Zz3zq}ElV^tGD1sY(BFbn`6e+iWR(o7X! zB9Kv~B2d2si;E?WJ(H6G=oJQ;fs$_q?BiQW6F}eE`ObkjWuvamjYU~}IN7O8v&I*K zCRAi7iP(jN4q(E)dWv?UUxvb>_=au@6Ff2`YLW%&1Oh22@k z))C1AtPj7_%5?PY`!dg&vO;yfFNi7P9%+5W!RWK&c&m6;_Xlj12J2UICo&(SqpQ|e z7g^)ERa%d2?Nnh@|%(?r^76Kc`8D740>F2*;gE1&5lbxd~IO)u;z-zVC!7ic4IgGlD#6`$AvE{PT z5f)+l{svhmY14z_tH)ps^Yu@+UpE{*d2&B=I?acRD=Pstn+;XJonKfu3We2O0i(*k z-+sIALOpG0YN;S~W(4uLa0E;!znX`bNzmYbV9Bumn1Ld`ht#`Wt0_CqRiVZ45@OQ1YQ z1q6by8CoC9bOR1c3ouP0>~m{=B!l^;Xy<+dVg4i_U=V*8DNBlmVbo*eDt0Ssf`!A& zJeRDV0lE}5^+;UudsgdiGZm#X4$--4CzY7oK;tjB;lt3*2#1NZ9;k$`Nnz+K) zD9c$aL;M+dP*cF_C`Jeh64x~dqN4U(JJZhm+!`nU@AOEvt7Bwi@Wm%>dd|5nj3rJ! 
zBVyfuJaRfuIrS2Zy3$c8jOVyczVqYWxTn6peS~;d<4~UiVV-a?acyCa`JZh1w!!OS zbM4gp{C21#&s^>%A<V6~oV>gs?45<&fq-Lpamh4V;dH`5o>Ct^Tm#BoC8_b9 z+6l5f2TzosZn3Y_5Qc3a!GqXD4#I4xV&YZf?%U(X zk3VRp-3KPaRW7qOg(V%6weT7(X@)xiY5n-PtqBlrJ&*IvBy}FNoSZv#itOvxuS=_| zgAw&wAjj|Nlq-A4aeUrNPEIfFDui6p2zjjS?)ee8=LW zo^RkLNA*rmPm8DjG{Gw2n!@&cUY(pQ(93`hwtqn+np#|}ot`df<8YY1ULR6>x$&(c zBY`knPd1+HETCkT5UMpc*Uhd49em+u~uw$et zImx0ZC-gj$-$eCf2J|qTimlsEL-PQe)oi6QGLM=4diNFKf;D*3tNAZFx+l1AaqOcF zM<*{Y^U{f0F#*sKFVhEILykV~UYvWN%BgI9mUOdi7v;PM>cRDB3ngt!nfES}2c&_SV-Q}e05 zkk7QONUoKUR{Wr~&K)PG=FufuUTr=6wicP%-@ji#2!S#wj{CV1_5Ylwb3~u!)-D8N zC~Pi;XX?CDc^x?-A)nDb`YSP*tK!F}-gv(w3AY?A5%FK0^QIP;=_H-L;1gKr=g%Y_ zG%+~Y-#=f!fLg99W1DBIt=hoA0DacwReDmx%(c{X(x#_z^^4b1rU>7#ZKRmo8fEgI zr%a7Ubq*X?A;Sd~V)#Kac&+9^Vlf&#digyxCDTw=s9+hew)f?K4xlfe?T~orTcn4Z z+h~qx-I%-?W=Nv9w6NH1XABJ9%tX_p_8I%R z`Qqkh$J3Ryo_*f~het9KGL3&Q#Mi8rc+)so8E_muifND`e;PPbt2#S%S_<*Rk9VuE zu*6Wd4SvEq@m%_|^Ou=0HZ>a(x8a@2J@^KQAhaNt@!H2yQ^&8Hzu^n&=yly~GTXUT zrTpCbczHooqd3^ONtJ(`*Ex}K#y@W+_oH^v)vA{?X&yUZ9m6Ew{W&l1gmA31v~(i8 zhLVGymzP)i%`}5jrcqHY?8y#528?ofX=MeWF4$0+7D8Dr!LX>vNZa_bLT5?TZ#Ltt z!k0KpXIe~Rj9M39_+zb^@|?L1P$piwN~4pkZnrpb6Z+1EsIW$@>BP%JpNK28)`=3v z8}>935)5@5OOV-oS>(BHp&j_02Z!$qs~`xr21bK_Y^-P8_{JsUZVJ|OjzWB! zGq$B{n}XG6BDk>&`)Rtza6LzSWKG1Ly?d{jwK3AsspQ)BGYyL-RaaLZzxtLk?pKlw z!hrugWKSfnHY7b~iX109lYhyQ8oB>`G8Y6ObFoE9lFJikL!@h})QhNhw>&@;j>wmNJ`bc1DVik$(tUwToeC?)h7=lApVMZd!SIBs9bTi^E^9raXDpoA{8 z7pWB)Jf=5ZPX0incck9!+m{fhbL@wybxG@)?30iFjb+m{-_vfiA#Hqv)#bzqv#`{H z{u*jbYGz#g3_{3gzyj}nCjTMTDSYzu!orSjM9)vR>)0Cderxp)4wfT3{JX#GocbZM zgOfWluP3vU6crUsJM!k`+nV29A;&&SWOGQGzg1OHpf$>MapDJZQ@jsCw1*tv>IY4J zWKaB}qUCjUfnUCSp+v`1Ssp58@rMIX=Ij0ATtF_$v&HeS{rC3bi~q)T2j_ zvL8KaJMxRuXzx8ls|@UF5A$y;B9ZWPWhj59XJ&e017twG+()I%smwzRJGSnWiYoOw z-nvr4%5W*#{F5+pDJW3qgxOqCP8nGYT; zMM2Q{t_Y4elutK>{WQ{69^p@$XO*I*E+e)Q(6+PD<)h@}5A2Ip6EmjdLymt%K6P9* zZdY8$H|=B9o6%+iE1Z6_N9b8x{}l=!Qn&%Y*a=BRd?d~o@8}-WY*V{ z4?UMVkIDfxc&yT1_i4475!3b%hs45#nD z55885(S4Xa31`!_H0pSO)Zt(6M8`%;fBS02ckxq6jRGb$OdK}Wrlu8Wb5g`bOz&t$ z3Z#|q%#S!uM(`ydp8C|@wK&d?T=QHRKE^TJiIQ|>VLW0yJRAQF3ZHXYe6LR3+gkln zm;#Bcw)bk$ais8FtodpnQi8r@XJ^;e_ml$IkaAzzuUaYbocHE+@ldD*k>xnA%yHsf z;-c)vk2);ndB>=27*Mknp}6ySoQBzwqhgiybK2Wm;68if3a=T-E{fm<4 z)|nmFmTYkM6z^@_PA70{7a<{spA@fOY}<$mWR{J3YB14tMZs$D%ZHkkB7S#KA)PK( zqU!f~o*~%av@87O{Kb=9Y2*tn14YfBwHJUsGMKvDm}x6-x8+QYAK z)|MYS^JMMajPD?azqL|To5NWbpUqH4N|Mezd!JhelZIzVY*H~D9o zia0?Zw97W&J#%w>4h{+;l6DkxSLFtOx3$e877W~a*Py8dPoem%@pUpcAJy>O(FNh*OCi&s zKWR|270zj^cJ{(?6s0rYMUkA9l@)%N@A{|X=qC7~s>@b@8zKn)^C>N+H2pXJ@>b+> zK0dQ+y6crv5MPbcO@F>gDDJg+AQ4AQ}3jB}^p3)atCQ6n#RPG_L^TDH!F2w#pu!)p(@`sw6 zOK@o~B_$!J0Fc&LmAQ-coM%0p04-OtDo9sLufDc604s*| z^z?{d3Tr=bLhT96A8vd1zI}tGW-^E%M0t$s(SPAZi}9^nCGW1p5?hlX*?Xp1yMccZ z#POL&d#>c%(ceSM>w6)VFnYLfc;sg(EJ(p1Mk0vLIeQLN#zk}t14BVP|2`9P-!m!f zQptqA#c%Su5PM|-!bQRcddmZJT?@m*>Gv`SUDA+UwR{#Csqkpe!5oLtLokbB z|I(6@xTI^*;b>(r`uyOv>GEE-p5Z_e)7hVX2(wxMxpqFTpdy{$e2$ zMLPW_^HA{*PV7cl1BetU;UK4VYu4c=;t7{VO83yvC7hNi;Mfz-Pm~)R#6x%Ph__Ij zXTJaMR|Tf_Yi`{HybZShARk{UWq_oZm`c904GKfpZ&`IUCA>693%E51e`yXJD1qR$ z;YUsLZ6N|Q0vLG(cponCdi{A`yXz471AHI+9(bgrynK|2PvsRE5ySyf{%&c}2MU17 zq|Pj}q_p%e8XEMH97mmr>@ePqffei`9Lo$u+YRyZv@b<0U&cT3L_+Xn?UWHase!PA zKxq!6t#|jIxg~#^hEses#jSUnC%zG20Fu`u!^Z8%k-8zrhsX?mdk+jkYeJBln{wa2 zeKpb2Rj|sxja%%3?TnUP%UIp6?bG2+fsdU=l=qN1Wu0#e`txF>Hvh~$PyCxCwRqwnHRUqvth z`Jv7vo9Ha4g8Zu--pArBjxr8p^;jQ583q*$)@B{wT|qa7-kLq2qBtVPgB7aSTruzd4l6M{W`eZ<8gKs{9d zSB?7+6HXdc9zZimOl;ZN~A0CsZg zt2xIM_})sC8qZzPhj5gT<0AnXw0GYFCc%a8i2G&qqUq9owTKfducm(T@o?@aI9^zgyqH2Yju?#=l`Mz@euSYA1sC3*h3a>F7vZv-(Uf=)wplqv3 
zoqkPDyWk&EtJg=D*_AGL7ky68r#6S*f`j7V<)yj{g$An5I4R}M1T?xjaF7>Kp23zjYE=M&fY&RMxc)z(BrhF$gO z?^|-IOa0f2vYx}Gf7{T^8CcaG+NQ}UW`uEqhXlSspt8VF z9_-v#4-|`g5jf0u1m)+=$=5UAfE>*|VADk8Ar`++B5k@4`T$k#8ZN}zRPnwS;o-x= zBGXo=0t`@qz*C@rxQBTt@M;OxU4lkP)hRBND#idA%mhQH597Y%EsfG&MukO=AVZi^E@23V__|9{h0 z-P(ot4IK1Dsw`R|r4n6Y4j$xEkFy)BA^hH5X}1Lxz}=Lio+h_&2jLX9s-)k#EQ2ej z?I)U~cH$t=Dt(+JXtMnb43$XJP+^1>o~4<5(&f48w>&!-91;>Y=Hutrj}MHLb_+c$ z?Mk>M%DoM;?=Uo{6iD!H3W}=>*g^W|KeZb{9Ro2zv?>sIjEEqJ0+yPZT42l^ZyV@= z4}Kk2IeNaqXRrqb238h=H{sRE|BgFEe^QSOFHK`bu3-9w`iMbYVlyQkOb!TW!%bRF z4sQS>1l|b5?pIrb_?fCW{%b~_OK1hri zknhY!Qv-10XDCn54Vvpb%?493#FbuKXtRE&_CBm^8htK-&z_y~rsjAdZg&(Rf*nm( zczqc*y%%Jbx*ynAtRkg4i8~I{ zG6E7Oz-I{dQ)aYM5z)yAz6}ALf`Y= zaxCC7a|?{lyO4~L`uHk&!^`Vdrd@^7 zxW_xc{HL_4%P^_6SKb-t&vN*%|LfPRAOw`M)z6^^5l(U+ZZHvq5IqbH49+0zZV>|) zl4tHdJ#fa18+D{Q^*l155S|;M2z1npjH{K5{|jsa6^PT|HCP2@e!LB}{TB@B$u@N5 zF{%uCjdQ6SMVeRiX&o6^NIe4qZsJzv9_}4I3B&`iL**!*Z3f_WK^gl@HUQq~p}Hev zu5&|yye|9nu)hiU*cQlnyHP__reUH54cX3}JEIsc1BNHMd*y;8GsmmqLR(rg&1s48 z3b?#-^*ZsY!ylH3c15!H>d7yOk3xQcyk*O--Mfj3K?|FGQ@lfJf1|$1MHX$A$)GCZ zg_OEdo%El>elnJ%_2zG3`2)Y4=nxRB4~U4q`IsA6SiC=XC|+>t?3ptmK{U{-yXv;j zUxP=y3-t~e9RDkD_AeqLOnxsQG z9YrMFmK6BB$CmXI%9bq1W-ndZMFvkn zkV@?4^tq++K{F8XpaESFAhgJyJblUr((5iqw<$T%)G z7O+Rq@)r=|-*}xYlwTm_=(DhZ>T7W^xvFtf7I-h^UU%k1qf^d*2HDOz#@{?ClEJ8L zx|qN_6DcAd-Qr>z5YK!}5G1-GQXnDI`g7ryEI{R8uS9N#6ogewOw+_21V1+qkHbbr zTH0$sS85+Lk?99(_{W4oJOuLg4V1Mg$2bjLU0l)}$FHL^{0O+Ypk$f<0smNSucj@k z9dJ{Y+bFEkks#lM&#iQXJDX*T7F93u^Z+J8hJ$l^OfiAkDAza}u4GwgJw3n~G-zY2 z0U0tv90xYeAOu50ZY1*2na7l54cB!MWt@F$T$FT7YbWT+t=sn?lg&^tX$Ba`6;p)A zHR6m0?S2KI2~h2QEHZugTZ9}K790%XPW^-CSEQTHlLd<*(-_?V?)PR^>iRX{Af@=E z%f#mIP*sh^7@p;Px?%Flu-wB!e#XF27g}++g zzWwf3)Z)}sH%?h%aJ;KXS1Xn^mXVwfoC4J5uaQs?lp$Q@nU~ZuuzVql-}~Z+^{*ZbC@H!8Ih#KbN3_%nr`iCYhz4a zSY}bAr46$Pi~UJ^UX|R7myC--~w`TX!4$RV1~E*%(3r5%=i*pI<+;vl~DxSxH$L zP$}}eP;;S4vlouT?*WRoB@!=~Z+8V$foOGSL4S*j7zq;`Z zvK&w>UK*iq2_hHHnjLaGv@KjgB81|AK&#O*is1lz4jli28YIMy&}Ti|O+(C`0M15? 
zXpone2e6?4LK85l>Y;DUVt4fNrcp$dK>4$UPS#TjRO>Mjkx(Q|4UqoeH-3XoEBvJX zR*FO1ufV9@&b&^@Hh_r1efu&9pYZbB`rmiv>%6^u^gn*5DSC6~-Bj66%}&SJ!fr{B z%d+=%Ns2BfH{}(q7ZrUpO+7O2FcQ2Si9H=XeZPx)>S*bxTmcnpx_rE9npxusf-(td zfcS+C>VYFij=(#AA;2nyZ~IWb!ISk24ejC-EmaL(gfQ%bR>ouK{qqHOaB*=#D-|1p z83wbxc%uWzv5_pQ!Y3g5`uNDNH2zwh8!jfQ!3b-hS^uG7KunP!KV>$03Zp~9DSkvca{tw1}6jy@O z)A4+mat&pLQCm}!NNP{TU#7wSrm5uM*V&34mx(+bHv#4dKJk);1vesBy_04S55YB{ zErp7+NHF|LqMAGUvWS)|?az*W_*y++(SSv2F*snco*(`RP##Z?-5?cc(Jvq_f%=WG zw_SbCr>o=5IX;*Icn)R26Zk7|H;TZ4A^pOCqx$;Hc1lW0fw5G`a`D?C8tJe~NDLHy(0tc_lNQNDCZB@3d8$W-I^388g=8#&&&ZmLTF2*4f&O)acX@bW6P zcrJ{Evrj#udM@`o#cDkXq~~44HWN1{ymwggi}GM+n>>wk;thcftf4M>|8d@fI|v5!0CFI|tVi<4g%6y?*2_@9yuf8mMK z#kay!jKXFQU+cg1n%xe65EWhu-l(js?1D9ae+X`^iCbCv|Mz#+jx^9O!Kuc(%nmHO zKF*Vi|5j4M-cg7GGvjvM8~1g!EvfG;dH;Q$zKhIBTK^5qkslP(f6Dclgdd&SqM)cK zcyqsitRT>bZ8`sb!*rpxLo7ai%oL(SGu$ zT-WPe(@n>38(ca~b9wroSoy!C{Fz%_9xtkbUHqIOhJ-}&pasGPda8m|fymsXf zN8#FWT>0+_f0dM-8~sObB^dOD_TNJ+xxsprb@$yfZ!fG+_4_3@dM~Xm8@sOcbzRiV zxDXwF&HaGUGi4t{WQ9SJndJ~lPS`YXXQ4zkP|$+-C5K1 z+x;(+aZhMzcfEe1&4xeu_tWza-e+rEp<>lgOIeX&mKj-?^sg=JmYI|4ko(i2-Zg!nZRRV*4yrrCbAEAUDC|NW#VQVXX$R(`#^<|S`6m}4`(2jmo} z7qWqxmt_y!oak2@v(?_K`ZXN$Pa7HJXkVNz%YU~0Vy%vPt6AoCe-`oT8q0{>NxQJ$ zzv;%ted^*|Wa3Q+fxTF(m+Z!?T#HgLXuPh zPZwXjlI?kfti2{izxU!074oRK^tp}aH#Ynw< z5+#ixVftU5)!sQi@a|IsV8V{v=Z`XqcK$6S_Tmru>hba$+2ij7{VPizqTP*ZvoHXQ zPkUFp50{ml(B`@}(0lJ}NRD97EyK6M`n46PTZ~1olVOike05V|_k>XP*R+LU~h?CTLso zJ%zbMEMB9pyOKXyJ*lK;sP97q%cd91|1{bD-1eY0zbIcE zQ_Nf=9fwO-rU{FL)42De=LZirr%cWEY?q>G&sbtFccdVV--Y6bwHNoZZ2Dez7~yxHqy60`d1rmZXuy4wnV-Mz!cil!meVR$*A(VNpF@%2={o<#Barb(Xr0L?RO^%R?}{kO7v?+x$ZTs z%VSPvXkrr)dE6p+G{daHF=`|8I#1@Ph+c>F=aT{gTVIv!bbt(}uh}69*zwyJu{3Q9 z%w^Rd$jBPa8;^2|dz*H7KDv-x)-f(MF6ne_B~pC;2E>J&{i=_7+^F;o%=X-kDj<(= z97{O8S0h@A3PY|G-9-lE+EKy@C0pb zhD$Mn`g;BgmbZ#;JZ4wi1+Cc0njTrwNqQQ*9pt@{dF7bPzB@x-bd22ka;7E9<-<6u zE_S$~p*&UgiT~5;kDHz`RM&zQ6B$I!6D<}u`mS=g#Jtz;!x_o<-kw7-oX#I-# z8*%TL9Y#dm4>xzEvS?gSbl5O7coQyZcVeWZglSj<)aMbwssAZJr`73NzpHn#_B}0- z%uaI}6p32zKan$2)EIH+M6`{E!yq+B;)@bNwY4L-gjNUVwTboiFqA5qb_SyDLeXZ_&>gO?@Q=iVG8uEY?W7wLlYrE9# ziM`6 zWoqgk$alVEy<^TC6@9IkV5OFNO|(P&%9TL(#l~G^tqv2cyWRDA2G8!2cL54El&z_xKY+=8gXT9jgOLL?$||D`eK*Oi4GRl~wDeWvd%6umvqhoDCx0 zX7fekeu7F{=uOgPpPxQ@dD?e%#p!+tZI{C9`{wZqZ7+>J-nMQkuh5a;8ZYs?(py+F zm-v{r&A2opCr2Sb{D9r9#<%0`pGSZ2${);yk%8*q->1>by`u;r<^MBSfQt6Ujl_22Q~!&WAf zQ`c=wg+?eBcW#$)?aw8g|DR3u=^svAv%PZ_W9=(k0mWyu1U4@oTF;kgOM@@GF=O#p z8*1@sVs;z;If?N#%@ZbIL;xs1Km|Q`+uhUyvr)jt5&RgnQQ%+1o78PxZj*THF0OsK zGlDk#m1&aW9KDz?Z^6pdLW6_x;}x-9^n&kbm5+Rzr?YYG%$8oZ?BLTbu#a9c39lWi zjF8&cE_8pBi#N8%3U=t%5%I0Uu0PD?2X8(cM_vl zO`=&I$=Dcc+eiw^=j&JX$5%Btb>DfJX_XpM=fcGBtvPmwZC`hl{K0$*-e}2+njhx5 zF1_!$cN{N~9Jz9iEr6|WHtL|?OA)V->yrt}OjRF$V3If4aSKEFT!oRo!8kKvGfs+< z=;{H3kBOV=mlQ62pUyd7REQMwM zT6#ql_iKRm%o7dpmwe(HcHMTd#TWe_@5+N-+O5gtV5Uz4b z+jvh)Fn7daLHd*H*)rrU9(`M}i&F2G#?>XWO712r*^>7>JCFCzxa()VailsRB-(7( zXU(^>yyw@un=E%xVBC>(>xm!VY%iivJ-j~oGhQRQ2oBGkqIFz%h^D&fc2>@EO}yb~w@2I6+CU1{l8E8L zo-G|3Qa_#Omd|(vJW-`7=b-DN!W(W(BgnEkldLKf^+MD>%1Pylg zRs1%SOVGlpp;zKGANL{i8*|GmJHMX3CsoO=wUMWwpg7zzNJB={8Si6gxi*y)s*r!~ zTVJmq=eqfpo9x(z%>97HDEsn-$Yy`HklD zkzgr_p z$o>d_TYdfTQp+GE{rt7ZmR#yCG5ryav!gF#=p~HvRpmtlkqe@lB!4BqUzaGk@xGUg zzwU}0^Q?hV%>g04kcp*?98E;B+WP9efB@13do}jB>TO&qDvK8$Tu2_MJF5$jF3ssq z^(Urh7n0RiM7UJQo>x9Ju}NF%T>M&7NGEdssj=%ibKb&0dYcv*70d7Q#m32aZ<{H! 
zojRgh-d)RPT_}yTZ!MAaIdZ(46;C53cBt0=N>fS6HtL>URt`?RTNVmxeOC`Jul8uC zf2#TBW|PMMaNBlFwD*00sLRdmu%dA3iH7mwK|Jqp!UE84w+cU@3Dhv;MJyLn4cRDgs-pEx)!JZIB0{G~crkGmn zhbAF}_bOE^?Z}~v7td2O(VeojrEU_*`rSfX^{KX`gk)k=R$6*I#;01wx^j?>^&T56 zT`A**p?Z(!kW&l_47uE)GMp=+ZqqS)swgk*}~}AQn~=qtra9EUVO?z!DSo z_lJg$oD`&%I3^#`q4LMi<;6dmQ(OsDW;JSBUCkU37)h!amfOB1QgU82xQK4Y7tw}6 zqC5|$Dd+Sa=(xyx%XMAxde+za5$QKD4?!|a6EOXIP9$x*SRud&9r+NEibThAARWHNy_v$;pbc|E#1~BdVf~EKTBI;>2 z{iT#y%>k)q8AI|(=tE^14GVKp%OQ&+JBB88z`*PBCp5?tAd0)d} z%iUD&=PK%rvG(P`l5hJf{~Lk!KJxO|*+5&&xV0 z#Qvn-i0fldvovwo6SOsCC*3L-7=&w=9)sK;x;oor*^=YI(&fqj^zOxd%e{*p!~WUh zKf_V&tGqh(Rm!L0+qWR{i%JCi0VjShV39d0eAID)>UZHu(^raEWjWdy*3p)|{}6Tc zze-a0@! z^wupPBJ1`K<*MeNA1|CYFl^9x`H@eZ5*Qb6Zd7kjTdsXMSjcdp^_|x{QLCO>q?ZQZ|r{}*C>)l~Nu^dJyDksz)TM$MsqYcC?PK%Qmv zR~y=Lz0e3t{uBLqP$!h(FHFzPq1mo3&%qqLz^e=iH^{yLV3%STE9PH~jgO;AEhvD-*H`?B;&Gzw|IlM{VpXi{^ zXx3!&#?C68Zuukg56*HN-+#Aw#xu*+he%Y$04xzou?CEtI)U^_n+vbY#pE^OtZ9ShkE zH-}!Twp4kxq1O~QulitKO|;lIqwI{e9Or9qCx?#|?et~%{ouvWabI7*%k{Ux8x72L zk)~^VPkJspOv;mvL9~2F{<0$@J!3hU@M_rco!udFBkUOHMv!SBndV|GtZWsdBgDOcjceYTddSRqdF5A*Txiq!BGf7zvOK!QA(JL{1AR94x3UC6KDN^Lid<%mFC`w+P0Vs^hVT=qeYPQ`H5?lcOr3ff+5Dy)@A~+Hwf88Wg4b-5b{BE%KktT z4hAW!`>#X1^Fg;L8Y&l+7AHtrup0%!Mu(7Xo|v49>q!0uvm2n_nx6V|wm)*bEt?Q4 zU=YAt_Gv!d!fKF_3ZN_VG&P{;D=3f#rQH8upA340P5@hb5;je`qui3Bo{^WM-86ZWWT4QwI3{^F z+B*Mi7L&M`>m8OZ#g!LrUr#G6b-8_L&y%g#_9c~z_G-MV<;q^B$L2ClYloB0M}#;= zxqUR&`u1V5OEg(fzMDlnr#$D*ozaDxz8p6o(q6t*@)?W?^tH9!e;MBHkkP-;QDe!t zG~8CvPDRy|5%5@m)ZfHT^NouXUX^sywJu&mH{H0(>I2JXvuP=G=T(Fi{VhnFTHN>} z3i>iD?ShiZ(piijP`Wm=;i&R=j4$1#bIx>V9?qYxQE7D?dYqXl+$Qotzpj1w+=Yj; z!|%RwYZX;#CM{kmr<&{?UzCX(HDCXjkv-TQ9pyehyt(T$OJz&LF>%-1d=3mkiu{jv z8tFeh#Al?=c3k}Vdo@uj-OslC6(Z3l%Clx+I-guK1;y>edtDZUGTwStjK4aiv^jq) zBf-;1z-N6eya>Zio$;$V)!zsmDOY@ZrvLrYYP-S0r_|q{XGIq#TWea4D26vyZW#A& z-??npcWQsE?CMJf2S(e)8cS_$YPsKY?!C-P8!iuxliRLlSdq84Q|xz?4bj!$HEyZR zI7vrY9W$NGe{Zw*TVqLiX_|f`b#DhvjK?fZ|B1qF-lGh}mX5Lj1`CBwQM1mMo|9C^ zoTBq1#T;L-6f)Hoe0I`2IqN1D$`jKkvFqN3Ogq&cz3Ra()mSC+iETRrC6#C5ThkkW)O(wWBdh@}n1g3^|~FL}gy6uE+dAw136}#X$3{|gHxtt5ZXls>K8MD~P&SwR7hLimy=YQRC5krSO#UT;XL5KA{B#X9Pry>?^ zx=a?^lo|TH%r4__I=o|X{LJ@)&lcA|N6J|574b>JQeA-18EM;|CPl2$q{g2cwhc+b z6vf)LAIpLyyYb5N!9fByL_dgF$!h8y?EW&rtUaUfLT|cebu;!xjMwh28tGL7OUvTy z7*ciVtVyt2Trtn&FwVXEbx2Q!4B>PWfm^%#__-^bLskF2^Uq5R5*X}Iz6_g(hkouW4DM@))ya`)lyz76xZ+jvtk=;EJ)n=ic z;VarnZ{{Abfi%M=Eltd*qEb|lmOf_k{`sfVEuw-hsU*=2ahJ$=+1j`&^Yv=E;fG$2 z_{}R^D>dG4)l+SganjKkNNqQ$m$|XPM7g3zxfv-@GhDRMx6&y8uwx&GdDd8;Ja=-) zdf>h{Gkp$%tA%2Yw^=qLrKr5Sb_N&57)EM0ys_LV7s`2Y)5;Q~g7m*CcX5P&LHC47 zP5s`GWEFbVWsiTxyauV)&qaeB52ImW34rfM*btZ?J*(cgXO98!ucL!lck=Z`J`0*a zA!xew?cr{0@Sw*BjWnmmU=Lecwh1H~#OxB-6~B$aw=>5uw}6mLrWUR|MX4r$xsH%n zZuMpnf(U}Au~`_}#=^}FH?$3xplL@de@UOY6_^{EQhJBOReCL<7$PP`K-({X_6TCj z0g}dwmyPaQB$gK;O`nVB^N=3o3-o1o^Y_X=(zS2z<+O31neL_;S_MhH;TXF+$n1_5)=Lr^E|)g1{`nMxtV(%lGXm?ecw%W zdJE+;y)tILu_Oho=3F#sm;JH2=s8s2<-vZ{>T6?%yNzZTJkDy3$|Y{2Ig_3hin_Wh zWtnvEn^P_}S+pqXq)vTY{p2b9XFiAd!1sQN!Ri%WwYVI9mrbF=`PN$|y7noijrT50 zx-8}8uXlPG_w)EXKdzY3l)Cj!#Vpf?H_L|9`%7s-Wqk!t@|{JR)PHN-^ILe;_93&-G}9 z0t)I8w0eHMRnDUt_oSj1{q0yW(xGTlO9)g<8>o#7Iv^QQjSouMyO>ixy3h=PrL*+c zue-B-O)7#)bF6XhL=l0{QH8)6Ns^62qh${CHm0vny?&!DMLKjhBSBg9l}g~T=-{Ru zScsLN#MF-%Dw*{`wCRbHy?syrO9z+raaOhWEB4uyU3o6WInwT@Djn)Q4&PXt?J}#& zA{Q)-I(O@wm}FMUV23`7K1!jQ9(XLVEP=1Fwhb;hQ^O>-|Y^tJQFiTdU!jeAvu+ z{7cu;mhZ5W+Zmr#(+A0yI%h;~oD}CsT_=5+A>E@isyuHUGNZY@(jpns{t6($G>TMm!u;0bTMlYAN=eja$hJ41h4RpEW zIB@+5o5XO47mcm~PcCV4TVg^5L_*JGN#A^5q1EEFfogkA&4g}wptPG~??Nv{H2A)N z#B$mJ*Xz8zyvCUm<>-Mxrw|sRob0^z?8rqbLRk%sD<*zS6fs;yLRsBt`a48gGHdHc 
zmySKx3+wCZS+BgiZ4+wQ{^Mzwe@&>?#d>;I`D`mHW$jrcYvg zpOi$!>&B$0s|qs`Min!usei6$Wf&z~_Q_hyGUYkFWrJx^>{RlKUrUya-?Vc>$;948{+DrOqI=)AWYHXlqnY3%~ZvGm!&4ymG-omAOq2m(q zZ$xLi=oS6ftGei8N@{ANM>0q4?o28HNd28=v`#~t?uV|Iw#UL{<)D@>r>*xMF3{nE zDa%OM6(EjX=6$;QF5fCqIfov33nk;-@_P>@HKK0XUd?kq|81hnWH#?0hNQ?`TMKaA zpZ!(u(t?9ft`66anS+eJw6x`r zMiIL?rDiA`_tgO+8buWc<-sWT`WKg!c8)0kUZj3jru?d`)E@gmFXKJte( z4Ec+KA*HZSnnx3TO$K!*Pp4Jox;7BKUgap(6c6RYkBY5cjQLa zm@mU-zo3p^E?xp%@M4E9$%pkw8N;OWlU@6IX9K)4NX-i-BcJvD7%)lZ#|#Y}Ptnc2 zv{B)zW>AAN+DSYP3H z78-ULEmk9W=Q!ubzf-6yMX>3yi%`cWRg-2>YNiDrmZGBQ%vv4y%HX~4uYwTn_psC(mCd%t$*cb+PF>3SyvZM^0`DCI`uG1pkRpG zZbry#**oMxIpN)+s@l?>oZZU*<1*vIvF;_@PoU9WAVY7V1EhGQsSdMD6xc%yYT&t&e+v(vyF+03I?Dy(`&% z=DBw^D8r;`=}qJZao~f|)~5_qX9_$!mPUW;f1B{?nRN5%r!Z;AymnKGNqSD9_D?V* z`E@DsXWq!1I8m=Dlc3fDFGY4e!|E*3hkaiV$d;U!$fRncj|#9wGB)$h_KMOBcNB8! zG^a{AaJxBeTqrjd4qslU-L`#+aZz_hs{CHbdBv+4FU$f&cXW^aKla`_EXwWg8y!Jq z8-Pd&2nZ6=NcSiq4Fb~L(%q@3C@Dj?NQpE^H^>0WP|`zpcMoya7<=#EbI$X;?{%*8 z-+AwAZ*`dAp0(D!);B(%Z@|d><0lo(;1#J(`!IT-&I}d~^|xk+@GrH;e~(Y`nei9_u3>E!nJr_K+vv2Ml1%v9X#Vx3+Igc>Js#;H>71PGJe5Fdcd4pT~b~{ET+Gg)%xB=?8Z&TtJwtlQSI(EmW za7{H1Jmk#KSEuvicscxa5~afA*VGJOXK5(lIQE&`U0WuYEcMtCIjui(nP=0fag5kC zKj$g%Qp#5op7q2EJUD9#3_$I4cg&8}`bNuJ#d7W;HomsPjX>7Ce_DD9qb;7tLj}** zPJK*ceTHw}n%u$b-4++mwuzT4&Rf(IZ8Kmm0vQ)lI%{Pz(Q_9Xpy~`6NPN0taJ(~* zE-(Rbm|}w_40O{ZboE#e_5OUgh^n}_7@cwD?v>bmS0)^28xKG(16(u%6utt9Pj5i8 z6UT*465!ljc?9x-QisDq&26DX!*u%hys3h_S$<-m3;R%^9!Cs`hfYq(eEboj^BDq8 z&hAVw`vm8!0lPieZ8&svZ>5vuYk}~OJOD9Yr$SDQhKFLBe5+VQh?FOKL*8HEt&V~Z zZu#HkciP@SQN|~!e|$z#ve+*Dyj{5K%qgg%n!eMZa595dUo{!G@dGY=%IQZi2%~h| zNL8u5h&k>&ls1MMbeurjW6qA*YzIyTSi#{alg2pQZndK1o?a#K-s-Z_0ek3N7a7SM zxif#Hbo%9azI5nC6gmyo4_*N(%^Az7 zf65+yQ8$#@^WG~DY!iU9nD@~Ezep}Ype=4mk!i$f&g7tPKZK0`Exsf!9x>VXdZ@4Y zBxOTlSy_eK)LFPlrtzT&!$Ex8Kq$F3Z;|op)l%dA#4j0>f}4qJOgdjK<7de&f`f`j zg4d-uDL<3FNf-Zx-|GYj&u7_U6%#r2<*`n?he-lVAkN*UwH0&1pdzlK{*XEI=iMbnaP7PA{Iq3FMyk*Qxjva zy%s=n(7~WtLIaYMFb&W+dz}|LXCsDLQxv4PUgGmQU(v{zF;Uj>zG>aUq5@4F-mw`Spu)-^*B64UBa6B$5hna3Y^js;wfQ6>(6U#%}+Vz zU_wkSPI@e^z+nd^(FuSNe)T;7b_tN1lMSgpsM}_(fYFI|oWFfu2^!>?gS=38cXv=! 
z4fGdNbq5Uz(RnQZOy=X`+iJKt>mKd~EgL~<3p1#8fi4gZKLakkH|QzWHI+ReAC&WgpP)b;q(z{A30QUFK2X4A<+~bG5(a_7bzWq=Xu% z0khybI@$+~sZ$>yj#YDT7X6(<8?Y(eUIw(s2xDzrxh||qeJTM89JU<}%5Ma|nMxAu zcFL9T1u%du))fQxJN9f(m-Or4KzITelzQS;xYr9&x@*?5Z$tiS8){2@yJhdV=t5g) zRZm|Jr@(UMw6upp-h>~A6cp}b-4j|98RX&N6A;MsJbk;xe(G)aqUn(y@8RN3(^G)D z)mPX86idGT^oq(^3UD*%dnRMO+%{ot1K6=+9xyxNDaoU=o<*WkA!z0}tO42FM`yEZ zI`)L}4ez(6B{Q}KpGZ6J=JAC$`~!OJ?m33u)!!Y zhM*@(7?V=hyzwHk(=Yp;ydSxJwCtt@#M7Uv-7oI;(XZO+KmS~QZ^JnH=gka$FES?C zzzp3gq$L{5x64cJ}U%d-92slj#i}^qhC^9Gn2o3EA?i_Gk+`w=W*XZw}i2B zu|$sdQGwAtFh-(%SQz{nramzDeqS`?${7F6%f8p^i=xB6tY3dAY>`8wttw*UWp)~D zh8JwVhrHa}&OH&LGi$TV4gk-Pg3`hsmjgoCKq?EV;NhTO^N^~(g>PQ;%X_xhNXis@ z&y_{;pL6~%OYlFxBtkdq{kaa#RKax%BtdV5u*s~@>trGKYk|O<(!{>Y`u)Eqem}8B zQ{b|+u?+etp)aZGr&a!unPRu_I-X8^PDS=nNs<1jv-pq4@ZXIA#(FyAkf<+Yt1WO7 zFXHBGtsTvx&wUp1HyKm|jl_QjX86q&3ON~cD0Y4TKm*au)I!Xyq2ZDk9ErZSf8QC# z`n8penY`Ig**}%7)cmw@+n8bu_gWJ`dl^+d;uYz90g)UEe4Ge^R}!uNOhGE)K}=f{ zdT!w9lNlNwj+9#nOX83>4lNS-ic|nrgI@m}gMj&2+!J~2o+4?^yX47w&voiWDxQyj zB6|Gd5*_M3_5G+?VmxWADD;>kxD3sWO_y(>_XKknFvaE;tK{PM7~D4Fh7O@lCj^k- z1;j_Ac}D9=zqQwJ-m{v*kPg%#^sZQX7UJ6VzwUuzvTR&9VAT`;o*cDBqX9en%GrxN zjgp?qLeG2wuIFPn4^OWE_F}BkHChaHxOLcpzvo$ZaU*qQ;pS-xImxXAI^>u|8U z7*zh=mNeL%3FpZVJyIJUJvKY54OuB$p&1!mg}dy`Ay6;xznc1R4VnyiC+n~*zv&X)moq_Zc~ z{Vg-Na(-Q6KcS3--94_@#~$EfzR2bgzjHdi2e+`WBtaIAd<~B71GSGC(vzSf7`j7T zQ4tEFQ@zxo2=&JF*A{UD0rke#WMud)_2cM{cmdBTZR1|13pCtP$?vuWL^c#YezOWBylOfb{`TI6dZrjZJ=U7evvKj@_lAP0q_~&ni&DLjQKZ0GU`b-+%4yqK)y}F*wsznpMttAP)o*Lc#m6bcyG<>{$E{y2Y;4v-{juL&hV8+U+pf+>T;wEA@wFb5BJ##<|IM z+cvX%%FOMux^(=@%3MCD%0_gqtgOfiRGBM52?IX0wAI#s!$`I4xxQ%Dy6}{c(P=k3A7XscO+!&Js1chU{VSK zSlwWu-ix-hv$bPmK__#C**e^Fb0R08j*>^YsJOUZrTCmEm3v>Y#lk8zT$ib})ma>l z5If!4C-xRHEa~u=XH=u+nxh!79Lxf(NC+6Tl7Px8^4{(~-93~B4JMxiz!A%%rH2i^ zdJ-Jf<%CWe{~^EL3Yk~fPC!05$2-oo$FJ%iR@iUI$wV`JSKN(#n${<4)>gB>A_h{9 z?7uJCmNcj(pEz5O#{qERe7HEY0t<&Wwlw&mMZJ5|-H+)X-K#w1OT`5q>8bM`+&xaD zmI#ht+niW+AszVK8EU0IQg7n~yXNf=DBwI~`S*-9L}MD|(*`Z{+=^X(6q?P(FleWU z(F-HsYyJ83W%`UQzbTCzEEJ9y;#y0fnb#k5X!`|MbiR%5E>Mu2^$s4;SQb1@e zfC5y#?L6qcZ1V;75tf6Co%3?5>Z7G#{l;n?$OD=T)Fh!f4@-dub{rJV?`^;)BGT;& zx($%W$u8e-Vvu2uIEv6=icv3 zJpn`lGC7)?dVBLin+(7?!nG?T1A)AFl*E$mGrPNXg%UKS!+s0|{%;$=(Id+FOp9=f_!%Ps*Q8X={|z$m&&BJvH4F8&AD zhF(EugsfUZt+SvztO>l|j?eQ50>sk=wWCT!tsRHUH=1%=t<<@XC5kgED=~Avi3CI+ z>YSS*b3lqrTRDrk_$%ms9Q6h5X1-J<=EB~Q<3G9zMrSm86WA3%J8;|)qdVD8d(_lC zwpZPB>||c)2aU2Gq38-LD=vPRgQ|D#J*B7H0;aeAlv_u$M+nI_1%MiKx>!M8R_N_a z8PHQP44l}2$u%)hdrn2a1#B^oG6_r&jZ$1*o@JWoidF-FS4J1B{A}7#&G_;fbaMs{zv_M|2t^viIzA=?x$h*|1<|8 z7V|t-K|q7If85e@-^r4Q0%24}2cJ?zgd7F*DsZwuOx$qou7AhJDjl!*!&&Jva@T9w8 z4f@i*Z7cBvv_ji5i-yD-o4zBhoI zy!I%k&lCYWgs8LXdT05nI4R&rbO7(R=s4q2n@%6RK?5w=`w%6U!?qwA}VW-Is2RWV2 zgL9uBs6-vB_oG3cW%A;J#>U@?k#rVC`b90t@cFFdV5|qld{;ppAHE zEO|%J@($2~v%b$C)VQ5@fey6o$@H85buIc{zwA0dt2{?%XQPp77VrIIcOpcpgfr+^ z5O#M42;YwKiVCmlMRW*{^rW0|Bbo5qkPw5uqla)~bfq!W>|L~!4lJb49#&L-bMxZ; zI+DlXY=w{~Pq+0=S#C(1RzqJ3UV2+^$>Y%M4PIJGAC8C?tOL9|Q{5juN;SCbVzxC> z&oQ6>D8Q_%`QlYz9Asjm;L2D88~L}jyg)?@3_hpDsipSp$Of|>)cxUl8_rlZ_nFmM zc@SmP#mYM8=t`C9MY9Tug}%jA3u&kkUH8jqQNQuox`C4oxhhVdQ{OwMb6KbPE%9*Gnx_OodYabV8y5t*0UI7chxs zw+|01od4=Z^!rfBjSw~umv6|eyr1lww(o`Zq3u{H0ku&C;I(M1`gru=HhYHd>32hH zV`g86#M{;KLVo*Z+xjTY6hto^{D#z3Hz-F}A64zsU(OgkWGhjlr!P&2xVlaB&khEd z_W$JFOy}Bhc<;*tXN4MOm?z*HsM@xvA#hV#OzW%K?_2b}p6q%B*%EV;l?2ERNCapl z_NM>|Hvmz>quoNI=D#kE50&l#KOLLf0{>fH1o}hE@Bh|Dzh3^!u}L%XmHg%S%{iA8 zqrJADhWanB$8fw6n0JrGX~Z;l6&(Ik&;euq<|ipzub)5V{?`!2`>6gs;(x!S(Or7D z$tb*_TjqQRC`tGi-Ai&vLIbp?mvi-x`p&qKZTke6A>jU|2j`ClE*+77~p|1;v z!5ExJe;x?_Qp`4mIDCnU#{^xiX7I1=XY+UrP^+Y_Ix;^ 
zI0OXBW@gL+#_z+xrSuhaM1q0}6S{Ue;=+zjkE$yfF)*H7IAwz1b7pQ1QCTrdN>YRJ zX=%7PMd;WB`i{-3jDaPF#>hSDX9BTsbegY2XH~|vcd{ZfcQ{RDMV_1?dEnNkXrIDJ zEr7zKCTEZSd~-f*+e-84>zhF5&GN!yTPV;hPSUynDEzb1B4pwiX?`FfE9~VvasPl< z#9i`F`w7umu(9DtuS|JiVM5~O)LBVj@q-E$h^P>>P^hBu%IG8k{wB1K-5ra^51?4v z`N4SH)pb^{k^4bqlXo1GOdk(Nj&gSvq+QA){Y7&Yhm_S_!uE=?(IZ5w`w$a zeuR%492RzVblI#wE!}Kz+vZE6Db-Qdt>CU>l6gWFx!-;;7!Vdna6S6zvE8pi3@Oje z_=0LKHhzA>g==`27lU>A-&~*#1|b$2yvFI^S#UdQGp;x7CKWe6?Rw>*KhyZjHZ_gg zo|#Y3)`^Et&?N75<;2gL(23(d{V}8Go@5{54oTJhA0d!8* z$hc(GP37y_RHepJR8Yi$J92tgpa{hag4Cg%l+(B+DpK&*RzD1Zejbyyz<3~Droq2Y zVPWYIq*@)oD1!w&wVc0XQifvBj~_73W)Vct=~wMZQD)Gh4u)Z}sA(AQoh~8BwwYH| zB@F&7xn*VXh~gK`A10=NLmVAQ9=V-wU-SoE%M6{Ok?$W{eWVx2ak<**mSxq$jEj?f z%rGvhn84b)8U<(}gB6bR{CCe`ID|1KVF~VrS_(>2oUW|zOv1V00_RR;Ct4=oU7c}I z%mI!q=--LL=k?UX$DyfD*wfX8x%zU)MB7NUR#f2AP+vY>+H(|&xb9n#g7Z}UpDfsn zoRGW-rMZyXa2Kj(9P?3O#P#g&Zc&njM zetv!ltz3CbHk+S&CI1AH;p@Dw6h->r@rdEXhhkIsiShbzl@>%5b0n&Ox|~HwXscu$ zn(zfDo~f~;P#_CqsYDo5@UQZJn0F5%3NEF(8`{>@D#@Mh1EX zIToQDEe8i480XA0fy|h5^@C|v+?ua(P}86q*K2TP>9nZo*aJxcHn25eDJf)N>GiiZ z0F3LdXNb@&Ux(&>jT$?RjT*%bkQIX{oC(&5G}Id4N+cF9Ca5AkT_QL0PZVrg!fsYm zR#`?B-M4@nw*tY11t{?#ih{l1HSpsJ0Z|jELdc;N5d0uO`}_e+E@n12iBNVE4Z?kW zePNHOsj1Dti&atM4SY5i^ir;mA9sPfJ)+^t0woEeWCv#xVmdl}aPkJv0YTGoZqjHW znD=Q1YC$kXp0IBV0k1f1>UmD!PjScno3WL5WJ)>HUHkH?RN!PphJbsS{w>d)>kU+_R2q5@n31!V?8 z-4qmXAOHl+nDN>6ca+#kKTJ+$ORq}I&&(j2cRo-=PwkN*w~hP+iXnn(yk$_K0&=GY zO9D{tB0>;AE=o#D7^N z2bmm9u&hjQsRG*%Q0poHV=VoS__n%wH!#4Dc9%2|;t%G7J$WZmar@ssY!Z!`4=A~_ z#FmrG72E!^sB_sxt1l)Z#B4gme)taznu;D|=ILk!vcIj-{!wJ`Kzm)*lB1T^uaFzM z)}lX6E?O;?PBBI6Cb(b5#l7h-Kbp2GHS05aCU@;~)#)fP)P;NZ&3cjYm$;%uLBf-ZJPmZ5meolAb*Czjkr$~|LwU}E=-xn+rmUkw`G4QQLHW-oR-g~o z67*Ddlg-z_Jo;nszb*A78quA!huZ8g`5+Oj2V0*%WRmiT;tDQGYT11MmxhMMrN#&h zFhClIXl_8gh7FS|7Cy88107iNNT1;P^N@nZ(3&{vRx%W2o-Lh$uW&bb(q$km?#NFb zX*hgHbJLf+RhiZNTlB3Dn}2&DYZC_h4Rk?UGqYeY>9T;;TUlE>AN_yd{LdePYK{Nm zcwBU}`Sk5qV1zac7zlA+fCA#D;4LqGh>eN(&37O;3WFMIxKuxdE9u|G6?Kc9b1QnO z7HCA_pmTpPZ@!H9|BY{)pPfkh?sAu#wh0)iT`{cVL;MRs510@B!BWp*NJ3;{B9S%^ zg?g4`2+$MY1yQMR(g5BMh{P4_Z-`^B$sa_Pg5C-2ZKS{hsNd}W9%VPC97zjI&f|lr zz45RSXJG;uY@tXHJ*s8`b&Nbe7+!t9=S2b~{rfy>SPr4wEV>`)sR zt}tanRe~>AXHUTTrhOrK6+cJIX~Fix`!(Y#u;K0UiMocEO|;g#zoW4iFzEKYPYm1Bz%yn1+pq z0w<(Suu2nbM)nTSW_ck0gB-UD@W3{>q;6R~V=XK!q!x1^2ct{^QBhH>e@+_M7q`S< zPpIA2)1yYl9-!Jm%z7W3kjlitLFv<{PbgH4-5@$xrFfS-gy72xOJ{$7zrztk+&C~i z#O`_hsk((l-iR;Q1VzA0pN1DC={k*q=%9z+mI|8#160C^uv~~wm$-ZWa@5ti7CH}Jjs z0d?d2l|P!~QsYDqonv$P*TNQY?Qg|R(&E`i+$t$k!}R|>f+o9ah+h%TVe`>(RP?p2 zeLYod$flH3p7Gt=Gqux2cMjJGZ#+{h`!T7&*RjUfZ)dRF)&7h(*FJrj>QeZvrk1J@ zoY>>k$foM(t+8V4D~I!XSGC8Qo``f?lrEW~mDj0uIAT9}TJ`r1{qR^xWLhzG3AS!n z%TcWM_h;>xPDflCwtNhH1eJkyDyhQ08uWh-zE^D#Jza6ybZ`t4dtz+P?6ZC5+&^QN z<&&OQ-O^Mo{OrIh!204#iN3XH-}*AEjBZIt)%eFPNp&yNRlQzxEveQeM}0gB)rYuB zsoLe$0d3vl)W46r4vh5+@$&1!ScUHvSD$g;liqc);FU>y_F353k6KgY@xjZR>kNa9 z{jse(k^=Z-UXg5CC-YW{8oz>}h3(@|im=qli|}ab60F9K&wso8y_+uXc8u-3-+Wsy zmSVP^6jLdZhEq{?jVV#~f^*qv)5$vOv68X5(^g&uKS9)oDKBfOaosA3?wlvaEd43B z4$sZl1tMQBJ6)8^zxjKjf3pA^4yD)g5Y0PgA&Fgb+L``O9qtSZTiZA8jlgaEfmH`UM+ssA2(`OX3_4(r4kbw1Etzknf7$OQK!?po z@yS#|ETx__dcll7jkbX-Ia*Ej=1`Nyi_gED!fCzOM3Z2Q^G)B20!%~SI=q8?(d6v1 zpBTD_qpD&znS_C%3?b~`Pgad1 zJ4u+oZz?6@MBPkwWZ%l+w3@XK-FTg=%%^Dag0#HuMNhuqXKH>j7LMD+$?ZYZnc=oK z-;Z;>zLEsLO>KHAacD}8LkqW2vn;v2jV>*>zD2Vj;uc{%``UWcotE{lcdwS*;njAS zi0)JHf4dQT?I2HVHmd*?3YYt|xT!)ms(;u!`$ci*lO;YY9*=|C3k$IW(QC%h*uuvr z>T1mk=bdkOwj>3}d|xbt}h>8$FtSP6f-NmdpAylv$}WCj$NH-#>;kJco>c@ar|w zaGJp6!>Jq34$E8SrGFKE@5_?D_oF0s;8c#tQe^5#+_A42A0;VVOv}znLeEEFcBtvb z$7UFQc=h+q2RCjjQMU}s@!yW_^!ObuOvS6D*7;)Hn&y_;V|H^Dlu>E?mZm$qE%#&2 
zhh1PxQcAG>m+%7wtk<@V_0OZ+$J+Sh9)u7uB=*K7Ur8z(c6M>#9#Jco zr@E51dI#^h@YDAGJ9*nL3|@XWDPT*pGG`5`Ew=GXqm8-)4(y#NY0;R^1Z#EQ4*RBL zi(9h`NsKtb;e0SMtKc9^5F*s`B^4+;Q&YS9> zyu2*!X@f;e^(@IRQl&Jzb=J%rmGb;pT_Z?+e!Vw`F8~weE!4CxebHCGVHf&G?dpPo zI@3#IDFFvpfKT>d(0Iv>tMCVxi~h^S0uQ{mMGomeSAFd72b7HqeyOv#9LPb=W6txF zdO2WVSS=1uRrSc^c}5kcjUY#$Y1fdpbk!-(Na0xYs&19&clEfom~!Q%lBHWK@DH(A z$5F}(*H%yAun7Juek89;TD6q6cy6uwG_+Pin3$?4fQC)@3;C-G{d7C77}KsV+R`Lv zJmS_hWWU3Id3~X*+?3R(r7UWRef;6rKx3cb3Hm<6Ec*V-yz?v`JJZ6a@d)YyD+nzm z@YB$7p#!)z^Jvmd$_IvRk%J8|Z3Rr8pj}>GCXQBib#>)_X9>7hMNu(yWFK^Nprku| z+ezvVa_WwOuS5SpIM(Nj6RI|5uP)f~0F`vi!l({2n7$(_T7J7QoW|_+dCu+%hpm!GX|K*2T;BtA@=_fHN z2}&qXI%)B2fh>TM5vEQxswK`GwNlNip#KA#c#I-QlXbrOv9b;&=42Ky{S&{B*gN4@ zS2)2SU;enEsY-JUKn0k_I~FBB9GreuqQPhr;*kp&verD;RJE!>lc4iJNU;bVoof1Qwxd*YDpahRec$u{+>> zc8B|=CBPp@h1-XRuTGs{r*1gxi$SkNL;)dPAHb=y@>7p zM2nxUts$T($YODTh!^;F-q6(a5hy6UaCC5~QBqJqpp_LUFi1gclMAlrUj0Co+PH8q&1waJ!~+I4WJ ziv`{T`97%A5Bg3)&InQS8sxsy%gg%HXOBEQI>BZnZ0wXL4UZ@7maSp9VhCjHXeh9+ z#su@I?EA=j%OhpgDO4!E^Fe`_Wd3_bC0Oc(;!BeZw#-nlOJGbpR5prOF#3~~lEHnx0S0dN>LDbnU?SqBRsa4V6T zhMY?6&K<Oh+9y0d;%I&0R*>=m87js3^4xp8w9~K}R736oZEbDbU)= z78b1F#|w#rCrdvKv&pXybT74pGrfq22)MT- zfB}{w-@*<0{}v!gno*xKNwNvx{vOEK&`=W26YcFG7~y}fc71NUIFJ4C1e*;UU;q;% z#1v%oePsXoe!%bMpuN3BJje?($mwltY!1PB6};i%z{OGP-S_T(pi-b2w*w}Lf|6}e<& z=)sKbi8fC-`~@ly@NF`4aY=+I1av}@EMk)YKSk*{{!ddRT{ODlK5(N$lT&1V3cf`i z+hC&~6INIex!PAvbW~qF2ZV?RE2M|Oyi0nn{gb(Y-~Q4=TL*_2*ns)qfC>4c?LBxp z$dB(pw=+HlY9j0N=CaC%kx8_sge*%!|2)mPY6_@~ylE*Br7qAezBvsC(<~63h|3Xd zbdzy8HZ}~%1Ci1a5C%eE>Ln>9_3p-hudjtGBW6h7uT@g`>Q#%s*+t~hsb6TqG|RQw zx2#2IkmMZr4##HbvP%Fjl@B8$l#tm%PN@JkUySmBabVPy`y03;gx|v@<9r1sQEL?c zZC!1%Dq#&A881ANf$WY?KdB$Q9f3xDrjIp!CAZO`(d^GkfjbV7A0C;O7UHll&#Ti7 z@>voJn#;**VH*KhD@Nj+Jm?qa?r56!B1T^xD13kR(9Fy(DnHN<4MkC+t#YRN6@`MS zSRJbrJUXf$3;EGd4z;xBe$cj?mDQV@VQbjg(>*;r6-i7*cge8no!w{^Fa68jWm&AW zTjApp=Uq_WLcuge$5ADyL~YFt223>iFrdC2&t9&q^)kD`!paj&n$vy{yf1;UCZVI_ zZANB*A`OCY%Mr?XP3KxxR_us{9#Be(1)40Nt^}g46U-G6hGU-qUfl7kg2GRgE{ir{ zrw6E78QND(2 z(;{ub4I{)f?J4^Y9-2p|lRUR!}fDn#&k zFH%-u@exc`6N34&V7|!6n!J?O`@W&U4Cr5^R09pS5#VgfopuZH0RldyNV}NS4;x%3 zaIZSs35Ll9VxrQIdPQpNeyh8T_fqwnjSRb>P$pV7uN(&g3CzmpIx$O)TZvI-me1P+ z$9hf={(LELM+M6Vom^V=WUye4jw%|~D*17^Z77&^4Zh6G$|{bwRyHXkIpKm4Q|U!k z*st%Ltg^C%vL}Qidizm1f(rv%AqHhwC~CUQ_JU?!*qv2Dqc%}p7B9!OdC1dO#Lz)> zO>RlNJC$kFjpQOG6YfR1TkHu6J*+cG`WtI-|l3@2KA7zQ;+l6tGRdrMtrHZ+JR zcy%epSi|N?XJ;#|`iX$o z1Q|&8+-8t!yZk}r-ZU<~922Hs4h`}0O)yt!)K{C-EjPfR+oun|Sa$rmJ!*f**c8>x zZ_u{A4azIC&nsnmbq!x&s5tS*pRcS6c>L?juuvSko zp><#7R^r{F4HHA}fx$-l)|2H?3At?5uLil-Q>Y5U|W zPxXnvWmeRZv9FImc;cekpw2!KdX6YF-Bq+{qgAfSYsU;cy7_w>(4<1KHaZHSyTQ@1 z<;O8p(1cy5V}0GAq`1Hh$7-JD^xtsNpP45d&ttOvUi#Xd*i%tc(~1C@r)?HQP#P?c zrNb?JlKBG8%ad!kYc|(4hU1y`eJYh^&3a&_!=L7~EnEX)TRC=ApmwR=id z-sxKtjYgM{|IT?A3zhoVV?@ov*4mntMZngXiR<}a+LHtSGt~6b9BbS+KXrJePyD=+ zsF|?wd3hT~%kjy+i%2EjfcQ}}OdcPA<8w+(VxC*HU;;g9ujEARMKrbCptv9`8VSPi zxxY@GUCJ1!K7KxPduzhEk>&X41PE*yhV5U*ux=HQf-X+c!k<2?H!afe@xf@|hkN&K zt9bv^qNY}xHuy}46%j#uvh5~g%Ecy2c(D0fHAWjxI`SQ?8M=rFRp0U6`Cs0y2jZ~t z&8;6kf$(N#Vez*5p_^vjef8<1BRnE}m8Fy$m9_2)U&kJc^}9gyVVr1+hbwPcWZv{~ z_YlC2Rt_B|r(x~*2mumv^p>;1ndauBxqXV;j*i2n#=4u2?3_doBw^QYF&~PvTi{N} zu6*nmT1gw%D%SSioo2MiXEnERPIytfnirSR*k1>+@BWpvBx-6LVed(aveJhd)#}Fa z?y2euZ2Ay6OFUWSDLEI}^Bl$Op{;!l8nJ zbr*p%4Dv>OJp;?#FqiGK*MmqP|2~YXSAUY!O*JlGv0Ar( z`S43GcFrCnbMyW|?wOv5S48(;I4YFe`m;NBv_=_cHomTIF-nl5D$TmEYT$sKfmsW6 z7F~HcXedX|!~ADA0%ls`gU%z2Ge#C_C;ekl<@VVq>-fj$`H3I{e?b*-oIn*yGs{Wb&>ftt@3!GH!v#3I|9&-`Swa#y6AckZNBt+uaw2{nu^`n z3LC1u`WqMZSjm%asUc0RtAU%(N+&soo#htVh#*>(h)Jj(#iR|c#^xS2UnJ?8bh5De 
z>Ss(O@1fu=y{R|$_wT=S+#F;ILK%0mKNh*#JUfp`C>w(+!uGmV+eT+4>OWNr3J*$c zVllJk)oZcdUVZLj_ z9b9<8);j;!=?*9()Vp0a`dqKxtxEMTV`F|Y(b{C_BPCy4w)4kFj#KLN zKU-8bWzYoigJ?se^RbwPSs3@N8LIrxHqhf6{7#qTI1?E{Va?Qe3OervxNeI3cN6|l zE5cClu#!|IT1$YZAEqqkma+xFuRHlk>!p|=H%-=V!sSbCvg0*%Y0qAS!d|hMNHY2K z>lkMv_eRvB+MLb_n6jkzB)&*V;o}}ve(Twyksuf! zxd^ew4-5tBC}~)uSo>%7V_8oZ{Wo4OwVGQyN#srA)7c?6{3Tdj>D_-Iby9uMko$A1 zN3$&8>z6N`lMHmio@#;sxqRyzttJWJR-tlc9WIBM!^7j7F>`(@UqKke*R^I(7CzSk zU>!ICV5+8JDRU+=mR4Q(Slhg|{JZp_!F@-H{-se^Ll=|J50OB0?~qp_5qX2a}iz$ZT$uEevh%h#2)F>hI+%U`*2 z)7uEzeqimu+?_DN9pEgF&P8qmwCswdy%zM5%U_xd;4A1`)_+CdzuED>9;kvIboPZq z@#Prf8P@Dl*SMtrIj1Ls9CjjBJQ@mide1uD1v<)ik0}Db{-f)DH<>v@Hj`I-__%pT zBJ8j`ckYDG=Wxv@8!8``%`3%-=SBBP#sb9?HEABs7S9V?iyCSYSs{P-hhG*XbkmNTOs;D~4D@aEaml9lM#ja#O@mBz|AwN` z8XdPRnkTF zUo!7KN9`_29H!yg8ZQLO>#LoQ6ps?apF{3bukA!gDsTjJz%82!=2nL**7&aQAac(} zo^ZL&cvAZ(8Y_zq)S-fJ9Bi6@Zl>|A*YBP{4}5dp)jE68e(+O8+p`2fX>?5caoTMm zzd8YF}%{Ql>e|vY=57nM>;BgW&) zz8b6E$4r|1N$4(4`MI3;=*&lF+>tkzVA33Ka^}7wGu0e;$#ArMxUmc+)r>^^HbOXj;YY z_}6)^|GMWr0b~fv#n!l>=%b4b{`kGgCWf%-@#_ZT;DEd|+#-Sce`LJ{RMp!SFT6n% z5hPUv1f)|^QCbOUNl`*jx*O?|5Tv9Vqy(fyx>Hg@LJ*Md+H}X8+jGu$zc(IZT*o+G z+_C;^%{Ai}cX{Zq?HC*y(QZ9wzInQD;la>0Q8FaaG`V$F%|ip}dhh|!?5x#(i{e|e zH0oVeCNdaUL1QaVEV?gDTKkllJF~C+3G8@r|3&bbIV%5!~HwArVHKuf|FYC}Q z37ZDaWr(ef*QiUjFTRMlm2V9@L!iooIERS~F6$YSDJRA`n~@lMF;_DjiYtqQjNtXt zkWco)SMM?4{C(ZVi|;W|EicWzyP4qX%eN5HGtc4iTisQ#2G8PE{fARr`N4DlYK4?R;hMjvhhJ@{N)Hl=ksj;d9s76G`&e(&l{Se5;zPL z@GvGT1BfI}IdsNpljuC_l6bW%T$-V@X_Y;_H7R!+!bpYgi;~q5IioR$O-YP9LjF_y z4+)5gP@+|UM>EdfiJ2TuhvZ}0sqnm6A^)j4Y6$?KQGYecU@P9bFVF>Np{>KQz1qQN$}FE!{E|^o&&UX)kck^K`LKWzO~uD` zhs*dI2en<-=uIeKzkV%gI@MqG(Ph*%2$32vq@g}#V$*(fX>y&6W0!?Jf3y?c2zK~d z7vD^Eiu?B|3SMF|$e&+L)x|_=WL3iB5AGL)iw&|8Z_0SF?=23PGzdh>P_c9231^-| z+i*`*@#-cov=D&%Ps$K_?h9AYqpvR5?~oX&`U(5(KYlc9(O~*k^6@z!y{oP#TZ2IG66spjzxQHwBa##aVJKPbST?r#HM>d>)%|fFwO~B#jbBDsmT z@l<7Eiwz{13cvID;E|%+b}b38m(I>9Y2Ds(KQsCCHPr1&a9_NS3aHC(wr^FYD$J(r z7}{-g?fe`rBqU3Us0j`DfFFKf$!DCL6B1x=I&6Mdpr^eZpJMSI&Hdv9CO#sjTeptn zH#PZOL>vWXU>h*Zb@3w`O1p@Cx%kiCyJbEnd&PQ7x6eQEcCHGuW?eRaD}V+rBG~0W zpK_J2Y&6Z**s{;<2`DDVNO-pz#qnn~K{L>o5Bad*oF>xlP-lrTziLOP=#jSD{Jbp# zRF`bjdUG451_pjs52b3KcKdWBrd)`L3y5~{tb2|JV&lCl?fM0s0+Ta$)h|kN*(Zr{ zSt)Zb_SEe`5iRua%}h4h2zbq+_ofgMo`|UkFh-g%Mj|hs9WJ$71^2SO67G%SsvJ)5T?L&Yy>nnV~N7C4Va%O8%{+?_XZES!~8};z4bJ@YEPCYyu>2 zqQE|JUY>bhri3;sh(-b+z{W2e#z@k7Z~2h^j&ZW8*{jiA^G(DOVEX_-xo{O6CQdF1ciI&D!5#?|y5;V<51unjxpW$^bkAMpbHVP~2PKso~ zh8AYD>aUMd5d?O7l=lx#U*0B}fMBS?OeaDiEH+iCyfj)&S)kp)G`}OTb+Ij^nm=7& z%kM<=-H9RSUN!_qtk5>uMCM3Z0HbTWI$8(uX4*n|_+_46eB!BI^6kOA=Lejp#@WNYva4Gl3KfjNGY7I&Q@RnMoP>rm) z?Hn(E%zo-U!%xgSwOpR;m%JCm**jWBb~bG9OZoAp=M+5>`R#Guq*B4)(NDwP0j1x& zNA_T#5;wpL?Q@U1XF-aGS47*nh?yNmp7~lI!~mnAedWU-%Fhlb1ySQ)^*sgCCC{wo3gOWl#}7w6O#daFa#;;JNQ z4ChHQ&5ZO8TZ{74({I6M@KH)_*w!{|u6_GTr9eHsy?1(tU>i!DI1&+7+(TI20I@Pv(lV6qutN@m(xWJ{moFU#_l2 z9rj7IymxZRHse{XAK&8iN4muyXhwO9mY*QQt=eD9Qq-VCPzdAPqFAim`4SYz>#kKJ zZ7K-lgT>We5>nFIZMV7T7&1=x@O@56?H{^EO;5rB1)xezbhxp>{*>=eom;iSaFT-? 
zOcdA@tpVkkMr;>VS_Jnz(ty3C1S|b+jG^xOwl7~yw+2g{j{d`-?QN#rGt>UG2dI^X zsAYZXPl*5lW>pT3`(h}42m=c*ICW(O8fckHSOD9E>4Ns!izwpsDdcz!o2+`maFWLG z&YiY|(0I$fjLVR~f6(?g#%1DSLe*-|HSiibDmBBBo0*;12sN(@gvwX%I_B=p*DhJj zntW2|eU(zfS?#h@)r!&7`==-WXXrQ~J92C%Am;k@V1OrCwr@RXYAB z!Rh$J*ckbHZ*LmNBEzU7oPK2oXh= zi}Wa*ZxT%M-ifL)2%N_OvECey}{xQJ-r=Z8lqM z?ELB8!a)8<;g#EO=U>ym(YIY+*_8CW>`=4cq2yo6?DU}(>D&x#ymH(sMO7& z7ypN>XJ?+DMos|Jv0Yr;$X~tRd5g+H z$oHOU^rgxOx^km|jif!K1c1mc_ICb+@K**L8YG+hVg;hyEGffuFOptuF-@ zF%OY{ygM)Jzv0mDgOy!RZ6XCX~Vf;;YAnJD@r0 z?JE-ixN(3Mna?o_-hIHmNuSNMci=T&}RBRpuh5uR&12~n9J4&b-X{|11N(ZX-lm-4o~q1O8?oYR3W z6F(2y<-$YGCLLl0&8fM#`?rQXdto9#He@pRRHcPGPowgs;4UXA5`2^o*npQbN z`VMTj?W@YIc+KDTswb*Sf`vd+_er_d*>t@FCJeMVYaAqY$pi?a#=lU}GZVhwM~Xmy z_(&_ow({B%HwQyU7(Q{}bS=NmW_IDUdJz|OvTtLr13#=O&0&w+DTf0S&h_BUp@+&} zU;k3c_}tEsrcLZRKkp5y7SyL|jK2zBvT9=7=2|gLBI>#oMItoYS86Ccoz@#iB!O?W zg!t%!w#342nVJv7kXE=morE1xD@aqn^r5hB%S+>d14_7P{e!rOj!U1vHBe++R?o>Zf}?8~J1-WgB>pVhTMgWzKy9Lcoc%>1W4f2dnE) zyq#FN78K)lvTCUf4LU$e2|1u;y;M?OcKJHzc5)*$+ZE%F_FA$^OMEeBTn@0d%x^e9? z*lLENd>Y|zENkij#srp{GP-Aee#rU|rkk2iR`H2?i3y^f6&^qQH8O%x?EWgG@QU3q z|FwF#laRyF6U@^hJvP05J=QpW(cJE2^VO}NSxFD1HxQ4Dqba918wLMPAw$_$)*O)B z3c3wXW*3s1md@d6U87kEOK2blqL8nD%^oskwQt?l=e@7aY)U;9OriUoxb4f1Y{oN0 z0&{!uY{PC-zDFG~Sn$|oL5iK2eBYK&M(6p79D5RwH_bw~MlwvQ=sc9J$p!Qavy}@| zIaUOOTc4fcLfZMrX^kD?*P-r~=K52U2a(d3TXbk#f4Lx_X83CVM_Ll}IE;D@)Vr(Q z&oBE09Umv{BW>ncnmmxWaJEyos$jPm{J6MoG4z0eXQMxzojOJE`{$v0QF^)78QxL* zWPW=jh|1F6=sf4;t9*U_%|4dL`UWh^+Gx$KY93Rql2FH6AC%GF|ICz_6!~p1ph`JC zS5%Kc@bl!ihREfjA-W!}`Q;}G;;v6#_SIx->2B!5XUOHJ1}c& zm!=%t;@L3^to6X8p4vnkuQJUcXA;%U_1FgHaQC{+RYq?CA6t7jGU}4^oB# zJ5)rI73dfMOi={Y3_GNtf}vAVlL5?Yycr6ocDHCyXb5>x3L0)2a)_K(`I?vWg*W`= z`5cC0X?*>!^#&C@11QT29iqdxZ{MQa*}(DrDgw05d^0oW$>fa4$jO7XJ1hYnRH1x$9o3oF73IYx>bY|MSCs(W01oGnFH%*<_o+-ETTXqAqsh~B7np&(@Mco#9o7T#8%PE@k z&0 z*E*jLa{qdT1OHXP&B44Z8>rUKcy|h(TNuIU1C^FSDMf@T?pj9D-F0`#1qh*GE@jFO zLuO}P)@VtsJB1-XH&l6e3I&Bx0|{<42xD`1(`cIB=<9m>$!X{;z0h(t-3r~_ega67 zBDdL;3iqRMt2_294@7{`+qXZ!rrrg@QrAwXZ<%V_mG|tpPe{E(`=pird8CFN+NEr5 z%k_9q;`s^ED}CwWON@d+HX_wvy^6H!5nttFTEbaN)!+^_38O4_=#QyOE(mD8_@t1MZmlG0qfJHO@mQ`hanEI zF>rsLT)k-u*znASm(j^lZyKdY>X0)*yn+)KlSS`2<1)V#0tAGXfxe+cmL#de-3@{+ zh+?3Fi~#kd1hAhVX9eeSmTv@G-@8xal&uaMED#M7o(<^odATZ#ACr@>V9Vsb`#$pH zG-*=2Om72cZ8D0R;T^Il(2DIvs@UF5V;e%q456Z(sM?)hg|@vyj>^}PZ@((*BaO^=Yt~~utkDIw}$qQVAf|=uT z&HD?ah`Y|vj!2?PP!_lt|Ks4~MJ_XL6YIE`qvTrvr7>eFf$l6$$*u6OazSwZu0K z@Ot8Dk&}9?g8wmVeZ+>q@>8K?PcRRcLD1;R_ zeADprp9Bx5tLB24G$!DM2fsYWMiGa>ZSO-34+9D$SQ~Ha^y0KihX$hA_b8dgyAkx* z$lo#cEo~`pzklT<67uI;IEovO_Rr2x{vy40-whzV;!n#yJYk%dp7S^dvO2%RMhkeA zrJCZRE>SEmo&uZhuh$_z|DsD$FGES~+jUYD&yjYgxv2?d7YELU z*F^nryr3p_0bMG1R|7yG$gr4Rfe#5kLRabNy;5L9p!}AB5=dGIq&R$7Ij7mM?S+K} zV4nG8WifzC3;{mACv4Tbz{Z1mo0v_r!5b5g<4qF=$dtqh`u%;izkf^w5QhB)#(_yE zJ_!j^&BDygJ0NVQ1C+zJj2u>O7dc6Z@+bxbFp!(ClF*6)oBbbMfWLShpu%%H?-~I$ z?OpBv?}a~T7RgV+Zbk7RLA{Y!YjrS(=nBql1VCtM$4ZKd8i%z#f&q8?8f;L3M(?f3 zhO)Q+q`-*E`uFdkwi?vXA`7n@rK$q_uQf?h>)=aQ>YT-Jf(wQ1!jC69I@E{;aO(rg zw>T{AM8`fEkC{J;y8@`}?-1AkGx`p0vHtTPff7k9>SZyWR zS~u@C*y{D8Lth_PlitD+V^TFEIs_keg77(jM1c4~wsKaF%kLO2{Z5pr9;0@hz{kZI z5m4@P`4_l}+Uz$2bP^iyu3aOfz$51?QDDe*BLX)-uk!MTtgJQ#Ms##^z<-Yv{NGda z{~%T*HwE6XWFVOYEOO$}a!YO2{f@>)G@vBD0}TwYDfELKQVi(4zdtv_Q*gznvqTkf zFbEWd&fMhv>X99{VazgeE7}w!e?f#vV{JbdBuZj|$LNCcZG<01sxhYrh7>iF)x6w* zGCG=CC&NW79qQ)e3Ej%PY3TPKmsRIUz5xy(eMJi9Fb$1gnEhX5*<*TQF-bb-z z;aUOlmyaEKDC1O!aR1`giHITyiNzocgO=xClUQkKln*`BVqkIn#@sv@1zP?7-M2RQ ze~Ylao-Z7Dp7#pk?dz^8bRvlomvg_2}OE2U;(Wqiy}utu{)L$~({+c0FFsLS?IYd3n2R zF~B(iRN550C)~gA$YS{&*j!y*6#)BwK#>R9+bF&?NU=1TKL4WfGBM3i$ 
zF9zSdXPCACvS$=g7ItS~rc4Ytcv187lL5`REs@vuwV~m`1bU`wZjtkz2>@3K1NsWJ zg@J{?UW(-i0tJrF&M1G{m(_3Y0i7KU0pRIug;aDDB|Q0GdH|(>XvYcJlz|2HG}hg; z^6>Z^ptI*`^hA2G&tKc4zy{6+P0Ns}M6JfP~>Q4BEAUu)B*G{^QjU5QO=O z$W5hyJ_P{g5C|e7BH36DG!TulEMWsk2;1untwupTVJagc5|%x zo4*0X99Onwp+W3Q+6|Tx(nCB-p&Rr@rA0+gfFF2GzZQy_??89|7#A1N*QX%%^l9Cm zuh+X{;DGzIp;$7s;pK&ahJd&Sk--_bB_CS|x;i*G>;rO|`2q77*&m2G> zx%_nsnP|e;tUkg8$awfl;D_te28PJKsi|*L^M*k~6Rr@HO*=3{EGJyEuug%?FW`23 zqdh6840eET*#_KZSXif@KHbe8T0tKg1nC3jo{2?TyMGQTC3>cG_uWVv-)Nv$)?fIv z!RxUMCjCfg@HpJ0;&a%@8(IK5qvdlylwfCy`*F@M-}mq=u_6|$fMe9&p4}JH)!ls^ zu3*4Diol7q0ri&sXlv6h{ii?Xkp-fO)Jp)8Fin7EUwUS>J z0V@X7_}b!G!ocsK|NXJbpTz8$FL@JIPW~5L0P`W0DyMVe8vX(S z>?GE~q>^vn2q|PTmGTeh2(emb%7t&d()akVU4dvA9aVQsJeUy!7EHjp$~E72^=kin z8OxkLKWQ=&5-h33wKX#sf87{;W?*P&YPP6fR#t|hc0zDEXdOdEZBQiane8v7^rd5A zk#Rir*x%r1{;-NYb?BhRCruDZeXC`{1zBsFmR3}56m+lbB{h93g>D)>X?5#EQyG^%{PfR(?w)H|LRQQ9a01c=X|^0@`Ix zJ_cXpDwWT_f7E!J$xLaet1}l5rKhERvoAZ&E#BZ>8HNn+#fxDxyuRE;8O!~P>f`pMs0zdW>4 z>rDQeN=PUfBvhW52)cr2xD&N_$R>~mWDBLX4!359Hy(0ue0uDKPN?$FTL__(VOB`u z``}Y!mtB7wb$8C{cXR_gC>+9NbZH92B!J4tF~+tHtPtnLqlb|3vR-=PvbXXb40dU=|F;+(vwG&V zPui!ynG=kG*a}k$0L!F7ys~U}b#=81@>wm-Jg>(3X%($btNT);NZ!DCt8BR{8XH zPEZ&^XnplEU+ ztCl@vSuQE2qH?1v`kt~o?z012{-J{|L*rgf5*$TQ#U`}^}-kNmsNKXii}zmrNifqee%`; zgq5@t6B7|owCMm<1sv&peSChLUA{v^)C@V`%QCaLZw;nkLTzPhdm?{ner4s0!*!UF z$R_Xv!^#;{g@Tp{2Z&n$3Fwsi@X8h7ZsLFzkQ1SJ$Y70=ZTx-X@cCJwdCNnJP3!r; zZC98&PQX1i^yWe&q(7-z=2}$&iDr zk`qBj1Le(PZ^|OPkjLeTKJc3NQ~097w6h;RNz!*j>*H$zRjiA z?xOruG*f9Blt+@1$U)171`3d*XYWBT2p00AIu9*KIvbU$H#IlIcYHPw3eOWd#946G zwlO~3DFoFKsG(9Ie=+!S2DC*c#WWv`Xa5UkuDdwrIU$$O(IMcgthClkXAKaLkOb7% zd-!#@C@U+&c>&UwYa#kAa&kGKYeC48aj1Wq4)S0?3;t*qae#nk4+AUfTE58=_7)W> z|65iF+LmU9y1LDv4n_O=e@{(B?*PX&U$qOq^KX#kIzK;04!#f(K?f%A6nr#zQhiVe ze0fnABAt0Gk3PS+*ak^Oh0VOQU%=6iD%(X|pdL4lk0)NaPKx@?9B+1;`>8Ar&fz4xAI%d(rLYXrLXeN04Wr)x**U-KWEFPHgjMzV zS5uX8z=-}c_)Nf^Iz2Vz2|E(*If!(y&jJDiX*f9Wq0ay`6RhiJrq7q)&c(g+sG&%= z4P{M{;<^nB%_ix;_6xdXzY4W`VdD<`(FW-i=lwM%FT%g;H$(HQ;GL{RSJ2nK#qqkM znMqX?(XCURUIc5a$fhhzQ|H-157z1Crj=@;RyWijyJZP+aiCgs9ST2D&_yUNxoQrN zPfJTHk>4Q%jvZr%D|{$}-eft@SXhQC4(wMhUcQEy3VdZDUZR7HURn{bTU~N$TP@fE&gm`JkD2D{;YW!vCxy)xv{PR zexw{JcqFMM7dtEZ&JOEW8-5AC&v{jtDj*=@nz4^#kGkWKkqrmi+cQwj({XVr{WdT* zmVyESl!r(_<;HTTFf5D!K5&-n(f01jP$0A+Y?k_{=)&Ch*VN#!w?VSWq>`;rhDk?7 zVFq%m3OJmnLr@t6X>TwUqZst1&G??JjmsF0Sw_ zl{rSJAO#`qe6H0V7aO~|UOp-P`rl(lnfaj0Lpl6FJLHZG63rXLLdMDr+kVNFn2jl- z)A`_2=9D{&+s?HUsOG&)rA6lF<)K6vVMV)t&7I}4-n&Nvxq!CYUX%@P@Y81K1??Ji z)(1lJ4%F|0m>LM0ywD73NBuJ3z~E-nhjn+(v@OuAZ0Z%h$Sc3)iO>7w{Pc)~OV59M z2ekGW6+U@lJpE_YpU9>~$BL~>a8KyI-{;nEGD7g#+35B=aPRK3YgYL&q;9!Bv@H<0 zBXV^stkVYl0$BZduc}pZ-mK5eG{Jq!^!&?Uy-g#eK>9G)2XTT+pN34-Y8*L%K<1p~ z4oOrTw_%7G((^Ow1~ovu&57wVi-plTUAGU-yT8{Jkx5A)V%@)1!u)I2;F z8zze&$_0b(2w@L*Nx2zz&5DnZj@CZ+P|uHsm_G|5h@PpbRM5N50A+xSjc!OtVT0Ih zjH~^AySMWE{Gk4#BZArH2IwNt%fEZ~f@%^Qyh-MiIU+0?{bs&Ujlx@+(HQhLF*lGJRj( z$gQZbiIWgx)};s}xA^xT72G@pNS4Z91^r)W1;qwyywknFIqCl>xf%={C*WgYQNLfA$lM{Neq_Qvwk(jO)`nWH}ZR%Wn$jbww`(#rat zG*~1QeQg)FQ=ygdIEyQyxzg1Pk9E1-4&4K!RpAyDG3h=j1VJC+6kbzU1}Ned4eUH*Y_}bhOFP=bcN?(!f3q(%}Z@^+{?s77+sd4}V^U61yW=h8s z-x(()6NOftO70PLjLn%m-m{cPUmSv@Gj)DsSX!s>Bna20SpAyCmFUs>K3l-PN6Lft z8loM>p|FcQG7(?o)PXCiq2BGJ?3+R!{G$d>G!WISBfbPd2_fjbtKcpKta`}WItGiF zn3zt_PmrAn({TFwp&@agqsd;6waL|^sHmvs&il_%fe7J$@c!gb@s6+!zUM;z7+Cj2 zIcyZaiC73qo(hTk9VitIVcwv1{PGR9nQINLrMdlsafL8OO=qZxn5^{t*4;7~8K%p? 
zBs^I)9XHMUK7>>P5T}Z{Lx@L@_&9Yp(Dm!H+M@2al~@divHOL#vRMl0Yix{1yYkxL zAo!9(04m)jO6`O1vD1oCr<$t|DomZdUPDZ4j6x0dG`5**O;^-orgTm2WY9Io4+*|Nb-DS3I- zq4Q!`#s?%+ofLfVXDlaT8|qz;zI}U?2dNlpLcxyN4;oVQ-AK?&tZ+b1n2wp2s=q}} zb-SL|3CNZci0Yc|w@=x)DFES^2uKuFMexR3U6<)M8mTvAn3N+$g%G2muk@uX(VrB9uQ?AV zP(WiaahpG@`p@7Xj^~Y6((z~3)$B`y1Rp+ za3}hwOnCC-30zuT0|{PGyU5`Cv>UJ!;}HLoRi^AZ)h3=?Bmty$jo+O@qq@e&SIBTC zb{gZ#A-DUWLUh5=q{e}%j!FE+{tE`C*0wg56r}SXgU6vgheEq(kg@3^r~F?{xZWlq zo4xpfEL|)v>+tsSdJ28u!S*Jft2s7cCdBS_cyLGVi*SgwE}6)4M)^n4Mvp7GhPgo^$)2}y1F ztPLdGz-@;vD^y%f9T5!6qouh5wIqiw=EDCS8cM&Pa<&_K53sSZ&DOWk!3Owl2qkZO znD@y@wJn*4+34d%3&Dvmlbc_64L9HW`ZA~7Q)bYmXJ8n(D3*6xJGrW?%>DUnlRIg= ziiI5wMA?~M9qt|6&FwhJZohhEcS*$YB_ofMS=`aA2=JHv- z8EL5k^=woxF}qe|VHDQ?_ibR}`MpLvCaiY?&9g(>!*_Ck!E$IgqAj%l&w1aX*Y@v; zMl4U^Lal`&WZ*tcYfzR8^4yvG@DloIv0GkN2qm4sp8mU&lHVm4C4u2Jm{*T6dzgIi zc{@yr4#C{+xe#1EC*HUFa)*yr_Ub#En?6XE0IgOW9N za{ljp_~S+9ztVD7FS!#Xjk*rB-F~Kkj|=NTv0)pfahY+cdYR<}!n4+!=DT#H&*1M( z_R5M|GPdL2i?fQ|62qJqS>p{Zd_ATTp)(et7u$e`jZMSIcoo%;1{tfJCpe(;D-A*v zsx?lVQ)nP_Vmw-t=!VCL-BO3dxqh8VyUw*r_uu>BDwWN#-A&_-n6$)I(cr$*WBT}2un_fIzyEc zb@ds@B_*!wvZ{=TAJv!-T*+KBhlx;q;mf9QZI5V7RBlewj8Ru!UcOxkB{np6@pK-; zJO)%z&_6~-W(;ojf-i?m^yzDhU%A7_E4TlvNc|H(?S#L#)7sulC*65C*`1-6LmWZx z5xYh1VDYDYx2Tn#9u935$Tq04-{j|5WS5&^J3u?5zsMZqw5aX)2;dmB*Q{~>eNZ|1h)8a}? zMiI@@+^?i}zSp{%*;_PLs!pk1#dv7eKNRL6q2D|^yZG`%HA6BC&qSe63ke0o?*8}I zRziZUnNT6$W& zm61_$x{rICv`&&B_%^o}fi8UpuSH|{5bMit z$ECBI``B_0)wW*m%qHwVLOBniR@XEprsilZDzWtkB{A1doA35lX#VkW$xcZ;eF|ZT zmDN8u`0VwSD1LAN-0AwgzZ~2(I(qC)I^5*~eJk*4+c!FhGNV zi{%H0BPb|%;1w+o4TSlZ>S`Cc!?A6@LIld*+7zSO#qc(5Z5Mk(fiPQG_9lPbG+Fm{ zf29I)*hWxR6doy}`^?5egGxf`F)%NGOsw*+^^UOhn^eWVjz!P8_5FeYVsgar#B_cy zujR2Gg|PEuEf>sj!=9Lg&14zr{wez%zTQ)cbSYWd7Zx&!iP%atdL)B>ev~b@T{oOK z^R$U|baifXJ6$2K+Wknz^7o{O;YLo^E7!${tpX4Y@r**;t6o$LD5_4nRHW2OC;s3eD^Gj^N?;+wCv}FMdOIO)B-^_ z@}mXbI!)%TNk|Y{ye-So<4_AvXWo-k6~T;D&3HAUu3sw3^+5129~9C zHZ%CNa}Hji?BTPAMQW+1YQ}Jf(i7!)6hihD+KSMJwpm1dme9U?@_4Oyl%7u``~f9a ze5>G-Np}@ttQjG}{vD5c2IPDx`AF$=sR?4%wa&hxNrg)Fd!QcTA9QuGwUh;Ls!>3q z`3T4r@q%%r`8LI;mmU3P!ZuPSQ`l5CcMg8xqWb9V?{4Yqx&27oXlC4oO9o2okeXTx z&iyVS+LpFj$qDwC&-O<5=r||?Xigl^UcY|bATKjKJvEoNR{X6jRcm$u0JwK73pLC% z_sCN4C{HFrJHS)~NFbG!BK~*gi&vaz|Bh0yt8bzDBjF9Gd`2wfpnZ`7(A9V8`L5B> z=oWyzWNqh=T7jie)1D$;U!-yIZ1Q1U@sT`LXrhDCliOOh14!&T4uW3pjz^ z^qgVud;=8Sro<_D$PXO{rBmbiw6ivuG!`cSRybEI@6 zbJt*Hpw?JSuOK^HlCYek(bdX(`8n#vY?)opWy{wK^2rci9qhmc`RSohP4CaMY7T|m zl|Ez%btRb2eBC5nh-)$k(B?EK98 zEO%(eX^;A0!7C~@wku>~H7W(FfrOb|nJ4S-psLPvGc&t?v__uuydJCPGIsrb@2_Cf z7gs^R1BU^{oo-|RySCdzQoFwuom0w$PBEB-|2T7} zMqE!QeHPIW8XBm)|IqS8stU_B=~>&CNq2+wZEXkvN@a^>C3=U$ll@hqF?NO4y=50g zhgl?@X&LOl=Zf@C9c;75vMVR{VGA;0FGlCS^d*@LY8_j>h4`tcsFI`je)H>M=ib|O zLHyoaVzu)w%@Xze{@!oC!W?H=nac%eD3YlAp#^&56Ex>B;vaFH9~FrXOEf6;$M)q+ zc^y`=wW^z49Kc-{7x85+&HHG(Wf(qOxVMY&8nUp}Mc z&nS2M-9!oRC8TV?F1y1uJue8sjuYyY02<*J$-#Wr;|VE=b4!c|YpxG0oJB9x5cec1 zjN%;H+_xut&^Cz-_+~!U$7xJ>=wLbdy7ggt6tQ3-vqsUi+4=)H zUbLI2Y%xstGUhS&epj83ABL`;-j9+L_w=`C^cN#X0&GdMGrx&m&UMN)HWdiq;nFg5 zy<5s=G&6;pgY)EcFXqogRr;y+%A(x!*DZZgPd1ce$t&;nm06aNg{kbI+BP zD2-|hxV+nH#{4ArthK#8b~`Hfjgel<`sy&SW@fTTHo!8XuhK;ru}#b>7SKPB`rF?F zZBRQev@t)3Ti0wCUmjabCT>2iB%lPDaW6D|7S+c7{`j$n1q2kgI!EX02`MRi+qWi} z+wua4cr^!yR?0ic%d?j;F9OQUYnzsMfHL`*)2?rlJn!W zs9fbDbvezzsEn_soY!vHNj-ea4sm32sG&EKnC>!mwNkb}i#&zk+*UfKrjSQ`*d<}z z->t7;j{ccfh>9v%Z9|vy!Gq>Zv&-{jXwOXs8!JCzyN6N&YlHxbngQ45>G?GmQ5u?2 z`{T9X-qhjzRUG8vQxOpXM5T>JcyD9C)MF2wwDgMk<;D>zw~-R$4WNHw2)J~9)Jh>F zA;5jSY;&`yI{~jyW7vt}P77SO0l@(}ool|GygY^4Ikd>!;R5-eGnrSha9dXQub)x~ zH_i6*;tZM`(b4*}e-ln=K~u>2REZ&am5Pf>D+1j!uE-06=6(1-SFTN-we~(8;}nUB 
zsOaZzjZ@X)f?{eq@vg5$8^D3Ygg9TKe$T_~Wq0^dv6OWGSF0YC6&{mk6dL>p^s}== z`wctESCPAI-%X>`=0Yh2Uk@${Ud6pO^&Uz8^{9a)D5#%4$MW1MDwFK3{>!GzcZJxZ zhdk~mbQVL`kfTK2km6{Z26p2>8{W^)>0}x7@rKVRyx~@cqZwQ8n$D%nf1ftC@cmvh z1mj!12~z0>=3l!LLJ73%CCgF-tz)rhuAukXPATwJ?G0ArLVVh_e2Us6IdQx@?iw<_ z%tL7Y!2t#Nf@{DSq1rTkxYiOU`Vkm-aQ*w5*hD>s1g!e_V9nC3J9 zD~hrQuiCxyX${+jqRE%L4GlcGNZuZ;Y4i8YWza%ocm`Vq8ic*axC=Yex-c%FXOe2J zBiDEdA@g(3s|*2oVcDd|dSZ{@2^yT%s2tgb$Hq1!B3tVZdA`>r-8UUjJlgLpx>Sv3 z1o;=YyVhri(B340cbyFg%(l~Q8Of_*`e(oD90}HVDCpFi?(!}UsQ9KM_ul^T?zFZoI9)|Pl(#*FVACO zlDgFJaKNLV1x>je3f0J3PQgCAeysMsM7e7}8e zDiG%RG_c&CIGOX9ivHONryL|_-3%6>P6+pV+pStR%mWLFghZSMhF31N)`^fSH^a@E zQ4qR0za-VUW2C1iD^Z?&YP}?&&!XZt*w6cdEfMXL?LnP0dnKr_#BuClW+tB%lVgS^Tz9gMu_NK-GFKde8IJPX_|UMFiQHI(A~X8*Jyx=Jtby zE8S-0YiCPF^#GL*lZS-l7uk~;;_-C@8oRs5evPjsp8|6yoV2!UY!rDHoUA@+QJ>KP z3>^f@DvlP1SidjoPiDBY+^wx6zq7Lg>07ygV5Iqw>Cj@shVUc%-Mi~#&r{8OnQySD z#VgG-E0sP?x?eo6o#niK3rRD{r{1H+8VTs&!O7I$e=He^Fb-v7J&l9uaxtRh4d+AyQIO;qF7o z3e9p=bA>FMZv)s$OxObp=O$v+=jZ?Vt0=|s9hs_-QDkNc^j!c8GHMD??xIm%_D~uM#Zqd zA#5ir6b)K=BE4wZ{Re?z&e)R%Pazm@kBp+~o~nHc)VFCqwx5O@RKVppc9kLqa?yeb zL4KDR4nz4n?xLE>SyEH=9Sjz=k4g;r5iSm9pH+Te8?SPgQ(G!e*9f)KZpjcl(6(lI zSs@v{(eZ1KMSF~gnG4o<6nPn22el zX)`?R|63(PM#B4F;qWW%PnMX31eeFJa==nRObZEE=E{tAaJH*jJ6oya;ZSC< zFluqtNfQ57rH$cnG7dR+oDQ8#eOWE@9$f&OH)Vr+uOZjdgly-;L3=y`5~Ya5$d=EO zF}fCEf#;%6JN}=fZPtEcA~f6os_NN3x0DnqedzJK%D~~JC|W1`{kM_|ugj7zn&M`S z?gyaC0IlCzUo`19oc_=QFAI7i6*ul}rL=vqtCYSY_!W?4GB!$F&&hsux7J;J8KS&N z@?=cDT7UUagiV}jPD$m67jAiPXS=&_qee*J*6?^;EO|X#Rb|x~%rJ&Dt@G~3lJQG>BF+>@KCkPg-?iuVC+LTF38=1Kpm`$sm(euR$~8tSZilex5i^w z6#;42Ri;mb)jfvvsixv)$BYzqit#y zsLgW7xO-u916l)^cR4w|AXBtjJ$|5HUn$O}|FjYkV@H=i{ru-cqcbFxE+>3=HOM7% zmpR+eqvAT>!5EBuemlke(vy+|rCa*PkYMwzvVZ^HYPI-zv7R*RbN(pHgZV}9;f6%8 z_H1-c@4Uu_>W?n?`9=1=S&lr-(aJ*G$9;y#J2rQZzrAW#WjA|=l<>!IqW1%=xTeTh z%IcbZ1h~bX<_GxrIDQTy@0@nyUmmWcdw;Ml>yY|e#!#O5!D9E1dy&Kf;M0BI^beZS z@V$}UlkNL$6P3$SX0p5d-|L(b(3v(S}ufEgzUl33VnY4UuRV}12-$|o>^W% z6#yAC6`V6qC4H7D$gdF^5AzmA=XZ>ccPs4JD**fUY9L^*gm2L-0 z!0p{?aODfZOE2X_s@5evc=+OdugB;A`?o1rCo;Np(kQ}P z`j(69oQv0FqM3TL{v3Z@_b}ar!Yi3Pp${^+BkUWf+#SRKquMvE#tpl(tOLs~q=U9c z=H~+oTe%?uKmn50o7HW+6F|mR@Er@=4b^&aoLOz23Y%T#osy~5g_y1?WT{pk4hs&q z60GVj$^ARJxH%<#fzS$9)V$u*NxmZyr9i zp;}+QC&rVjRNp?mxUPXEg3EyAic0KU??cy@D65B#o-e+b0hfsU=J%yW!9)GMTbi1I zJM-C6;}vgxJq_gppeQ@gXJ6@0h>zobvx4V(u=jO%UpvP{E-j6EfYt7cQqEFeCT4T% zvuCzH{0p-nRT?RBx|d4TnD^>DcfuVCAZU1;=UT6f*HV%O`{`qQfLu~MNJ4%N(EU_T z*&!w6T)eSE%qn}-IaFc2evR{|qM2qbh9{{#8M8TPIk(Qm?Hs*X!GMQ2Qfe<6zw_P? zgNTw?reHj}MNA@9E~#VPew7by{w(83xNb_2{WQ&6Bm9qw^qQaXAl%M z)BZnHeRV)pYxDI%6a*1WkX95BDFvj<0Hj5fR8YE+?odRKlx|Qdkp}7R2I&szK6JzR zX6wE8_kQP(dqv@}&)(1T%*>j#);Milzabb)yQv!#oFJY%p8*+XoaIX5t*I7;wCXhX zmIz+w)17hw6hpbMl%MVh_QG=Sh#-I8^e{wUD?MIV|yJn`I9S%_cq!S^S9}5opuMtdk6pYv0cq zX&aA-ntL{Mg*m|bP1i`MLbG$fMq<}8dd%i7H0U+^smYON7I5*m((v?G%l+n`3j?&H zf{?pH71lDF$5?2;&UB;d-N1BqqJ8>-{YqrP!jq82yuoa3%6^tOdI(7`iLtDHsxTY<|8DXOFaMo@_x(NJw6n&71y7{T-VXfzQ0> zqjC=YeMh!RC8R7l@~!PsUY2nh^A^8};)+u{jI<)pw!@HGn<#gLY$n6loa)wApKo=N z2p1zE4TLb9R2%t8lBa49-KSHham(!Pc8ug7|>)6so|A*-OSyJFDuy$UE%vEKksscO~5{!1y$Z3dj%5q0Ew?H%NK<^gsBxB6C_Go&n_b@zfdoU)h>sFlrG1v@SzFjoPy? z_7)ji3VJrp+17}cW%ePU6=gG@e)8m4v=$ihz!{(e3IVY8JwQh{F*_?-_5RGvC*uE} z6aq0<-(~bY++J#6tvEsicYM~%za*h4*3ezy;sS4%WYY@tJ>@d{`=BvMK&E?!hXeV! 
z5{TeD&PU+Qu_c~+cNKmZ2=*ihktDFx<>ZJ5#$Gmbcgtt406|a*ei)>gU!Mffm{Pcj z#=ikQ0b<@^S^`XN5Zq~R&3A%IP0e(Ctjs}N!an?v?prd-aL&^Qt75eYFFf|6B zgeA>p1inBByYQ)+^TC2T2mERBO-7qRL2>W?v0>gU+*pu#y#snP%xkKznX2<@08FrIKi@oO?k+=8z3tOvF87ZGn_6sY{VfHciP=G(=2{kkyptXS2eq3fRIuxldfONJ<| z#r!DV00nv>Ru?3Kz)NFu*f9dOe_e8%Gk&GbxRcn+mlVt@KPNy*DPs9QV zwd<$8yxq5dc@D@nqVc%fx`+k(z~;HnpDzG29&~~_u((k$z|+ zFwo$9DEUHxs{XX_?HflYr$Hds0@Yf6$aLr@!r%`H2?;C|G@m{T0%_Fi-XAZTB|pFl zeF~gyWacaAZ$Z!2rRuti2HB%hfe9r_z+p!V1kDnpQNXwCOp&LD>4>#34FeHPLVMo6 z!Djv!A{k_8*jgD5gQdvkbYMw~1DrF|1j3MphXwMT9GKM$*N#Hu<)9ewA?NY~0v;P| zrLyqAgX&3`jiBFmtV z|8PP13+Geu3+9DPn6_(_l5Pd(shM5eo~;XO&ZPiDCUCNz&xN6WPuv0?i(B zF(8A3@iHN^{J>$8w0SoXmS{;Q6Ws`U7;ek?3owAUdHerFTRG#0S7&#f-D#2jg}7$q z5PMQe*=ZWwmJNaIe4ptW_)#28H{LH3qoKrn;z#;yD9Ooj) z3*mRVBV!IjBp7lx4KSb>PDY4ZJD8kTqOdry-kO`!04Gp?r=r+39pnc=(YnEW_1~jz?A**|)ZbWw)02nL=arjxVnGftTwsd*D?$7+t+Ek#`XiE{o1#9A zYkP{ipN5BqfS0I>@WFvxhERjy2t54-hD6wZItXjH%B1v^H5+3C+tg`gyJC3`?Oj>E zwYU?e+3Yhgpj39!Yj7X<8$gSNrTrL|KRbUuaO)!kWkc?~KxPJm_T@L|YhZv{*x@c3 zne>??74Z;;%Y#BIhQIgUOGgf<0BkMZcW5*^^VS^Lb8TJPhNXVyI`f6Ro^ETYGgQ9=ayU>ofZQz@=0XO53Zj6_D1uGt{jFF2*&81kZ;UBn3-R(so+s!01!H^?M_ND*vvp|%)?H0=GrsG^o1&nC zNbGM2xG4q$(xkd=pkW_erv&UtPtex@vFWGDs7mH5pvB(h;J5`F59AyW<%6A|Aci9z zh;JM9Rr%oB00%3o-j7n!*0wOGqoKhA!p$jUDmw&(u-QEb-=N0+c0>X0DnLCCafX5K zKb9Se!dGCdG{m$#H6hrY_N?dFTHt5Fw_)D6C5np`Tpmte2aRZ~9+HM2pNcwdob|){ zp?j9UAR*8eAB%pte3kSDk^6}H4Yd}=>sXntr|}*WhU(y)!0GvC3t83@chsfzw^GQ82zsfXaYzP9#7r!6u~wLnIEtD+}amKqM6gy65B zy3vi?xDTg25+RS)YKS0edV~K}XPPoQJQ7F@>RMWm8QrfrC=&W{_lmIO;i6F!+w_K;aQ-f!>?6 z&jQ_*cn${#M=FxzldCHV?rP#PJ11un$X*~*NpKheMkUx_h@~sQDAf+jLuwFd0Tmxn z&{I=WLwE^xf-i%?bRtjXxE>4g)AL~B1AKiTeCfi8-CPcd*IEl}#nVKI5KI3RFw+3R zRP3(|C|6ina%=*btOu?1(72CmVa{hOJes)&YMDvOVbx$Gj5w|xpdH<W(d%`c*VP5itn6$UPxowKaIh>p7tY;IP!hs(`2Z0W zE`dhG3pwh51J*52-`wm4+3F=SGB1#8z)UGPk6}jN?7AbIXC^&0#Xa-Q55Tr-v!q7+x5`>!=9hEP>#`aDR4Y(2HE9|) zOol9VjfO>doxB9xgg-3YMIr=xH z5NfHd{a}AS%JD+Keg6_Cv^&c3eW6+Ytas5|)BVCEHQoOlvz|mB6jEoVJ0C5NoFoa(TB8DG*$vhH6tRkmO#!K6qX)8988?>Rau4%14B_>9H=S8~AFk#%Y zTp1_S4tlK0KR4-4!}-P0^YK0AHNaKX8j6m?8aSx1G8!(u=$aR0qj6MVMP$WG>0o0r z<=;k*%wm8R*Z?2z%_Wz<^CT!Qc*!s2GL^pDA^A3j>>D(m+)y`5lB zb*Z)ZMGN;&7ov`%*i)a!i;z)w@nUgUa4q7QE+AS0rrAK56C-e%CQr&rP$0gX?2G4m z<*)@()TT$QwPG@t_99+uYm1nhn-_bp{in*NahT4w#Yi-jnmVg;`zy=SCl5NqgR&_z zvCzYq>`?I~D;7A{pcy0X8r+8Szd3(?{`Pxp?81{|TySQcTZ>?f=5u=gz^Qm)+hP5P zbjHHS$Y*0e8-24qnjc5li$w$-URbDvv%As`F#A@1!-#)9te;5@&5+Y4RZb=Q-_&ga zImpWLZ&W=1B52d6c#lHMDZ$df%JD+$tCt%H=y@0xGRrl;0fF1|)o{UDcV^3eg7*nG zXg$QktlPS4SDG}{W7AaI#$tvCtfo0=B|;T;^f}11ePrYKb@sjpzAUm;8#FaANFqtD zauS@fdy=JwdGocS$nIFypf@Fs?wzTzagE}m14h<|YWRc%6TovnvfE~}m}L$1P%8}$ zP*K~MwhXN7_y8VIh#vp%()M&2$gcuIIM@78JP}{M$+s3=ffVIrL6Mi?h$%m+C9vR%!@eQq`|*xed8AQHPg)D<>5SN>#5z!XMCQ~ z{4R)OU$fg@fo1=yt27_cYi(m}7&j-9lwr0}m>E3n)E#O(-)nO{-x{NfE}t?i#&G*p zu!+!#*Am~{NfR)XkH778$63W#&VVuTd9Pl`54lv)kghJ3Jjs=^U7wK0)}IAT%w1e2 zI2WX!1c%xRQ4{$$wDHV2l#g!~(bvp5cg#h(4Lg@F4{qyvJ$tArMo0@F z6=t>fS-+lUyezJqJ->8Yq4ZWCM5QTzYM$D)cTSM>B$&9>`%vv(6)E*HJC5Xv_aM^f zXkQ^%>GxK=^Wy94-xaILWO+-^z}NqdLW(FzR&J}5o;J(QPE+3`zv%&8bed>}%MEO% z>m4a--MFK?<_FVe&7m&tvzY}q6k|)Ttlz;Ae(fqmZMpPV@5$Yr9l|Y2?qD*jMIazF zhIo?eTqcpEOLzw#wd0dd z&$KP4BqQnaT9U78nM+J~LwOFbMvEJ=oKaE^#=uSPNqfrQ>ohu!Zo_rLNm%0N;Wy85 z!mM8zj((K`^heWyQfFqYlLDSXQ-6kc)MR82AMXjk^s^P=Pl1V8w z(|v8^C=TPnNb{?AP65s;V7EEL<&^b#Buw3h?%utN9S_}=e=c+73=CpD1rB*RW?lD3 z2xEO&iH%auyo0FIdSXfpFhx^2k0T~u5)%4ZE_DXQ3>V~RPoXIUvsl}fc%3%5skcZg z8$0M<8^^H47Hc2vgR+vCi;D|v zDH_9A$2B_+ZNL={F?`buveVrtC0HFRYx(lPP3foJMF3sEd4&w$3t42PPjgC&*Jrrc z+eH(GJifHUtUG3nc8sbn)D&YkC6_}j3>))|? 
zoTe|eZyue(BxAp|$`yWe2A@s;NY_@qsP(toA-CgO>R5ODZqv25yyeFgV(q~lo*s!> zFJI;)My{Qd?8FpL#MZdsX^NV!wlcIw^H#Zz3GA6K-?rZ-uPve8y64n*VPj&1Un$rc z6C0x+%SFe_Th~9#X*$vEj~gDiLOL_M8oRiY_3J`+Q1xI=iOzg9p`VQGG4>U^{pIpY zSe2(Xh86425dQGuiMo1N;o0joWN{!`WK zCFHVgyxUzy!#UD?)>OQ0rFBA4I$khN$hXQ#=ygxlJ6U5NSed8`d7i7puc_F6D(-5oR96n++_e3nY!b;yFQMSDBKpL zB*pp#OX+9ZQQtM4x32Hr@Ro zx<_mo;DLdqyZ~6l$iON`DhL2GdwkbUdHecm1KJLLUx8pc@(cVdEZf!)Ypj5|ez&fL zz>jAv&|woz>Xkw2{g?-RX*&!%<0TGr0V~YR%=(?h%*-kUGnHenwwCq@3yvL?`CSN| za0qB^VoNc$KC3+ZC{LfW(Bk4AJvX7Er1Z2xPJWTgpm`yvVSnBYW466P%bE77E<&FE zw(7}M^O<%{UO|AMdQ$rDzM4CEb7BMuO!m*8T6M+gw#0X$H87(MwJUPb(XX=F0^Rot zxlLY{{D^vfVr7Ptel={6q5HHU%!4A+;r3z;O5Huq56@yt`MM5QwfXoUR zFF%3b%9(S-TJW8+BXdYS-55kq1jyK3V}BnXse7StI%dPu0G|wfGD&zRDNP2@*t+9i zWeY*?mjI*(ejt~-D?qMs7fLQTG=xD{E#B{a8#q4X6S{iWemg)t%DJ2DRrT=RM2TLn9nw!uY_z~+IvQj(?oE0DwIpzId@v1a^Z4A z?4AHrwqL+g9JM@}H%!x)^xBqHZf@Pz2G-%H*;*A{8ar2|-9K zOqSb3n}O|yo+|RT0E6}qo$?Iigu7^cfH(FViOYM6Yyf93A`AP{R#hlW5spI z;M=A%AS-BR8KiOm=O+HOij{^U_GC|gZ}yPXPkw%rK@XfhNhTUe@0FDtW!v3sQ_Z_4 zc|b3_H`f$}$#mvFQE_|SALp{B&~MY_6mvd`^QNk`;_4WlX4l@mV28v@Fu9X+g$N}1 zwb?g0oYT)hJBdj>$5X-dXG7Bf86*cvcmiiwXh@ThK%@uwYEnPoeJ;!2GQcY7NbFc3 z+6g@|Aq;Sc^suiev{o=L!rp>oMu#O6gmAwjm$VU{P zC%a5cR$VSoQs_YM17f-?oBlY4A+QJP%+$3JC9wWx;Hv^#&IiDYFqUX8=2OF8)(vfm znY0Wx4URu1<=m~oacr!&N^J_HJqHTYnf`4Tesdp4?HuoK^VyemNASJ*`qN0syGy%e zYB|ui(sg*G)GP)@+;r{b5nuBa1sp4?vj#`5NacK~*VTslp^wn1$}v1tKp{!C#EaCZ z_HhVyJLti>cTOb{0#5jhVXR-DN0@W-@Li_3)(8q$JVJb}RiYMYyrWR#q1-p429r~h zdJ37mnR$95;721yxH8wA=*^+mRTaS^!$xP2#fcP z&dRo>eyJ(BQ|ganKU~_QVT}MB`t9M4F=}J3!!$p9xtsXXrRpTb6%I+k0DfhiP@+ym=6{ zL%DChXv2_SQe4DtckmLqePHJzU*a)iNR_M|WzE!H92xNFaLQtdPFFTgs#7=MTb)#< zWEgV??5Kf~CE46;E~BAn2KL#ca7%4t&6vAJSw4o=^zdt$cXcX4m4mF}^l(P8S& z0CQu3%@SAZg&fkV3%*MXjyhI}~odrVv$HDF87@1H@S zKmdA|+OEL4YUel8ViL8Gq2KzUiGWOSF`(fVD?9$(w*$$RE)`sTeGJ{4%DF?iS~I-u zv4Tx-Kj5(EqJHtCzqXx)5uBNfhV1T&?1wIAm@M-;eoPHxHJciG@$lh$aR5YhGe(|& zS0eR~PN5GSwpx}N#qk9VJ_#Iqb!`H;Awg{)&Xq}PyM;G0e`{ARXUjchbb(i8d)KUNh#DbwKKo4duzaO}Jg-9owoV3^XgnYl^M^+dtqy6}CsN+aCPQ|g_bPf1 zZ7eKwyW8D9r}@aLKqZFMgx7c{5qo+uVOJq=&a$0u73n3*dlt+pv2-kLGJ{qXhvvX8 zP36W$vq3zq3_Bn4v~E93I14Lyq|{En%C^0hF7=Zh7?MgIdine70tkviz;!UPNgjxE zyqBOW5IP+g_S~+kcCBSr1fy0C7D3RLSA@L;tDI-oA&w?uH4>k=eeeDZ>xB4;7JDNm zL#hZrm_f11s06w}NJ*6EtC&4$d$ncY7RxRL)lY*%ogctzX+9nx-A6#o@6X-wL#PC9 ze0SA-fNx9O`ZjubIcLM*SX7o46mh|9!#c*AK|1{1lhex&Ym(Y9ormY3_OLq^{%ooI zt4;nv<8*Lkn63tNXHMhboGcxmIQ6{`mn`2i6c(;zUooutcphxBv3}(lD&`&oN}tq^ znLQ3XDvq;!GZxlXBjSl+1Q)R{>J`qUCc4JP(qKA|bO503ZGRIfE@Lfr@*~CXKx7Ny zUx5--2C7jyq=UP4WGaYt#45;b8*qi;*=YbYA-6-OZGiPTh`9x~oqa^(_1A|$+=&<^ zHq0bxF6WdZ=;f54?Rp`sg%%ZILr*-Z1*Cz?9~u5NDFPyuoIYJ;g? 
z$I(F;ld{I0o6n&=bj5p7VixHq&vwLhfNONpbQw|{!mIRGa3zTy-uCj~W$-&dm19mE z0Q25zoPs;i0=P1JPeTn64nIG?-q6p5n+<`baS+^uHcJ0#r`vv88UsrEqw&ASFO zuhXM$Ts8wl9ib_Vl_4!$hU-Y!V*P@}=y)xwTI}70i{i=Zeq+DG;%%-#?u_Rc5cU)* zF}uT+0<0%bo_v9T(;6ofH$Ojrr2!g$-arRphwd6s01PCYh1vmkM^b1MWMtJa#iC6U z`l+&TvC^6dx8WQy(_QkGwkR%A;Ix1%ga()az`F^J_LD;z4PBl6`FMZ$EvPSWtpL2s za6O`Xt*_q%w)#!rs)QIy!Cv}PIyas(=pKB_8FOCo`s2r43!B>~j^$(O?;2~LW2;Zo z-nU(Q4Go>apZb@93WXRGLc>WO4C27oaBX*2R?cRmzWyU{5|Wliz}cG?-1ndwgrZ<= zU<#ad!Rzui&^q9RdRTWU;?+blvu;@awn)mTeqx?jF{jX8T^2F zcoXO{{nDTD5bz&hkmTa}zb5ORPjteFI6Z*R%s`VDA|%V)ggnQf0b^sk2u-7PcI z^v5CC-a$5vtv6DbZQ@Gu65xMQ9`jqK<@-q~6v2O25?n$&k^;3t;6QpDV2BRg%~t4R zfT<)g=**$zIW;pQ0$VagE*=LmQidUWTU(^5;(Fzr`w8&9YHMp>Lp+2UhK(Z>;VQs; zfbCCpxzpj=`nvd^#9gk;-YPd6aFM9`4I9{}V9oe6&)@}oQt!mXJ04|?IXZ34P)0QX z8iHu}g^`d9uUqo+aBd-nd$3~`?$~TTg$8aoo3@0PF~CPajlq6eVr>kGaPaSJ0JXJj znIICelY46kN=I_MfsG1Sje6BT{^tQc~q?C+zSPC@)x zpp<-vn~ToN`u^vQw0ca-yl$VRp&`xMaI40DNC&B=C;wG&-$(@f0gMZgKJlMQ8u_5p zq??wxFaCWyz43AZXi+0;_@7toC0HWx=U8k=nJEN*QoEM!w(~fDKfky;e0q41e!f86u^``ohkv$GPpX<$)!HlM zwg{nrG-kf^6AreMy(QMfs2X)=$@s+Fq&*b2`VIO}?H{yqa;?z2BJ=N2W9WFv%haVnUU!5B4R z@Z_0xb7%h4Fj!y6Rl(2OJ*0QixN`bp06};z-?iY%jvhGO;9iS+Ty#6kv48{=Ou`!@ z7?GyfhmRu|xBa8R?w*Wc;dhrVrqWo+{mTVF9*>K3pxnL2<-c!W=kw@rH#~4RZ|VMY zcQ$nTVzFJJ-2e_x_4#e&di(WJ_@RFvI{*E@055<2CrvS_OfjE`*AipcT1^^aKnwl; ztgKJ71K`8ptgNSi@BEH!cUOg`%2_=8>Z;BFNKzIIg2Ka}0O$-oqUq)!k2K|#?p_u! zJMvs}m~M`NsTI1YlO;R>o9c%}mx({7Mv)_k&Ua&|qQDR$XtbAkbe$M3AMQo$?RTv3 zkT$N}laov8-`UwAH88w;4RB7uR7IAC-9G8=9eDQ?xx*KtzEN5dp+dp5lEi-00<|Lx z<8P1VZ**G&@@rQxC&ppAE&9VrL_`F1E|z^q2kbzHMIp_}bR9}Tio3xe;!YSc**wT= zjSwUiQm2-WI`EC$==^*UOrny8blNh7*qt^Wqe9b;VoLYVg;TO?yj3S$9d^_GTFsJl^Ior8*uUB z;}7Fgi!;8J$*8brwJea6D*O4yV^Lg@K0aA#qD{+X?zZ#bf0~#yN3D8 z_4R$kD1ecr;>=x$%Nz)a%AM+ZObT zK}j9}=LWpLQl19sX8B<#?>4#IPh0WaouGPOIwp<}DpusbplF|N4e59y+@=J6{wYCPp+*dBs;6e2omNn~M zrVBzph#*zuvRS-u^bk^fdE+QrZ_1#gwtG&(5c|c;K1$n`tsIM(o5Ir{<$IGfb!>W= zZ8e2PSz;z0$#2Bse%gN%-+*=D%C*XJ0Y2%&zRdlQ4|}$#10qU&*xOJ-xWB6r!z^X1 z@w;%MUKsYw9t&avWPPdpze7`qtJy?n_q*FgOGlc5G!Z%v6^(!5j!qyG)+C zIrO*T*xu(X({u*LXeLCDgzh0h5vAc|&r<=P82;?yK*FnxikeNCOLpnnhG}W(;$WQ& z@W`(+O+%b5dk$pqUkQVh1Ale}V9%lWxrrKIA%cWOtP@;HxwK6{g@tYg~^C;$Ufs zgVfcdnmS!9QKu21-W-0jbu$!L$VBF!IXJ}nT;ZCWk|tvgEkRfyQH`k=F%pi)L9ZHQ zz2}m7TTXuGeRMyaZ11;m^^S&ix{w(X>mJ{-_=Pb+{=jxyah0EdVuu0kGT^@uM4w|n zV-7;ZJ%i=jv&=uNebICxC`m)n}HvPvc1l|oSE+f#Q(0S4WhDVDfoxh$9P16+tA`HoNLq{LOacL zvA=q9Uon;+1wam1Fa2Q`49&n+gl_HWksa`nX6Q|@eeLSD1PQ8_XWEvm3$pR=fcB`}Io7VUQVw;?gatNMy@u-b#|onEAS?D~tU)iuPQF z$tvnObYX+1Wzs_ryPp?4xZK^knOAWfxM~eDkJE^%58J^yaz)5&`R>=M2Pye&p`*Q9 z2Y{>S9QsoVR>oxgB6ua1eJQaqX70p=A2#Powiy!iFWL=~`jTe6WMIAaH{+p-?{IiU zD!BG2-+0V5vx6BRr6(NM3Rj&%#HU0z=lY~k9EOAS!9Q&*Iv$LEltvA}Y8{uNF#Pd? 
z`ts$r2!OS%yY`aV${~n7tyniirKx5640b192uO=9t$IDN^+{7I_AZ^svr?G2p0w^P zaKg^=iW^rL;3`1brc$%xyOwjcBFvMB&d+wl)?1-dlbq=NC}(qjyRkV^LNN^Z%Q-6k zyfrh7+TQ+%gjp*6 zpfSAGIR%r2Dar*4l*+Ykw1hF$R#gRj|NaE^GDlmtvf*Z2^EEN2^{Xe&#Vw7oZ0Ebh@y{(Es&#z=|)jM4@MwP$({e4$yc`Y|K^SoKwr z<2pa29=f}J#sEp3hg1ffweZfJtNZ*JCA$9q_7*xN~z}cg7 zxZyyYpgaDK>A)^exornhC8lFzY|60{YCxn=QcVuP&50}}-l~Z&84(^YzAT z3z4wi+Y11K!D`;Px|VQwNUfnAV}4TaK)o{;bnh}iEFVwfUZkdmLF}3L96HTmm~5<0 zTztLsH@M=a$X+T?a$nHU_?7}`>nyYKgQcCA*m)IEN5O#gAl;`)M~o??WNNpYm|Htw6fJ8wlfJK75S>`peNMzKxYfH#9sZ3ORJT_5MURVpd~ z9t3j`J9cNm0#9%Mn=V+gIa=A0HZEX|v$ucnIb82L&>#F)7;l2Z2Oa^q0rmb#C<6!_ zT;+_B8}C{>~LSC_S>6OsEfqXW9ra7lksTGNGZq8IHqX7^n|5g-j2Z4HG7|} z&MEPqxT6IiPHd^{j@Xxwy!%Dwb2k)Mbuqua5GN}heyQBW?wYQyz@ykRao>f=RaNnc zSvI6H>N=Z9#gbj{kXs08i*B;*Mu~`AmuZ|S6*H&1W)de)V^Q73>y>Cuw0J?tk+4CvK7UEBTKRla9%2UsQ;`~7Hu?vwi6@f{YVFWbFSQdPcvHsV-70VdY`eZ~5UPYK>U8heET z8k}St)}wCzK1dqRhxETCBk9tH1qD92^;qQ{zNBn!NIe^(c=r@bt{ zf=&&)N9KElS>ZhwpH1T9rXE8Za$iwG%7GR?q5QWqU1#7b&0rFUk-L}W{K$$rgY<1^ zp&(GeKcDwp0h$)|xuK3u{v|>~NL66L{i3&iY3o4m*X5YG%CXVbNY03fFTwy$elzq? zc)43{Foda7l>hx8=9o*}v>FmdojFKkc`S4sYfQ=WgV+^KP=d#{DDT-e3A2&+JnYh$ zhW+WXaH7HyG^VEzcN2FvD#m6hY1?skrN1@0ARMaSq=AArD|b!?O0j^H4JFn&Nxb1tBCL;2` zC9&~RPpE<1^|(et%WH|p{H@TsnHj3Iv-~nZFPM2pC@dz+GANPo2ldvvMB<%Ww{Cqt ztdiQ&pp^jIRZrcjhfr6)1Bo9(d+PJ8$X#x5htJ>q;oi+~PTh;B%s$h{tNeC6SvFjJ z`NmkUC$eOlBDYTnjvk7vb}SY^lTu^GV5D%ijoh?cBxLbEBHih7w~&h4xN3v`aPOy! z>Ee9@@KTH)EF@`*5`nv)r4VQNjh;seYA-+1Q#MI`0@t z?Uss%5Mk1XtrBdTx(bU*EEf0VbYzYpyLgMUYZq76z+v|2e$=H9tCDE++?{I? z!T)nx&iYEUKJT{}?&#ktcC2g0>>>A}dT1yTFDSV?T}+mV=~-Vnz<@fN8IO?DHM?xx zkhySWuBClNb+FtjSKlE>hgrWXjaD?iGq@7nnd`6Y;QF9OXeOvr{WYlHh7>a2Tn5_B zy|tR%LYrN}214p-2!yReqN1X|eyl{bs_g~o4@a3=B?9PO?|bqA5Pxg*Ks&jqwM3`M zWH2ez&53&x@xc28Y4v(9Nqmvr6t5zSdV=kzX9u$ZVSVomD`oBcmd8T8b%SZ_ zFi-26d!;hnPGNT$#i{InQ84BUS35iy;TT3Q2Qv326H}?36Mmg@&D&E;H1ba^f*dy+ z1OrN#5NmNM4wvHV6FbH-ar`Fvqp3>k936uTD;RFrIji9*mA4Ko(Hl8MZyW6O5PZ`bBdHl47F7x-|PQTB7 zin=d79o<2|Tx%4oOXT5^>EZg!fSi$1$zt|g(^2pF)4)`4dz~^q--mR85YQKQgApV*oIkvj|&Kmkz^DVlG^)af62HJ zt$bEjR~K%tR->4JmD{IA$-gDW-#DYBtfKW}DMcZk6H=qI6*gg3sfuiR1Ia0U0WM_3 ztr5s&p>l~&9(ya8CP2e$E>6~R4D#`)z+L9ypPKF8v>9}|)r)OUQq)`8ja8+Z4ynox z*r4O|Y5u<0g7PW*F8f{GX!&W87s=mm zT6yu_xeyWlKYMmwZrGP9Zbll$Nb}`j!&ye84&<9 z#jq2@YvyCD$|W#p7ZWX&iH(i@Y}wn-Ph-c&G`1@5s8{#b_(ADq;^uF5xm#^C8mo>XXmHJM$cs8T$Ri|1ginG=$CWZ{C=nK$f!V} z{mdmHmT+&F=L07=Q0o?PE(b`-{+D0`=za%UY}$Bv*l)K3^Q~W4NPyn-Z%d&9Cw=Kv z{(ZKQhNzh0EVEIA6PRpG?@tNS$!{h(K`x_xFe38&&I`}I!a{-=u98%8&5EOs7u|#= zS7YwdIiIBHs{^Qelw-}_O(*+@vcjweDU0UXnjAu`fPHmuHt0r@6Oh<8HzV7&oGQrS z%OG@eX~r&T9K>Ni#XY;gs@PnMFw*t zZVeia&(qZyMM_rDyGfmf%x?kjTMUP4dVg{FaON%07HI3r-_-~v%9MSCF$tk?wId?} zx0Q zsoT_*e~7loS@P#)ck&lVMvi@WZBYCLICckKREsTQ-+fFpPK-Zi0~^ZDS7{=SxB>S%0R7N8Tm=q z?DP#tQ6bR{TmayZ0lFanyf&)4Y4ZR50{-7|V;&e&oA;@z_M<{LhZ?FO2f<)lfXcqx|#tI$&OiWD2P8yV`W2p}TYbkoO6a_TaiIE^#HpeZ*39Bka3RgLjgN=TNZpKG1=lW7NNi#$`K4x#$HmwDZJ>be^uX*8a5 z5jCZy>E%RJFxRGtG%b+Vf87`av@bt$ywzWpnPk`cGn4+V&3ec4v}eac)M&t)G|=;n zj3GZ3+b+WcP*Wg(ad7ow(^<)Lb=PzM=EJvQc9uulv#`pH$Upk= z;|~_=Bcwa4L+?sFaAAPden$iOC5qZFf@nEt0jNbF{U}d<46)_ra|H7FqK-{;=0?yz zZ%TOhmuZ!R;rHc_A1B19J3@IZ&?F{kU&{e1`7HEFwE# z?EFx7K6U->1ZS!O6Z;CIdNvy<46w2Q3x;{1zrO2R03H7#m?#a6J6DZ&kSOj_0GpS) z&JHN!dqB>CjOIEgODH%+$R|7BEcFVpr89$?=Byau^37Y^X94h#0M1V8Hu)Q4O-<|N z*qC9UIVI@C*h6Q>1IDg8tLMEpRLuNb*Y5YOMpR(K?J8tG6ayva;32@X_za+PzR5w1 znC3zc_ia+S6@vYsEJK6^sMi&PL8q{Y#;}MW6Jg_vkc%lpP5SxBb~+|bqDE8 z5%_QeJ^+XD(;r55kg`7vWwO@KFEL*@<#Li8@oVBb^i#eoL_^c<{KTwMD>@7-@SzdPBKn?nX#kUCCU${akCv7I3Z1z1AG}JL&4YDE|d>N zvB+_KCX!GcUqF_3Kv>}42;zT{lCmyoydO33;wN|dkago457Mjp{`D9SpWyCr7`%7A 
zZ0rMQYRDTA;S;=`48N$Psw!Xn@{B0qzUQW{Od_|=!Z*9;>2gt~I&ECEnj~$9emvdD z;X32$CHC&0G)2RQ2q^c97~Tmv4E66FjL-nh6$Krb3(#sqs~cutN;>Tf$O+_0x}Bxy zyO6D3t4Y%jO6kwSi*Io84a#ki|Fj>I^%S;xn5G9&_)1F@tB|^M)T8ylL=3Svn#Xzi zdYn&$FgycN!7p6Er^R|}X|B`gdCz}&bE9rkih}(;G@ACEn(N*;f=L+j*&=e0fz>TFhhrzAp{ z<2E9R+o$$AA0bdTJp9(E`)y1o<4%Y7gLGRVw30F8gGKzW3SAp$VTLYl&qA`Zu<4v2 zT=h`rPzSyZFmIWAeTm+)$nLP;q0}pmQ~7CP7+N`=wZm0T#^)6tK7VCc*Lm}*snt@I zn43qkp2cQy0mG^lIfqVjZHIEvvY1c(p*Ix?iwbZ&kaNmT{~zG_8q#KFVIg*xdwM92NHUFCyuiyU{)<0n8-rgr3xMkK!LKbvLo9S zVIV@TAkX8(;8&GkIWOsHx&jN^YL9$_&v8&Ga&21=CY8< zaG(O6JMUR1Tvy5bWe(#`e0p6HmS}-Lr-l8#Mr8Id=#?}Di!fhhVd9g1*SD*{d_;(f zfV@}c)NPtsT24cctaS*m*Y(jJ@w7etd(h9h!YYUFfGdVieCHeWLmJiqpb24p(VSWS z6Qf_^6=N{L->9tdMd=d%q#TW!NQ27$6@SR-!zssm2G@Xd0HKpfd7p0||0qv*oR50z zcNr`d$o1itD(BnF|Coc@pwU+y2c7s(ebwDvhwcnJZ0OX&tSfu@Qxy+_bP!JIS9fOT zf4KloB{pxNVGmP$x@~@CiTSPf&qy|gj)pJ<%>r9ilws#pnPY&q_Z;@qR1<8vj_@!I zobAk(ur(;#xm|WRgqUMx3#iIY9x1(`?H>(`i7%AZR;v~@hVjz{*DM9ojF3n#J ze21i>4&4?6K~X>nDD0kH6Af|up`XmF@p}#`AjR64de4KL%N=x5GM$;8rTl{GXd( zgIcfHU8OJ01ooq4SzuqYRcxQG2)_c2#u&aj4HSr6i`lit2n1Z20k176L65pAx`B_trn)rY%Qenc zcH+Q8?aFVrnfPQ(==uG@vf$DVzy-~AWv-z{$Ck81AG<)GH_1%1duf5R(!S!Phf{mca}K6J7A&XEov@BVMBhN8iJ;uvoj!iVV=?y*SKAuu zSUA}PJMRg65HW`d6=yqDFaFr1bOwG-0nTBUmeRBM+;qSJfZ+iM4v|Jh48HxdnpE4aHJG?HfYRFLm*A0-FNz9!Nx`vjrfy z=!hTtm|L;CbkXcD@BSPoVk8Ks{|%iP^1hRLI4IsORPzwvJo;Ea6?=30Bv5^BJgySj zl1fF6c!n;A?uLr2$;GxA`wo5PO&7*?9RMd@12FPVFa_|39_ttMwu3z}*fMS4x2!bdf zD57*omkKCKhm@2cpmYz70n#8P4T5wx4Bbc$h;(;%_gUlqe$F{RoPXfVwJ((o%)IaW zJkN@It##kQl?1PQp~*pfwd{He^kqXIJk&@8gq?)jGBtFjrk90NI7Ni0g~kf2^_Fb| z1YRC0)2E!ds4Z8+j4r%d7Fpi^TOqro*PO-$@2Q2D{b9EB4|KIKGBW<@?c*F=Ii2`f zqiDe5ap`iDCQN$@Bm38(JquzbXv}=Q9o{ujy{C76xql)u(*)jq8^%6aPEM`PRz50h zRe?c}jOJr#Y(MdtHC7S70@DuxPB{t{NP^hc z5d5*4A3TCic#;`9x{P@!AOV>H;n(ZsQ1( z!BYFf1qZUvMK$r6RkO@n43jXeCBa3ZIDJYam|l&4UR0cKXJN2EHX=t^b&>8Kl0!#OX=K1Sj*OQIMQ(Vn^aWP zkaUzl;KD*G>5(&0;CWLk0d4`NInoWXc|Vg-4-9Mo2;zU$@m%^BYxBd|u$S zgeZmr+^q~`(+sbVSlQjUMfU20_zg^ZR7*=lBf}a?l_5Y$-vf0IHexyIn&yye^2ya??+=pO!lS~e9R}FwzY;o5TOzV?@w=)oWe$>cPr?na~ zZkySzHF?L86uetCgD#do3q>TD()h`s0yGei|LXPx|2cLzQSU0S1~}{1JXO@jHmWg2 z1AQ&()wZE} z!tHc5=Uf0glW7j@$+aq#YtY;;ZhNX1&mnIJzer25d^ z#J0*H|1U+=o*aot+Z&!-J%14BWsHBUEhgPbIMZ_{T>PCkfYchw{0@bB94= z`|~TljOCw2AHO`5lDV$06Kp4$Jj+kU(Rup@Led*Ejgvv-X_?&SOq;a^Ui()&}NS(oqaR9XCFb8a0R zDl{>WL|Lu<*jN*9 z%<0L+;FgAiR|=?XU1b;5@}Qllx0Zk=gQVxw4(+Oiu`C{8_DEPkOoT!uWRQK16ynOtkK= zU0mG0DaG4oJSwwYpS__aSdqL*Y1o(ji-io6Ne32$uQW=H(L)@}@;y9U<=t4@@G`r- z?wf}!JJ${BYJeeh>T|;hgQdDFmEeK&jJ$*HiA4L^o&^1yt=V}K$G_CKp4wLJdXtyP zh08>&249t!vj6BpE@O-Te&_nApZ|)1Whpp8!EVD`>|9TI6 zL|5g$-x4Fdf$?6#xu|I3`zkuOFO{P^5g1g2(pYh6e$~y~k{y6B{n-YRH;j z;l?L#y%6Bn&^G)wh87cmSbC*>!gNv7(0rq=H#PX3JN!!O)k7vu_jX@v#FyvvM5-l1 zl$1wwZvVo7*@Wt->bKx zasM?t!6(Q+YN zE0F!a!~XyASm)77sCQ87iGtu8hE05kV;o+=tUURv$L3_?8TR+kJJ&dV z;4yNoMUja(y@+o+cxNI~TCICam-jxp)srP4_=m0ynQ(&cEz^&8br|X1lWcYMv%Q}k zj@B6+(Vn^fZs#N6heC?i%o8o^x-Bf${zxSEru_Jl^@jjgDne z;XI`e<%$@{OqL1sIZR&(aLtcSPcJt8eQPLJonF#iyn~h;(+pi;7piIYBUtzR39Hn- zH~Qm=kVf26z8U>7YI|`*9NYQqPWchgmYn47hO`}Q+>;Tap!{6YthH&zETer$*^g4F zPUof?CH%aYqd4tOH9cVcq*<>bL)~Si{Ptc`d1nl>AGY0Q(^6A@qv8Bnj*jOmj&fhB z8@C#!H8RNBQ2KZGI|ZaIe>CcyY#1?cR5+*RMF*Koe$Md^4xAr3d`KklCyD-R!WX~d z@FyGrF32w<*!?K3GNerzCqC=YUeo;c&WZ{yV<&2QQeS7J#(YTZh|DA^V3t+iMSvvA zs-nchtA)Ea;bUn)cd>)k!yFc541d*_X$8{AgofUc%tUxieEzhkWgrGABTLIIgklJi z6-NFtnKMgk9|@n!XUVu^$Es^b;Z!-&+OQv@C_w)|G8^%T8>)>_x{q<9dEO`{S|ho6 z8&`bJjZRVdTUp^uw)FO=MWbs66*|*dT>M=8(QjVr_P3nqOrJWBL>G@!#Jzl7yiBt= zcHrbOi;cG;^_z}i+~%2!>la-o+s}T^S8<3{jG3niXn9Fod#tinWEg9f5*DtS??mHW 
zf2>M^PdBe1S*jrX;5`Y6W8PG|L+VWK>84`8^P)aI{hc_C8sSKJS>uZGD6bv$3=ynz z*&x&anx41KZ^a3|baj2c8E4`U8;Xr*7x(7zaImaX_Ux^p+<_IlvM|Q}&JMjOiZg)- z8Fe;NE8Av%5(B&}TifQ@z}R^{ zZpsaAo)RP39#Hp)>Q^pge`NuWHoyy=}`Kk$PuhQuf}gHCsk$ zf`OhE@}z>|oP_o5>I3~U|D37$XS6IOI_OFDUTgvK7ay$3WFsS|`Gf?zhytuSg&D@x zj0uZhd{3!QRhJH^LtAfJ8Dbk=a^&?zT;>TiqAjoC9UFb0@FF)w!SQvmy8!9^4Z%?< z{KhSgAa*8Vm6++u(a}_r1p(*KZC7Wo;2yjx<@u_zCsgZb_C z?>=svkyiOhr#U|(ZTmS?0Y!_BX%}E1ga7NN&C+?5P7uAcH5C-qBIUClZ;c~qt3vEWXILel-=01L_TVYYl$iKeQwNb$yC!EGht=f ztnX2(W6AWpIi^N_W+>?()-qHpOholm!_-0hF5>F6VQ9-X1C!tX@k`(zYMbl*hNv~? z{Vb3i8mx21nrlfuEG@jz9sO_? z*^*g3P9`fMRXoSMpGtBzi=uiLZI9r81g!?c}65~qT+vr&{f3>YW)-%I|||D58# zum6AYrF|;iKUWHljnU3Qnvq{;%Xc=_^R*JK8chVAxKo?iy}q;R1%8eOlytmpQ|vAWMWJM4LVs1HJMt{g!m1fH&Z|%U9}{hxVn@JD1jOY%%zw}yhMgEa*tVzh z2BN1c1tle`(nzH0oqdaYj|=RaRk69|duscK{q{|>q(t>teyuiBELK!=k(F<+T}_{9 zYDV_Od!K71B}Y^zv0LdzwpC@`nW*)hwAt7_GBRE)Hwse8yHY9VZEwgHLrmG^@8Vrcb ze5v}l+>ZH`*nV(qn~ZR8-~Py}6t=ljxq6ql$Mu!PAb05LNQ8?r1swjn zCUGZ0bp&r^P`(5NTQfbAgm#W6p+;ue;_naFB*Qj$)9tpqV+c!6Lk%l7bjhRq=SqF6 zN5&OsPqRb3MR1(b%z@wV?^PO7Rp1$hzu{%s!Zfm|Qeh*Up@OGfP#;IP?ZuERUdBI` z$757HFh~*h(jgXnSCtj6F}UyJn_DfM!|F}zGSEqX}%roY%Kev71eX8yd-?P=N*e@m$eD!7lE(PvEvqA^CjE%+D+cpIk7AbE&rY?al;v?Q5h@nFU<&XSk&IP}YJ`g zxZZj2u|q4Rp{)JP`}-jo>OvN+nbrGFbKk06YwSx}Iy!0% zEzVy4YI$ELd@r}|Ox^qUY^a80-lDt_@tlJG@%y?UJIzR{=HQqMWX|=ohgDfZ4@nTm z&E0y*RM`#dTTQyQ7jiA%i;PG;Ouu9|VJ~gLh&H}Y+iWAp$=F4dV>M-ko0&LU*13J> zS!`)JZnXuKoxVXlGc7&AyQvgaIgnuBnZtiJ=D(?*5z`>qzxn1wAa!O|B?MhE;9Xgsp!Uh5 z{)&mTR@W=0hy8UI1Syywnjg`KScs`8qkaS$&=!QNXOOMThVZ9YX1ymZ{Y?RvTuY0l zAV3mLe(v0S6+wclKI30fez#YBM(w$Ne4CGQratu|(7%*?(p zVad6<-i98g5MB?`njpxWQCAU)n|R&nzPx>P*NWLUE+FIi!eJ@>68yvGQX^Z$Pkr;2 z=}}2phQyh;(DJCoinvo$#fSvy`t+nyx}6pax06Jm5tdM=`_wkrWOYxWxEosY&-WYK zV+@npW0pS0CVA%&s+XykhoGkAT1PgS_14sv*3GD3-qnoy% zG4Xt_>W{p_KEoxyu5==W&RrFJ*lraJPzpJWUy&3usTDZj6AiN(&oc_ z?-IV}qKR%hm8zQ`{{wa(GikDt`ZPX|Dcj|-IJtd!e_iiFD|wOz$&EI&X8vQXStxoWl6ic`f<~9!sPRnyA=Tnt@TcTvE z^%R)7O}j@TD6bWBk47H%Xe{&K$-ic%#1N8Gl_yhG%Hvp1Ove7iuReD=JxP1R|Hbon zS)rF{b4Llm{4z=+Rh1w66YciAyzB(ed~Ug8_iEw8;KeVEjuP*v-Sw@~#$-%QJIB|2 zqN9z&ws+2nlrnnL2wu19be8^?2ntZYCJYtfwEa#)6qwX{bBmx(Aivt)>$#k8 zr@OqTSHKQ!f&ofAKH7JJBoThB^9fd2q0y`P;tmvZI^q7PJyC|M+T7M{w4Z}pZ6#mc zV!swbg~yEKw5%wlt(R$GpqQ+Uq{F~GNBY`Yuh;4@7pY^m!JZX5>uc`dnS*#_Qto#% zgU*Q`Sb_^($!ruuPvVuEiz|fI#DaovYE;d`TW;2jmeeh#d1^Y%S@5a%x@E7bZH>J( zd!+ggSV$bn=u%jinVp4^B^d_?zKDp3gM)(`H*Y=%hXpK>y(t*9BB@w=j1ghjmoC*q zl6B=8?p!@G<+}?#G$mvwybTz0HL9i#Ww@Io!I9uxk2bPf>LzUBs;Y7c@4!h+eh?fP z)DG0Xi~Q!E^;~%K9aFtwaU8{)1XmsivxLP|n_)mi)Nz%Eo%LT=isDzVUeoM}Uh^PW zoA1}^+w3?O;_Q2qrRKLs!N<1NXz=Jslq$O`vq{1*Z;X+pJPnkv8}$pi^Na{aM!cJX z-WwQf{;l8NJci-BQb)T9svF{JhPYN$$KFK@VcTQAIUl-!mOJNpLBs$N8&K;JQXzxR z%S~7xx4d$VQ7B@i&F~cG4T;os?;&uw1;7}tN)2~ZN2XSFED%lS8lWjNl``5&f!zYS zhon-ZW3J%!2yZ|XI8z53;qc=k4Z>lJ=SFagXqG466SG^5ar`9(AI37PHEPbc11Jy} za*3SOFYf>A1z6s(=A&gh{hFr2L}J+QHM2?}^h_a+>>YSh#K zXge@+C4c|^#cOJAYPt!k08@=Vi`~C21A$&08ZARc4I8gfZM(-ZzknTUp~vid_gKpK zy|WQi%y?hUeOg{GuvwvYaa>V;s^jhNPs@4{>QKvYIm4nI6Jis>2X@+oRD}}CXv$hj z)f}O~-DsZy4xz|zaALaBd&3X*jt>qGFHFqOqf@(e2Z|TD@hL{onP&QBj1zG@F1`!4 zs}Uz-bO&)(u@^3lw=dn2XlZTNK1CZX3{{5JA0^?rUi_)*qx9s%-;1Bz4tVC!hm@bn z*Bw0p)#ILu;xcO~VDX#xy1X_yaaOr<;EbnSRN1r`4Hj{`b=j;3S))yx23svGQA;3>e~nCT zMX1OCoB}!k_5k?Bf;ZiB)s3;5@)Il-ni0Nb}{G^V+uI zVZBUva9XY~yD^_V>Nj;t2JA`m$iJ*Dlx>dEvM*2J>X^D?Rg=y>}a4N|kG&JYwO$21Cq4lLvr@Qs0RBP6BFnmjcLh&L_ z@}`k50-Odux;6(8pGt*>Mm&4=OlWLO2!Gi~>~Fk2o5Bg4b@S;@)a}umgw>gErEHZR zo?(4PFuoO_La>;bIqcv4U`%9nlDc_xihKyqE_%;M`1E|HV6Ig7?J<5#=ub?p0KPbM z8FO6b>j{LdAC2vUo=%i$+0`7YId#D(prr)zC!pcWvsqRWI-DW0w6cQOR1dZc@Os9k 
zzsaEIho{hx`~Hkv_iQhMLr4eAgnAPNUuoQEx$U0{J)^MYWzG-cf8QwUUIwDat6(tt zl#7yF>Z`f?&9;@~Fm)KJD*lLGh98#nOrdCDmw+6hb2pc-rc-%x=d|~7KRm5ZE38f? z6R_5%^1xr=U$5%G!I8!}srfmvZQ5hjC55?tz~Gh63w@y^@~&i9)uQavIO;9Iv=icw zOcGg+a*m2a?Y0H&BOcMKP=J8fEqN641Pie5Qt*3I9_X`29H3p#M1ZsSbx}aE$*?>S z_JAgbsG8cH!NEaE(9luVw8udr1$Q;!X1y<6XA}^0mVO1p6)6MQE>Xq(tPbsNJ`OHL z%c5IDA^V?ij&%uF)~q*?v8CMMWhArM~0vhT~$7YnT20vSTMP zfAm?@veQzZ*;AiBlLM=H+aJs)fp8;vM$PMB5V4*@IKxjly~B$tFXxzE<}HnxUuU_s z^v|`82uer8C!-E~He}ouE8E6rCW39GEZQ>#EK~Ne{Man+YJ@g+YqQGs8+mdS zTXh`9DkYI3m-k^-BGh`_`F?L*9oYMURa@{$ABQStX?X+AzM9RUV}=LwOgP9EQ&diS z*b#N<9{hs0OAJ+P5SdC0pP$wK%_d57BO*K`wcP*#Fnwo30ye>Iar<;3wwK{pZlk-KH8b#!v zsxEl&jZO~4D=1Izdeh>C7{e&7WdJgxX?iq0?LJf^AC>vDblW)f65dilF;2YAQgS(* zCDlDlhT}1@IzroBU?B7SxwmPDw3L)yf3`*|)J+0iQ6nSdN=#bVPn(&!dD_QavYAI6 zr{<{;H3{;PsBIb8tbuJFQ>Np^jdnmIu0;@2X>xFm^*uV|_d;O6_aBi{xf=0A6H7^-=p(|6*NUg~0l6F(`dY;`G`HF41h@{g< zi%uLb^qSpj6N$;#k~I(vmKIG+tU{GAkB_`nrjYv_+VH$(OxNdixwKO?LXgK4U5}cMjMspEf@XuP6dbbMqEOI`fTPE zY%#LKaZ<0UqoHc&wbM(Z4S(E)^AAgQ5fGlggeZtvm#*TFd4EjEZ6;B)Nob!~;i3O8 z)6+{3;{@F#5`QI)t$k2QKu`VOkqo-d=SUX-hT^*}@O9p0Piv_8F z6x~Jt230IFkJq$Mq4FdvuGdg+^Mv<&VW5o^ZP3-?@yJ z0oM|~E*B|gB_mJj4>yN3ttcir4ezv_dW@(W8{7FsI&qjihTC-ukfF_I=Qz+k=CWNM zdIb?_mCG5a(?mALhk1&NDKfvDNJ&ncf{g5=zz7p3=dJQl7rEqlI68q_ zi%H7k4W=d2)6-u)n%&Q>4T(nm?b}>0{RFZ)}@g&f4*$R}Ich zwziCd57)ytqXh^>uj&)xF!1veQq2|e+UyFQFu-E4)&*6LbnJ5 z^H=xCUSbVwsVmsP^v%_4rw3OM!a|B#F>=pB+pzyPO{6FgVb5g){AiC?{LspDisrMH z_$M$zHEpA>KPNU?(rZXdm$LB82CBUf8U8zFaMCcZnS6#YoDba2|dVnCsLjUy4TS>FKeV3_TG#-BG~B z!(&;@Gv>NG?zDYfAz6|d)U9Ivs|jPgpC&5$LN!S4K>VPcr@~{qavNB{lfXxu+g00^ z$7s16HmaBo<|na>wuTlKJ{bdUx|_|Had0*ph-6Myrz?SO9wSd>eiZMw58tG9Gy}ds zJs_MzkPA75LumPws`TIH#4NxESf};b*+$NObqL7sBEVG%$v`L}Jhl#O_kNGp@u95RV9C-={pcBaNRe^LHm~Ib-kK2C#)&?o4CQFbg zN>xk?g8O3v;#_znop%+A40~OzeLLD57i!q^ z1#U=%3?;cB5@KQ?%6@<$86P!e*%b1p6l`V4si=cHMVKRz^W8708vFZ|2dOOsL*@+f zX~>l^_axdHm)Kwjgs`4Ieqe;XZkhzgg&7$Wk=_+i(r>!uOfc@UdG$hh(QI<;s+XWk zq2#Wav@HpeV70M}vY*|eo#QnRVc<>rlKUIHf~04`GC~;gZRyvqaY=mKF+}>n5!98V zC7h6upf3GYw`7>tZsQ(C>@H2HClU8w<4C|{HHvSsFx(NV0@^Rr_z0XtSu)SL`_Nr^S_ zWcE1)V z@58yxC_MeA%E|~^bHmgI^n{=5aAo}( zjzLBdj79z)SwAyi46s`vTdO+HW7wno!4l~+YD#yic;{|k*3^&5ecnb$RDz?!pJqXC zu!HHb?Ro9yFAs!Du{l1v>ZELQT?l$6G$Go(pqQf>1JV%e_FL~k3H4G9aBbju3PyOk77Y%g>5PO~{Wl&<#RMyM)z6=wV)&65 zP0p%=DLfJ;HMg9coV?S)sp)CIepOXf1ThK8ojil&a_n5StQW^#M0BOg+2uS=2f2Ad zKst)aSV4$x14uXd$y0K3zr)D{)&OPxLKcm(kT)4h-MQ~XbjL}2#Jv79lwPN{4=#V& z+dauXK%JN!JsI;p%OBYN__&2J@>nBcsXF%blGQQ&p#YJ~dpDgs zb|nwZ=kx?i9(KGXBvg$H*Y$vAE$v(DV$4A+*bIoXv7?Ce)K+IKs(YZ5z`s`d6|Sx3 z9dS;h&Lo7bq$6Xn-qavo0J5rjZi=^oxzTBF(15ug^bxOtz40i1Yl?Emm@}P5T$j^b zWeo4>8tlDk&ZiJZqQJbO9o2mr>ww}F&g}eBMvgux0)nxy1R{Hv<5ljevm>K#1XTVY zvkCSZNWT$aj)ikS_B=#rIxNzD`RX%QW&Af)&TD)d2&@-C#11WdF5u$g0@n2#8HYYt zeT>vF2stEyfZdCe#8<(-Q_lZKApjJXL3@-Hxuy>4w^^e9bs_mup3q7}2WR?lNW&MCh_esFp3%&SAcoMwJ*o~ouD#qi{E)o-!B z^U8#0zDBWzJ2~q<-s$S38{1o3Dh2x2fex3BlXKT>y;;9tbG2lg-Jp{kqeTJAlWB4y z)sl%$K3-lHcPw09#0fb=?-2qr8a?=CG>^p>0ZmF&tm$;uTQQk18gAJi1SoM@nqVQ5Om-63_F!D6 zd}{v2joYFSGXr!L(UZbvv6Lds7D2Xn!n3^Gv-$abj8)iqs3PihFZ=PKgPlXV9gAgP zqaJZO5q+tpY7Gk`<9iTxWmL^~8yQhg*#-Cmn4sa!pP8GpR#jEmIXdd*sr>Hf;7mTn zjNXHPACBnISIW@@$6^FNbPmo&3LuBAHy?)q8N7zjg)cqsR?67e*mm0;+$cCrtWh1l zx7o0xI=(?>4_vqp0aG9qFa>Jg*w|P&e_P-UAk zNx^!TDr_4iqiz#@(Ce{`93KNgO30K`|7$qgVj)sXx)Y;T^lmdiIM<~^2Dn<|fZCtG zq5W*m%F-$ytMt&{@o=TSgr#&G8m%@~P^pX;V*6+dHj1w&sCSm+M}ebZIloFMlO7mChY?N?RbB{mI`0 zPL4tK@ZQwdsa84>I-ZX_o7T@-u%Et%m=1q8$yr(~qO5zIgr~2|{OG6Uhc|q-%dd{o zp7dp^h^iZ3tEG;&Uh4B13f%QxeGBmhoQwQ@G=&P2(w!%!uN8-ly!b7mO`=?JevaD2 zT**;=dzG1HRvN`HaSp=IC+n`~_kkEMUXYdbJB-x<`vCw;kAdx+bR?`p|31io!|}{B 
zJG}puux#r+<48dof68Cy_Y~J5op{^sRu)TbOoh@eZqlIv7jlvGVLVvv-X^FV2U_wo zxeSNWmSH_P5nYv&C-=}RZ4unSK-sPj#HO^XALv}irr5%Pdv zbh{(S97GZn9Umb8r)B3hJ5b@xmk zy)xs6V{JS32r4zd^U92*t8Jri*}e>{5~Zp2wBwm+KB>SOp6~d0zE(f}Kq31sD;%z{ z`5lq17vLB+GjOl=w*dxhBcAZSb8@$PjSb>KbAr{uF;fvnV@QA7>vOSBeCTF47QsQ~gkp3^ zmg_&GFAw4|0&i?YaQ(h<5f5{qt3A zOZ#!GJ7>-E^7bm-?)yGMqHfO?>k!3fLap@d!<)W8v%{1YB4$|qWc_?miPDPsnH}5e zW}D~VGB~1Zw2(~Cb;COG-Nfkmiu;bv{X&*n_u6EHQQ(%~dyfypq?&J;I)-Kviy4Wg zEO#QUjq8$>vAl(ojJkNVJ|^eTI)k&GWH%=XRmj!HXKsXmVEsQMQOO_g$c;fF(_HBF0QMYPZF~ny7NDHyp zO_v(d8*Rf);H{23Pv4U)fV6SVEX7r|RBB~UurdF{Sl(12`yd?uHWmsi-k$xval zNwLSVug`eGsjmK3_FE^>hY4T=B-Z2YG>wV2J+|ucU4ebLT$be4H_2!Jpf8la*32iXk%F4#XU8>yAXcN^L%HC>i_3(<3 z0%dMG+CC-gDvtZv{RJas!rCvU`b*ZX?CEN9--Wk&cZ{nOH6YLNWh-##4D;IsIow}pUS_cu&EAhwi;k+#O(7%w$GPUeP{Qh8=s zrM|Fob1DJ{D!??w_4#0kZHjZn&IfiXsW5UbwptOY8C3UR2C3u z29?z|+_^qN6(3(uFmKmLUW_n&T5@YtEJ%+g}=%I4FBl!Msj z?n^N+kNYR`wf} z{F+5S8{#N?ee(5ev))I2rM+qUyg}i=}jKIxX-@VwHak@Fuo7zhkRD8`H7^ zFT1y(_QmUKF>8>|9KW(WLZmZl``)DN07U_p@>gXWiLjw>f6Ugb9=kG zi9Xj^f*8LK2GoQ*!v%gW09HgcE;UaQ>wu=!cCQ!D@W z&)=DM5=p5GwYMfVG@tf6PiseC+sDwOZZno4y_%T=6&m^|I+hM>eB9xzQieBMv`tTv6 Date: Thu, 6 Feb 2025 10:36:45 -0500 Subject: [PATCH 139/669] [OCPBUGS-37638]: Fixing typo in HCP troubleshooting docs --- hosted_control_planes/hcp-troubleshooting.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/hosted_control_planes/hcp-troubleshooting.adoc b/hosted_control_planes/hcp-troubleshooting.adoc index 7b04d8387016..85ebb2f30ce7 100644 --- a/hosted_control_planes/hcp-troubleshooting.adoc +++ b/hosted_control_planes/hcp-troubleshooting.adoc @@ -51,7 +51,7 @@ include::modules/hcp-ts-non-bm.adoc[leveloffset=+2] [id="hcp-ts-bm"] == Troubleshooting hosted clusters on bare metal -The following information applies to troubleshooting {hcp-short} on bare metal. +The following information applies to troubleshooting {hcp} on bare metal. 
include::modules/hcp-ts-bm-nodes-not-added.adoc[leveloffset=+2] From 05120cef18eac853cc7a06728dfed1874bdeee3c Mon Sep 17 00:00:00 2001 From: Michael Ryan Peter Date: Wed, 5 Feb 2025 10:51:29 -0500 Subject: [PATCH 140/669] OSDOCS#12930 [4.18] OLMv1 optional cluster capability --- installing/overview/cluster-capabilities.adoc | 7 +++++++ modules/olmv1-clusteroperator.adoc | 3 ++- 2 files changed, 9 insertions(+), 1 deletion(-) diff --git a/installing/overview/cluster-capabilities.adoc b/installing/overview/cluster-capabilities.adoc index 31a09366f2b7..39894c67b390 100644 --- a/installing/overview/cluster-capabilities.adoc +++ b/installing/overview/cluster-capabilities.adoc @@ -125,6 +125,13 @@ include::modules/olm-overview.adoc[leveloffset=+2] .Additional resources * xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-understanding-olm[Operator Lifecycle Manager concepts and resources] +// Operator Lifecycle Manager v1 capability +include::modules/olmv1-clusteroperator.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources +* xref:../../extensions/index.adoc#olmv1-about[Extensions overview] + include::modules/viewing-cluster-capabilities.adoc[leveloffset=+1] include::modules/enabling-baseline-capability-set.adoc[leveloffset=+1] diff --git a/modules/olmv1-clusteroperator.adoc b/modules/olmv1-clusteroperator.adoc index 9af4737d9b21..a24c554e6ab7 100644 --- a/modules/olmv1-clusteroperator.adoc +++ b/modules/olmv1-clusteroperator.adoc @@ -1,6 +1,7 @@ // Module included in the following assemblies: // // * operators/operator-reference.adoc +// * installing/overview/cluster-capabilities.adoc :_mod-docs-content-type: CONCEPT [id="cluster-operators-ref-olmv1_{context}"] @@ -44,4 +45,4 @@ Catalogd:: A Kubernetes extension that unpacks file-based catalog (FBC) content == Project * link:https://github.com/operator-framework/operator-controller[operator-framework/operator-controller] -* link:https://github.com/operator-framework/catalogd[operator-framework/catalogd] \ No newline at end of file +* link:https://github.com/operator-framework/catalogd[operator-framework/catalogd] From a5c73fac590f9da10fb8949f81126527a1e5fd67 Mon Sep 17 00:00:00 2001 From: Jeana Routh Date: Tue, 7 Jan 2025 17:21:55 -0500 Subject: [PATCH 141/669] OSDOCS-12745: Rotating OIDC bound service account signer keys --- _attributes/common-attributes.adoc | 7 +- _unused_topics/manually-creating-iam-gcp.adoc | 2 +- modules/cco-ccoctl-configuring.adoc | 153 +------ modules/refreshing-service-ids-ibm-cloud.adoc | 16 +- modules/rotating-bound-service-keys.adoc | 409 ++++++++++++++++++ ...nging-cloud-credentials-configuration.adoc | 51 ++- ...ctl-provider-permissions-requirements.adoc | 182 ++++++++ 7 files changed, 643 insertions(+), 177 deletions(-) create mode 100644 modules/rotating-bound-service-keys.adoc create mode 100644 snippets/ccoctl-provider-permissions-requirements.adoc diff --git a/_attributes/common-attributes.adoc b/_attributes/common-attributes.adoc index fe4d672216eb..c6a90c372fd3 100644 --- a/_attributes/common-attributes.adoc +++ b/_attributes/common-attributes.adoc @@ -320,8 +320,6 @@ endif::openshift-origin[] :vmw-first: VMware vSphere :vmw-full: VMware vSphere :vmw-short: vSphere - - //Token-based auth products //AWS Security Token Service :sts-first: Security Token Service (STS) @@ -333,8 +331,6 @@ endif::openshift-origin[] //Google Cloud Platform Workload Identity :gcp-wid-first: Google Cloud Platform Workload Identity :gcp-wid-short: GCP Workload Identity - - // Cluster API 
terminology // Cluster CAPI Operator :cluster-capi-operator: Cluster CAPI Operator @@ -365,9 +361,8 @@ endif::openshift-origin[] // Cluster API Provider VMware vSphere :cap-vsphere-first: Cluster API Provider VMware vSphere :cap-vsphere-short: Cluster API Provider vSphere - // Hosted control planes related attributes :hcp-capital: Hosted control planes :hcp: hosted control planes :mce: multicluster engine for Kubernetes Operator -:mce-short: multicluster engine Operator +:mce-short: multicluster engine Operator \ No newline at end of file diff --git a/_unused_topics/manually-creating-iam-gcp.adoc b/_unused_topics/manually-creating-iam-gcp.adoc index 2d315023f0fc..b243181fe2c4 100644 --- a/_unused_topics/manually-creating-iam-gcp.adoc +++ b/_unused_topics/manually-creating-iam-gcp.adoc @@ -14,7 +14,7 @@ include::modules/alternatives-to-storing-admin-secrets-in-kube-system.adoc[level .Additional resources * xref:../../authentication/managing_cloud_provider_credentials/cco-mode-gcp-workload-identity.adoc#cco-mode-gcp-workload-identity[Using manual mode with GCP Workload Identity] -* xref:../../post_installation_configuration/cluster-tasks.adoc#post-install-rotate-remove-cloud-creds[Rotating or removing cloud provider credentials] +* xref:../../post_installation_configuration/cluster-tasks.adoc#post-install-remove-cloud-creds[Removing cloud provider credentials] For a detailed description of all available CCO credential modes and their supported platforms, see xref:../../authentication/managing_cloud_provider_credentials/about-cloud-credential-operator.adoc#about-cloud-credential-operator[About the Cloud Credential Operator]. diff --git a/modules/cco-ccoctl-configuring.adoc b/modules/cco-ccoctl-configuring.adoc index db8ea4cb73f8..5b9fdddf3ba0 100644 --- a/modules/cco-ccoctl-configuring.adoc +++ b/modules/cco-ccoctl-configuring.adoc @@ -1,7 +1,7 @@ // Module included in the following assemblies: // //Postinstall and update content -// * post_installation_configuration/cluster-tasks.adoc +// * post_installation_configuration/changing-cloud-credentials-configuration.adoc // * updating/preparing_for_updates/preparing-manual-creds-update.adoc // //Platforms that must use `ccoctl` and update content @@ -169,155 +169,8 @@ ifdef::update[] * You have extracted the `CredentialsRequest` custom resources (CRs) from the {product-title} release image and ensured that a namespace that matches the text in the `spec.secretRef.namespace` field exists in the cluster. endif::update[] -//AWS permissions needed when running ccoctl during install (I think we can omit from upgrade, since they already have an appropriate AWS account if they are upgrading). 
-ifdef::aws-sts[] -* You have created an AWS account for the `ccoctl` utility to use with the following permissions: -+ -.Required AWS permissions -[%collapsible] -==== -**Required `iam` permissions** - -* `iam:CreateOpenIDConnectProvider` -* `iam:CreateRole` -* `iam:DeleteOpenIDConnectProvider` -* `iam:DeleteRole` -* `iam:DeleteRolePolicy` -* `iam:GetOpenIDConnectProvider` -* `iam:GetRole` -* `iam:GetUser` -* `iam:ListOpenIDConnectProviders` -* `iam:ListRolePolicies` -* `iam:ListRoles` -* `iam:PutRolePolicy` -* `iam:TagOpenIDConnectProvider` -* `iam:TagRole` - -**Required `s3` permissions** - -* `s3:CreateBucket` -* `s3:DeleteBucket` -* `s3:DeleteObject` -* `s3:GetBucketAcl` -* `s3:GetBucketTagging` -* `s3:GetObject` -* `s3:GetObjectAcl` -* `s3:GetObjectTagging` -* `s3:ListBucket` -* `s3:PutBucketAcl` -* `s3:PutBucketPolicy` -* `s3:PutBucketPublicAccessBlock` -* `s3:PutBucketTagging` -* `s3:PutObject` -* `s3:PutObjectAcl` -* `s3:PutObjectTagging` - -**Required `cloudfront` permissions** - -* `cloudfront:ListCloudFrontOriginAccessIdentities` -* `cloudfront:ListDistributions` -* `cloudfront:ListTagsForResource` -==== -+ -If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the `ccoctl` utility requires the following additional permissions: -+ -.Additional permissions for a private S3 bucket with CloudFront -[%collapsible] -==== -* `cloudfront:CreateCloudFrontOriginAccessIdentity` -* `cloudfront:CreateDistribution` -* `cloudfront:DeleteCloudFrontOriginAccessIdentity` -* `cloudfront:DeleteDistribution` -* `cloudfront:GetCloudFrontOriginAccessIdentity` -* `cloudfront:GetCloudFrontOriginAccessIdentityConfig` -* `cloudfront:GetDistribution` -* `cloudfront:TagResource` -* `cloudfront:UpdateDistribution` - -[NOTE] -===== -These additional permissions support the use of the `--create-private-s3-bucket` option when processing credentials requests with the `ccoctl aws create-all` command. -===== -==== -endif::aws-sts[] - -//Azure permissions needed when running ccoctl during install. 
-ifdef::azure-workload-id[] -* You have created a global Microsoft Azure account for the `ccoctl` utility to use with the following permissions: -+ -.Required Azure permissions -[%collapsible] -==== -* Microsoft.Resources/subscriptions/resourceGroups/read -* Microsoft.Resources/subscriptions/resourceGroups/write -* Microsoft.Resources/subscriptions/resourceGroups/delete -* Microsoft.Authorization/roleAssignments/read -* Microsoft.Authorization/roleAssignments/delete -* Microsoft.Authorization/roleAssignments/write -* Microsoft.Authorization/roleDefinitions/read -* Microsoft.Authorization/roleDefinitions/write -* Microsoft.Authorization/roleDefinitions/delete -* Microsoft.Storage/storageAccounts/listkeys/action -* Microsoft.Storage/storageAccounts/delete -* Microsoft.Storage/storageAccounts/read -* Microsoft.Storage/storageAccounts/write -* Microsoft.Storage/storageAccounts/blobServices/containers/write -* Microsoft.Storage/storageAccounts/blobServices/containers/delete -* Microsoft.Storage/storageAccounts/blobServices/containers/read -* Microsoft.ManagedIdentity/userAssignedIdentities/delete -* Microsoft.ManagedIdentity/userAssignedIdentities/read -* Microsoft.ManagedIdentity/userAssignedIdentities/write -* Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read -* Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write -* Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete -* Microsoft.Storage/register/action -* Microsoft.ManagedIdentity/register/action -==== -endif::azure-workload-id[] - -//GCP permissions needed when running ccoctl during install. -ifdef::google-cloud-platform[] -* You have added one of the following authentication options to the GCP account that the installation program uses: - -** The **IAM Workload Identity Pool Admin** role. 
- -** The following granular permissions: -+ -.Required GCP permissions -[%collapsible] -==== -* compute.projects.get -* iam.googleapis.com/workloadIdentityPoolProviders.create -* iam.googleapis.com/workloadIdentityPoolProviders.get -* iam.googleapis.com/workloadIdentityPools.create -* iam.googleapis.com/workloadIdentityPools.delete -* iam.googleapis.com/workloadIdentityPools.get -* iam.googleapis.com/workloadIdentityPools.undelete -* iam.roles.create -* iam.roles.delete -* iam.roles.list -* iam.roles.undelete -* iam.roles.update -* iam.serviceAccounts.create -* iam.serviceAccounts.delete -* iam.serviceAccounts.getIamPolicy -* iam.serviceAccounts.list -* iam.serviceAccounts.setIamPolicy -* iam.workloadIdentityPoolProviders.get -* iam.workloadIdentityPools.delete -* resourcemanager.projects.get -* resourcemanager.projects.getIamPolicy -* resourcemanager.projects.setIamPolicy -* storage.buckets.create -* storage.buckets.delete -* storage.buckets.get -* storage.buckets.getIamPolicy -* storage.buckets.setIamPolicy -* storage.objects.create -* storage.objects.delete -* storage.objects.list -==== -endif::google-cloud-platform[] +//Permissions requirements (per platform, for install and key rotation) +include::snippets/ccoctl-provider-permissions-requirements.adoc[] .Procedure diff --git a/modules/refreshing-service-ids-ibm-cloud.adoc b/modules/refreshing-service-ids-ibm-cloud.adoc index 9a4dd815bd28..cb1a1c2c2d4b 100644 --- a/modules/refreshing-service-ids-ibm-cloud.adoc +++ b/modules/refreshing-service-ids-ibm-cloud.adoc @@ -1,27 +1,27 @@ // Module included in the following assemblies: // -// * post_installation_configuration/cluster-tasks.adoc +// * post_installation_configuration/changing-cloud-credentials-configuration.adoc :_mod-docs-content-type: PROCEDURE [id="refreshing-service-ids-ibm-cloud_{context}"] -= Rotating API keys += Rotating {ibm-cloud-title} credentials You can rotate API keys for your existing service IDs and update the corresponding secrets. .Prerequisites -* You have configured the `ccoctl` binary. +* You have configured the `ccoctl` utility. * You have existing service IDs in a live {product-title} cluster installed. .Procedure -* Use the `ccoctl` utility to rotate your API keys for the service IDs and update the secrets: +* Use the `ccoctl` utility to rotate your API keys for the service IDs and update the secrets by running the following command: + [source,terminal] ---- -$ ccoctl refresh-keys \ <1> - --kubeconfig \ <2> - --credentials-requests-dir \ <3> +$ ccoctl refresh-keys \// <1> + --kubeconfig \// <2> + --credentials-requests-dir \// <3> --name <4> ---- <1> The name of the provider. For example: `ibmcloud` or `powervs`. @@ -34,4 +34,4 @@ $ ccoctl refresh-keys \ <1> ==== If your cluster uses Technology Preview features that are enabled by the `TechPreviewNoUpgrade` feature set, you must include the `--enable-tech-preview` parameter. 
==== --- +-- \ No newline at end of file diff --git a/modules/rotating-bound-service-keys.adoc b/modules/rotating-bound-service-keys.adoc new file mode 100644 index 000000000000..11ddf5e57559 --- /dev/null +++ b/modules/rotating-bound-service-keys.adoc @@ -0,0 +1,409 @@ +// Module included in the following assemblies: +// +// * post_installation_configuration/changing-cloud-credentials-configuration.adoc + +ifeval::["{context}" == "key-rotation-aws"] +:rotate-aws: +endif::[] +ifeval::["{context}" == "key-rotation-gcp"] +:rotate-gcp: +endif::[] +ifeval::["{context}" == "key-rotation-azure"] +:rotate-azure: +endif::[] + +:_mod-docs-content-type: PROCEDURE +[id="rotating-bound-service-keys_{context}"] +ifdef::rotate-aws[= Rotating {aws-short} OIDC bound service account signer keys] +ifdef::rotate-gcp[= Rotating {gcp-short} OIDC bound service account signer keys] +ifdef::rotate-azure[= Rotating {azure-short} OIDC bound service account signer keys] + +If the Cloud Credential Operator (CCO) for your {product-title} cluster +ifdef::rotate-aws[on {aws-first}] +ifdef::rotate-gcp[on {gcp-first}] +ifdef::rotate-azure[on {azure-first}] +is configured to operate in manual mode with +ifdef::rotate-aws[{sts-short},] +ifdef::rotate-gcp[{gcp-wid-short},] +ifdef::rotate-azure[{entra-short},] +you can rotate the bound service account signer key. + +To rotate the key, you delete the existing key on your cluster, which causes the Kubernetes API server to create a new key. +To reduce authentication failures during this process, you must immediately add the new public key to the existing issuer file. +After the cluster is using the new key for authentication, you can remove any remaining keys. + +//Modified version of the disclaimer from enabling Azure WID on an existing cluster, since there are similar concerns: +[IMPORTANT] +==== +The process to rotate OIDC bound service account signer keys is disruptive and takes a significant amount of time. +Some steps are time-sensitive. +Before proceeding, observe the following considerations: + +* Read the following steps and ensure that you understand and accept the time requirement. +The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour. + +* To reduce the risk of authentication failures, ensure that you understand and prepare for the time-sensitive steps. + +* During this process, you must refresh all service accounts and restart all pods on the cluster. +These actions are disruptive to workloads. +To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready. +==== + +.Prerequisites + +* You have access to the {oc-first} as a user with the `cluster-admin` role. +//Permissions requirements (per platform, for install and key rotation) +include::snippets/ccoctl-provider-permissions-requirements.adoc[] +* You have configured the `ccoctl` utility. +* Your cluster is in a stable state. +You can confirm that the cluster is stable by running the following command: ++ +[source,terminal] +---- +$ oc adm wait-for-stable-cluster --minimum-stable-period=5s +---- + +.Procedure + +. 
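Optional: Record the key IDs that the cluster currently serves so that you can compare them with the new key after the rotation.
+For example, you can list the public signing keys that the Kubernetes API server publishes by running the following command:
++
+[source,terminal]
+----
+$ oc get --raw /openid/v1/jwks | jq '.keys[].kid'
+----
+
+.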
Configure the following environment variables:
++
+[source,text]
+----
+ifdef::rotate-aws[]
+INFRA_ID=$(oc get infrastructures cluster -o jsonpath='{.status.infrastructureName}')
+CLUSTER_NAME=${INFRA_ID%-*} <1>
+endif::rotate-aws[]
+ifdef::rotate-gcp[]
+CURRENT_ISSUER=$(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}')
+GCP_BUCKET=$(echo ${CURRENT_ISSUER} | cut -d "/" -f4)
+endif::rotate-gcp[]
+ifdef::rotate-azure[]
+CURRENT_ISSUER=$(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}')
+AZURE_STORAGE_ACCOUNT=$(echo ${CURRENT_ISSUER} | cut -d "/" -f3 | cut -d "." -f1)
+AZURE_STORAGE_CONTAINER=$(echo ${CURRENT_ISSUER} | cut -d "/" -f4)
+endif::rotate-azure[]
+----
+ifdef::rotate-aws[]
+<1> This value should match the name of the cluster that was specified in the `metadata.name` field of the `install-config.yaml` file during installation.
+endif::rotate-aws[]
++
+[NOTE]
+====
+Your cluster might differ from this example, and the resource names might not be derived identically from the cluster name.
+Ensure that you specify the correct corresponding resource names for your cluster.
+====
+ifdef::rotate-aws[]
+** For {aws-short} clusters that store the OIDC configuration in a public S3 bucket, configure the following environment variable:
++
+[source,text]
+----
+AWS_BUCKET=$(oc get authentication cluster -o jsonpath={'.spec.serviceAccountIssuer'} | awk -F'://' '{print$2}' |awk -F'.' '{print$1}')
+----
+
+** For {aws-short} clusters that store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, complete the following steps:
+
+... Extract the public CloudFront distribution URL by running the following command:
++
+[source,terminal]
+----
+$ basename $(oc get authentication cluster -o jsonpath={'.spec.serviceAccountIssuer'} )
+----
++
+.Example output
+[source,text]
+----
+<random_string>.cloudfront.net
+----
++
+where `<random_string>` is an alphanumeric string.
+
+... Determine the private S3 bucket name by running the following command:
++
+[source,terminal]
+----
+$ aws cloudfront list-distributions --query "DistributionList.Items[].{DomainName: DomainName, OriginDomainName: Origins.Items[0].DomainName}[?contains(DomainName, '.cloudfront.net')]"
+----
++
+.Example output
+[source,text]
+----
+[
+    {
+        "DomainName": "<random_string>.cloudfront.net",
+        "OriginDomainName": "<private_bucket_name>.s3.us-east-2.amazonaws.com"
+    }
+]
+----
++
+where `<private_bucket_name>` is the private S3 bucket name for your cluster.
+
+... Configure the following environment variable:
++
+[source,text]
+----
+AWS_BUCKET=$<private_bucket_name>
+----
++
+where `<private_bucket_name>` is the private S3 bucket name for your cluster.
+endif::rotate-aws[]
+
+. Create a temporary directory to use and assign it an environment variable by running the following command:
++
+[source,terminal]
+----
+$ TEMPDIR=$(mktemp -d)
+----
+
+. To cause the Kubernetes API server to create a new bound service account signing key, you delete the next bound service account signing key.
++
+[IMPORTANT]
+====
+After you complete this step, the Kubernetes API server starts to roll out a new key.
+To reduce the risk of authentication failures, complete the remaining steps as quickly as possible.
+The remaining steps might be disruptive to workloads.
+====
++
+When you are ready, delete the next bound service account signing key by running the following command:
++
+[source,terminal]
+----
+$ oc delete secrets/next-bound-service-account-signing-key \
+  -n openshift-kube-apiserver-operator
+----
+
+.
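Optional: Confirm that the Kubernetes API server created a replacement key.
+For example, you can check that the `next-bound-service-account-signing-key` secret exists again by running the following command:
++
+[source,terminal]
+----
+$ oc get secret/next-bound-service-account-signing-key \
+  -n openshift-kube-apiserver-operator
+----
++
+In the output, a low value in the `AGE` column indicates that the secret was recently recreated.
+
+.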
Download the public key from the service account signing key secret that the Kubernetes API server created by running the following command:
++
+[source,terminal]
+----
+$ oc get secret/next-bound-service-account-signing-key \
+  -n openshift-kube-apiserver-operator \
+  -ojsonpath='{ .data.service-account\.pub }' | base64 \
+  -d > ${TEMPDIR}/serviceaccount-signer.public
+----
+
+. Use the public key to create a `keys.json` file by running the following command:
+ifdef::rotate-aws[]
++
+[source,terminal]
+----
+$ ccoctl aws create-identity-provider \
+  --dry-run \// <1>
+  --output-dir ${TEMPDIR} \
+  --name fake \// <2>
+  --region us-east-1 <3>
+----
+<1> The `--dry-run` option outputs files, including the new `keys.json` file, to the disk without making API calls.
+<2> Because the `--dry-run` option does not make any API calls, some parameters do not require real values.
+<3> Specify any valid {aws-short} region, such as `us-east-1`.
+This value does not need to match the region the cluster is in.
+endif::rotate-aws[]
+ifdef::rotate-gcp[]
++
+[source,terminal]
+----
+$ ccoctl gcp create-workload-identity-provider \
+  --dry-run \// <1>
+  --output-dir=${TEMPDIR} \
+  --name fake \// <2>
+  --project fake \
+  --workload-identity-pool fake
+----
+<1> The `--dry-run` option outputs files, including the new `keys.json` file, to the disk without making API calls.
+<2> Because the `--dry-run` option does not make any API calls, some parameters do not require real values.
+endif::rotate-gcp[]
+ifdef::rotate-azure[]
++
+[source,terminal]
+----
+$ ccoctl aws create-identity-provider \// <1>
+  --dry-run \// <2>
+  --output-dir ${TEMPDIR} \
+  --name fake \// <3>
+  --region us-east-1 <4>
+----
+<1> The `ccoctl azure` command does not include a `--dry-run` option.
+To use the `--dry-run` option, you must specify `aws` for an {azure-short} cluster.
+<2> The `--dry-run` option outputs files, including the new `keys.json` file, to the disk without making API calls.
+<3> Because the `--dry-run` option does not make any API calls, some parameters do not require real values.
+<4> Specify any valid {aws-short} region, such as `us-east-1`.
+This value does not need to match the region the cluster is in.
+endif::rotate-azure[]
+
+. Rename the `keys.json` file by running the following command:
++
+[source,terminal]
+----
+$ cp ${TEMPDIR}/<number>-keys.json ${TEMPDIR}/jwks.new.json
+----
++
+where `<number>` is a two-digit numerical value that varies depending on your environment.
+
+. Download the existing `keys.json` file from the cloud provider by running the following command:
+ifdef::rotate-aws[]
++
+[source,terminal]
+----
+$ aws s3api get-object \
+  --bucket ${AWS_BUCKET} \
+  --key keys.json ${TEMPDIR}/jwks.current.json
+----
+endif::rotate-aws[]
+ifdef::rotate-gcp[]
++
+[source,terminal]
+----
+$ gcloud storage cp gs://${GCP_BUCKET}/keys.json ${TEMPDIR}/jwks.current.json
+----
+endif::rotate-gcp[]
+ifdef::rotate-azure[]
++
+[source,terminal]
+----
+$ az storage blob download \
+  --container-name ${AZURE_STORAGE_CONTAINER} \
+  --account-name ${AZURE_STORAGE_ACCOUNT} \
+  --name 'openid/v1/jwks' \
+  -f ${TEMPDIR}/jwks.current.json
+----
+endif::rotate-azure[]
+
+. Combine the two `keys.json` files by running the following command:
++
+[source,terminal]
+----
+$ jq -s '{ keys: map(.keys[])}' ${TEMPDIR}/jwks.current.json ${TEMPDIR}/jwks.new.json > ${TEMPDIR}/jwks.combined.json
+----
+
+.
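Optional: Inspect the combined file to confirm that it contains the public keys from both source files.
+For example, you can list the key IDs in the combined file by running the following command:
++
+[source,terminal]
+----
+$ jq '.keys[].kid' ${TEMPDIR}/jwks.combined.json
+----
++
+The output contains one `kid` value for each public key in the combined file, including the key from `${TEMPDIR}/jwks.new.json`.
+
+.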
To enable authentication for the old and new keys during the rotation, upload the combined `keys.json` file to the cloud provider by running the following command: +ifdef::rotate-aws[] ++ +[source,terminal] +---- +$ aws s3api put-object \ + --bucket ${AWS_BUCKET} \ + --tagging "openshift.io/cloud-credential-operator/${CLUSTER_NAME}=owned" \ + --key keys.json \ + --body ${TEMPDIR}/jwks.combined.json +---- +endif::rotate-aws[] +ifdef::rotate-gcp[] ++ +[source,terminal] +---- +$ gcloud storage cp ${TEMPDIR}/jwks.combined.json gs://${GCP_BUCKET}/keys.json +---- +endif::rotate-gcp[] +ifdef::rotate-azure[] ++ +[source,terminal] +---- +$ az storage blob upload \ + --overwrite \ + --account-name ${AZURE_STORAGE_ACCOUNT} \ + --container-name ${AZURE_STORAGE_CONTAINER} \ + --name 'openid/v1/jwks' \ + -f ${TEMPDIR}/jwks.combined.json +---- +endif::rotate-azure[] + +. Wait for the Kubernetes API server to update and use the new key. +You can monitor the update progress by running the following command: ++ +[source,terminal] +---- +$ oc adm wait-for-stable-cluster +---- ++ +This process might take 15 minutes or longer. +The following output indicates that the process is complete: ++ +[source,text] +---- +All clusteroperators are stable +---- + +. To ensure that all pods on the cluster use the new key, you must restart them. ++ +[IMPORTANT] +==== +This step maintains uptime for services that are configured for high availability across multiple nodes, but might cause downtime for any services that are not. +==== ++ +Restart all of the pods in the cluster by running the following command: ++ +[source,terminal] +---- +$ oc adm reboot-machine-config-pool mcp/worker mcp/master +---- + +. Monitor the restart and update process by running the following command: ++ +[source,terminal] +---- +$ oc adm wait-for-node-reboot nodes --all +---- ++ +This process might take 15 minutes or longer. +The following output indicates that the process is complete: ++ +[source,text] +---- +All nodes rebooted +---- + +. Monitor the update progress by running the following command: ++ +[source,terminal] +---- +$ oc adm wait-for-stable-cluster +---- ++ +This process might take 15 minutes or longer. +The following output indicates that the process is complete: ++ +[source,text] +---- +All clusteroperators are stable +---- + +. 
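Optional: Confirm that the Kubernetes API server now publishes the new public key.
+For example, you can compare the key IDs that the API server serves with the key ID in the new key file by running the following commands:
++
+[source,terminal]
+----
+$ oc get --raw /openid/v1/jwks | jq '.keys[].kid'
+$ jq '.keys[].kid' ${TEMPDIR}/jwks.new.json
+----
++
+The `kid` value from `${TEMPDIR}/jwks.new.json` appears in the output from the API server.
+
+.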
Replace the combined `keys.json` file with the updated `keys.json` file on the cloud provider by running the following command: +ifdef::rotate-aws[] ++ +[source,terminal] +---- +$ aws s3api put-object \ + --bucket ${AWS_BUCKET} \ + --tagging "openshift.io/cloud-credential-operator/${CLUSTER_NAME}=owned" \ + --key keys.json \ + --body ${TEMPDIR}/jwks.new.json +---- +endif::rotate-aws[] +ifdef::rotate-gcp[] ++ +[source,terminal] +---- +$ gcloud storage cp ${TEMPDIR}/jwks.new.json gs://${GCP_BUCKET}/keys.json +---- +endif::rotate-gcp[] +ifdef::rotate-azure[] ++ +[source,terminal] +---- +$ az storage blob upload \ + --overwrite \ + --account-name ${AZURE_STORAGE_ACCOUNT} \ + --container-name ${AZURE_STORAGE_CONTAINER} \ + --name 'openid/v1/jwks' \ + -f ${TEMPDIR}/jwks.new.json +---- +endif::rotate-azure[] + +ifeval::["{context}" == "key-rotation-aws"] +:!rotate-aws: +endif::[] +ifeval::["{context}" == "key-rotation-gcp"] +:!rotate-gcp: +endif::[] +ifeval::["{context}" == "key-rotation-azure"] +:!rotate-azure: +endif::[] \ No newline at end of file diff --git a/post_installation_configuration/changing-cloud-credentials-configuration.adoc b/post_installation_configuration/changing-cloud-credentials-configuration.adoc index 7e1b0f47e08a..e63e622f8c20 100644 --- a/post_installation_configuration/changing-cloud-credentials-configuration.adoc +++ b/post_installation_configuration/changing-cloud-credentials-configuration.adoc @@ -10,21 +10,41 @@ For supported configurations, you can change how {product-title} authenticates w To determine which cloud credentials strategy your cluster uses, see xref:../authentication/managing_cloud_provider_credentials/about-cloud-credential-operator.adoc#cco-determine-mode_about-cloud-credential-operator[Determining the Cloud Credential Operator mode]. -[id="post-install-rotate-remove-cloud-creds_{context}"] -== Rotating or removing cloud provider credentials - -After installing {product-title}, some organizations require the rotation or removal of the cloud provider credentials that were used during the initial installation. - -To allow the cluster to use the new credentials, you must update the secrets that the xref:../operators/operator-reference.adoc#cloud-credential-operator_cluster-operators-ref[Cloud Credential Operator (CCO)] uses to manage cloud provider credentials. +[id="ccoctl-rotate-cloud-creds_{context}"] +== Rotating cloud provider service keys with the Cloud Credential Operator utility + +Some organizations require the rotation of the service keys that authenticate the cluster. 
+You can use the Cloud Credential Operator (CCO) utility (`ccoctl`) to update keys for clusters installed on the following cloud providers:
+
+* xref:../post_installation_configuration/changing-cloud-credentials-configuration.adoc#rotating-bound-service-keys_key-rotation-aws[{aws-first} with {sts-first}]
+* xref:../post_installation_configuration/changing-cloud-credentials-configuration.adoc#rotating-bound-service-keys_key-rotation-gcp[{gcp-first} with {gcp-wid-short}]
+* xref:../post_installation_configuration/changing-cloud-credentials-configuration.adoc#rotating-bound-service-keys_key-rotation-azure[{azure-first} with {entra-short}]
+* xref:../post_installation_configuration/changing-cloud-credentials-configuration.adoc#refreshing-service-ids-ibm-cloud_changing-cloud-credentials-configuration[{ibm-cloud-title}]
+
+:context: key-rotation-aws
+//Rotating OIDC bound service account signer keys
+include::modules/rotating-bound-service-keys.adoc[leveloffset=+2]
+:!context: key-rotation-aws
+
+:context: key-rotation-gcp
+//Rotating OIDC bound service account signer keys
+include::modules/rotating-bound-service-keys.adoc[leveloffset=+2]
+:!context: key-rotation-gcp
+
+:context: key-rotation-azure
+//Rotating OIDC bound service account signer keys
+include::modules/rotating-bound-service-keys.adoc[leveloffset=+2]
+:!context: key-rotation-azure
+:context: changing-cloud-credentials-configuration
 
-[id="ccoctl-rotate-remove-cloud-creds_{context}"]
-=== Rotating cloud provider credentials with the Cloud Credential Operator utility
+//Rotating {ibm-cloud-title} credentials
+include::modules/refreshing-service-ids-ibm-cloud.adoc[leveloffset=+2]
 
-// Right now only IBM can do this, but it makes sense to set this up so that other clouds can be added.
-The Cloud Credential Operator (CCO) utility `ccoctl` supports updating secrets for clusters installed on {ibm-cloud-name}.
+[id="post-install-rotate-cloud-creds_{context}"]
+== Rotating cloud provider credentials
 
-//Rotating {ibm-cloud-title} credentials with ccoctl
-include::modules/refreshing-service-ids-ibm-cloud.adoc[leveloffset=+3]
+Some organizations require the rotation of the cloud provider credentials.
+To allow the cluster to use the new credentials, you must update the secrets that the xref:../operators/operator-reference.adoc#cloud-credential-operator_cluster-operators-ref[Cloud Credential Operator (CCO)] uses to manage cloud provider credentials.
 
 //Rotating cloud provider credentials manually
 include::modules/manually-rotating-cloud-creds.adoc[leveloffset=+2]
@@ -35,6 +55,13 @@ include::modules/manually-rotating-cloud-creds.adoc[leveloffset=+2]
 * xref:../authentication/managing_cloud_provider_credentials/cco-mode-passthrough.html#cco-mode-passthrough[The Cloud Credential Operator in passthrough mode]
 * xref:../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#persistent-storage-csi-vsphere[vSphere CSI Driver Operator]
 
+[id="post-install-remove-cloud-creds_{context}"]
+== Removing cloud provider credentials
+//TODO: split out rotate, maintain, and remove and bump everything up one level
+
+After installing {product-title}, some organizations require the removal of the cloud provider credentials that were used during the initial installation.
+To allow the cluster to use the new credentials, you must update the secrets that the xref:../operators/operator-reference.adoc#cloud-credential-operator_cluster-operators-ref[Cloud Credential Operator (CCO)] uses to manage cloud provider credentials.
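+
+For example, on {aws-first} the CCO manages the root credential in the `aws-creds` secret in the `kube-system` namespace. You can review that secret before you remove it by running the following command:
+
+[source,terminal]
+----
+$ oc get secret aws-creds -n kube-system
+----
+
+On other platforms, the root secret has a platform-specific name, such as `gcp-credentials` or `azure-credentials`.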
+ //Removing cloud provider credentials manually include::modules/manually-removing-cloud-creds.adoc[leveloffset=+2] diff --git a/snippets/ccoctl-provider-permissions-requirements.adoc b/snippets/ccoctl-provider-permissions-requirements.adoc new file mode 100644 index 000000000000..9eec3238e4e8 --- /dev/null +++ b/snippets/ccoctl-provider-permissions-requirements.adoc @@ -0,0 +1,182 @@ +// Text snippet included in the following modules: +// +// * modules/cco-ccoctl-configuring.adoc (ifevals for aws-sts, azure-workload-id, google-cloud-platform) +// * modules/rotating-bound-service-keys.adoc (ifevals for rotate-aws, rotate-azure, rotate-gcp) +// + +// There is almost certainly a better reuse strategy for the rotation perms but the content needs to go in and this is functional. + +//AWS permissions needed when running ccoctl during installation and key rotation. +ifdef::aws-sts[] +* You have created an {aws-short} account for the `ccoctl` utility to use with the following permissions: ++ +-- +**Required `iam` permissions** + +* `iam:CreateOpenIDConnectProvider` +* `iam:CreateRole` +* `iam:DeleteOpenIDConnectProvider` +* `iam:DeleteRole` +* `iam:DeleteRolePolicy` +* `iam:GetOpenIDConnectProvider` +* `iam:GetRole` +* `iam:GetUser` +* `iam:ListOpenIDConnectProviders` +* `iam:ListRolePolicies` +* `iam:ListRoles` +* `iam:PutRolePolicy` +* `iam:TagOpenIDConnectProvider` +* `iam:TagRole` + +**Required `s3` permissions** + +* `s3:CreateBucket` +* `s3:DeleteBucket` +* `s3:DeleteObject` +* `s3:GetBucketAcl` +* `s3:GetBucketTagging` +* `s3:GetObject` +* `s3:GetObjectAcl` +* `s3:GetObjectTagging` +* `s3:ListBucket` +* `s3:PutBucketAcl` +* `s3:PutBucketPolicy` +* `s3:PutBucketPublicAccessBlock` +* `s3:PutBucketTagging` +* `s3:PutObject` +* `s3:PutObjectAcl` +* `s3:PutObjectTagging` + +**Required `cloudfront` permissions** + +* `cloudfront:ListCloudFrontOriginAccessIdentities` +* `cloudfront:ListDistributions` +* `cloudfront:ListTagsForResource` +-- + +* If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the {aws-short} account that runs the `ccoctl` utility requires the following additional permissions: ++ +-- +* `cloudfront:CreateCloudFrontOriginAccessIdentity` +* `cloudfront:CreateDistribution` +* `cloudfront:DeleteCloudFrontOriginAccessIdentity` +* `cloudfront:DeleteDistribution` +* `cloudfront:GetCloudFrontOriginAccessIdentity` +* `cloudfront:GetCloudFrontOriginAccessIdentityConfig` +* `cloudfront:GetDistribution` +* `cloudfront:TagResource` +* `cloudfront:UpdateDistribution` +-- ++ +[NOTE] +==== +These additional permissions support the use of the `--create-private-s3-bucket` option when processing credentials requests with the `ccoctl aws create-all` command. +==== +endif::aws-sts[] +ifdef::rotate-aws[] +* You have created an {aws-short} account for the `ccoctl` utility to use with the following permissions: ++ +-- +* `s3:GetObject` +* `s3:PutObject` +* `s3:PutObjectTagging` +* For clusters that store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the {aws-short} account that runs the `ccoctl` utility requires the `cloudfront:ListDistributions` permission. +-- +endif::rotate-aws[] + +//Azure permissions needed when running ccoctl during installation and key rotation. 
+ifdef::azure-workload-id[] +* You have created a global {azure-short} account for the `ccoctl` utility to use with the following permissions: ++ +-- +* `Microsoft.Resources/subscriptions/resourceGroups/read` +* `Microsoft.Resources/subscriptions/resourceGroups/write` +* `Microsoft.Resources/subscriptions/resourceGroups/delete` +* `Microsoft.Authorization/roleAssignments/read` +* `Microsoft.Authorization/roleAssignments/delete` +* `Microsoft.Authorization/roleAssignments/write` +* `Microsoft.Authorization/roleDefinitions/read` +* `Microsoft.Authorization/roleDefinitions/write` +* `Microsoft.Authorization/roleDefinitions/delete` +* `Microsoft.Storage/storageAccounts/listkeys/action` +* `Microsoft.Storage/storageAccounts/delete` +* `Microsoft.Storage/storageAccounts/read` +* `Microsoft.Storage/storageAccounts/write` +* `Microsoft.Storage/storageAccounts/blobServices/containers/delete` +* `Microsoft.Storage/storageAccounts/blobServices/containers/read` +* `Microsoft.Storage/storageAccounts/blobServices/containers/write` +* `Microsoft.ManagedIdentity/userAssignedIdentities/delete` +* `Microsoft.ManagedIdentity/userAssignedIdentities/read` +* `Microsoft.ManagedIdentity/userAssignedIdentities/write` +* `Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read` +* `Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write` +* `Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete` +* `Microsoft.Storage/register/action` +* `Microsoft.ManagedIdentity/register/action` +-- +endif::azure-workload-id[] +ifdef::rotate-azure[] +* You have created a global {azure-short} account for the `ccoctl` utility to use with the following permissions: ++ +-- +* `Microsoft.Storage/storageAccounts/listkeys/action` +* `Microsoft.Storage/storageAccounts/read` +* `Microsoft.Storage/storageAccounts/write` +* `Microsoft.Storage/storageAccounts/blobServices/containers/read` +* `Microsoft.Storage/storageAccounts/blobServices/containers/write` +-- +endif::rotate-azure[] + +//GCP permissions needed when running ccoctl during installation and key rotation. 
+ifdef::google-cloud-platform[] +* You have added one of the following authentication options to the {gcp-short} account that the `ccoctl` utility uses: + +** The **IAM Workload Identity Pool Admin** role + +** The following granular permissions: ++ +-- +* `compute.projects.get` +* `iam.googleapis.com/workloadIdentityPoolProviders.create` +* `iam.googleapis.com/workloadIdentityPoolProviders.get` +* `iam.googleapis.com/workloadIdentityPools.create` +* `iam.googleapis.com/workloadIdentityPools.delete` +* `iam.googleapis.com/workloadIdentityPools.get` +* `iam.googleapis.com/workloadIdentityPools.undelete` +* `iam.roles.create` +* `iam.roles.delete` +* `iam.roles.list` +* `iam.roles.undelete` +* `iam.roles.update` +* `iam.serviceAccounts.create` +* `iam.serviceAccounts.delete` +* `iam.serviceAccounts.getIamPolicy` +* `iam.serviceAccounts.list` +* `iam.serviceAccounts.setIamPolicy` +* `iam.workloadIdentityPoolProviders.get` +* `iam.workloadIdentityPools.delete` +* `resourcemanager.projects.get` +* `resourcemanager.projects.getIamPolicy` +* `resourcemanager.projects.setIamPolicy` +* `storage.buckets.create` +* `storage.buckets.delete` +* `storage.buckets.get` +* `storage.buckets.getIamPolicy` +* `storage.buckets.setIamPolicy` +* `storage.objects.create` +* `storage.objects.delete` +* `storage.objects.list` +-- +endif::google-cloud-platform[] +ifdef::rotate-gcp[] +* You have added one of the following authentication options to the {gcp-short} account that the `ccoctl` utility uses: + +** The **IAM Workload Identity Pool Admin** role + +** The following granular permissions: ++ +-- +* `storage.objects.create` +* `storage.objects.delete` +-- +endif::rotate-gcp[] \ No newline at end of file From ab6626c1b09dcb02ab170bf1630f064762b64c92 Mon Sep 17 00:00:00 2001 From: subhtk Date: Fri, 31 Jan 2025 19:41:12 +0530 Subject: [PATCH 142/669] Removed bundles feature from the oc-mirror v2 doc --- ...-mirror-imageset-config-parameters-v2.adoc | 42 ------------------- .../oc-mirror-operator-catalog-filtering.adoc | 21 +--------- modules/oc-mirror-v2-about.adoc | 3 -- 3 files changed, 2 insertions(+), 64 deletions(-) diff --git a/modules/oc-mirror-imageset-config-parameters-v2.adoc b/modules/oc-mirror-imageset-config-parameters-v2.adoc index b581f041027e..4e8b77340452 100644 --- a/modules/oc-mirror-imageset-config-parameters-v2.adoc +++ b/modules/oc-mirror-imageset-config-parameters-v2.adoc @@ -213,26 +213,6 @@ Example: `5.2.3-31` |String Example: `5.2.3-31` -|`mirror.operators.packages.bundles` -|Selected bundles configuration -|Array of objects - -Example: -[source,yaml,subs="attributes+"] ----- -operators: - - catalog: registry.redhat.io/redhat/redhat-operator-index:{product-version} - packages: - - name: 3scale-operator - bundles: - - name: 3scale-operator.v0.10.0-mas ----- - -|`mirror.operators.packages.bundles.name` -|Name of the bundle selected for mirror (as it appears in the catalog). -|String -Example : `3scale-operator.v0.10.0-mas` - |`mirror.operators.targetCatalog` |An alternative name and optional namespace hierarchy to mirror the referenced catalog as |String @@ -438,28 +418,6 @@ Example: `5.2.3-31` |String Example: `5.2.3-31` -|`delete.operators.packages.bundles` -|The selected bundles configuration -|Array of objects - -You cannot choose both channels and bundles for the same operator. 
- -Example: -[source,yaml] ----- -operators: - - catalog: registry.redhat.io/redhat/redhat-operator-index:{product-version} - packages: - - name: 3scale-operator - bundles: - - name: 3scale-operator.v0.10.0-mas ----- - -|`delete.operators.packages.bundles.name` -|Name of the bundle selected to delete (as it is displayed in the catalog) -|String -Example : `3scale-operator.v0.10.0-mas` - |`delete.platform` |The platform configuration of the image set |Object diff --git a/modules/oc-mirror-operator-catalog-filtering.adoc b/modules/oc-mirror-operator-catalog-filtering.adoc index 46d734131042..9b65333b621d 100644 --- a/modules/oc-mirror-operator-catalog-filtering.adoc +++ b/modules/oc-mirror-operator-catalog-filtering.adoc @@ -201,23 +201,6 @@ mirror: a|Scenario 14 -[source,yaml] ----- -mirror: - operators: - - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 - packages: - - name: aws-load-balancer-operator - bundles: - - name: aws-load-balancer-operator.v1.1.0 - - name: 3scale-operator - bundles: - - name: 3scale-operator.v0.10.0-mas ----- -|Only the bundles specified for each package are included in the filtering. - -a|Scenario 15 - [source,yaml] ---- mirror: @@ -232,7 +215,7 @@ mirror: ---- |Do not use this scenario. filtering by channel and by package with a `minVersion` or `maxVersion` is not allowed. -a|Scenario 16 +a|Scenario 15 [source,yaml] ---- @@ -248,7 +231,7 @@ mirror: ---- |Do not use this scenario. You cannot filter using `full:true` and the `minVersion` or `maxVersion`. -a|Scenario 17 +a|Scenario 16 [source,yaml] ---- diff --git a/modules/oc-mirror-v2-about.adoc b/modules/oc-mirror-v2-about.adoc index ff07113a0d3c..46e46dfb5ebf 100644 --- a/modules/oc-mirror-v2-about.adoc +++ b/modules/oc-mirror-v2-about.adoc @@ -27,9 +27,6 @@ oc-mirror plugin v2 has the following features: * Can generate `ImageDigestMirrorSet` (IDMS) and `ImageTagMirrorSet` (ITMS) resources, which cover the full image set, instead of an `ImageContentSourcePolicy` (ICSP) resource, which only covered incremental changes to the image set for each mirroring operation with v1. -* Saves Operator versions that are filtered by bundle name. -// Can anyone elaborate what this means? I am mostly confused with what the word "filter" means here. - * Does not perform automatic pruning. v2 now uses a `Delete` feature, which grants users more control over deleting images. * Introduces support for `registries.conf` files. This change facilitates mirroring to multiple enclaves while using the same cache. 
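+
+For example, you can pin an Operator to a version range by filtering on a channel in the `ImageSetConfiguration` file. The package, channel, and version values in the following sketch are illustrative:
+
+[source,yaml]
+----
+mirror:
+  operators:
+  - catalog: registry.redhat.io/redhat/redhat-operator-index:{product-version}
+    packages:
+    - name: aws-load-balancer-operator
+      channels:
+      - name: stable-v1
+        minVersion: 1.1.0
+        maxVersion: 1.2.0
+----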
From b298f31641cf4e3a19c9e944974f024c6e6367b0 Mon Sep 17 00:00:00 2001 From: Monica McClain Date: Fri, 31 Jan 2025 10:52:20 -0500 Subject: [PATCH 143/669] Correct term disable to disabled --- modules/insights-operator-configuring.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/insights-operator-configuring.adoc b/modules/insights-operator-configuring.adoc index 2af1b67f4240..15c685dfe7db 100644 --- a/modules/insights-operator-configuring.adoc +++ b/modules/insights-operator-configuring.adoc @@ -80,7 +80,7 @@ data: - networking - workload_names sca: - disable: false + disabled: false interval: 2h alerting: disabled: false From 4eb98abf1309facf41d85245b5ced9fece674443 Mon Sep 17 00:00:00 2001 From: Alberto Diaz Date: Wed, 5 Feb 2025 08:44:06 -0500 Subject: [PATCH 144/669] fixing CIDR default --- ...rosa-sts-overview-of-the-default-cluster-specifications.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/rosa-sts-overview-of-the-default-cluster-specifications.adoc b/modules/rosa-sts-overview-of-the-default-cluster-specifications.adoc index b0444403f418..02d4754f0ea0 100644 --- a/modules/rosa-sts-overview-of-the-default-cluster-specifications.adoc +++ b/modules/rosa-sts-overview-of-the-default-cluster-specifications.adoc @@ -129,7 +129,7 @@ endif::tf-classic,tf-hcp[] ifndef::tf-classic,tf-hcp[] * Machine CIDR: 10.0.0.0/16 * Service CIDR: 172.30.0.0/16 -* Pod CIDR: 10.128.0.0/16 +* Pod CIDR: 10.128.0.0/14 endif::tf-classic,tf-hcp[] * Host prefix: /23 + From 1b0816cd99382b69413ee05439ece4b39d033c62 Mon Sep 17 00:00:00 2001 From: Shane Lovern Date: Fri, 31 Jan 2025 16:57:02 +0000 Subject: [PATCH 145/669] TELCODOCS-1975 - ZTP Configuring the hub cluster for backup and restore --- .../ztp-preparing-the-hub-cluster.adoc | 11 ++ ...he-hub-cluster-for-backup-and-restore.adoc | 177 ++++++++++++++++++ 2 files changed, 188 insertions(+) create mode 100644 modules/ztp-configuring-the-hub-cluster-for-backup-and-restore.adoc diff --git a/edge_computing/ztp-preparing-the-hub-cluster.adoc b/edge_computing/ztp-preparing-the-hub-cluster.adoc index 7a93e3f19145..6d757bce6773 100644 --- a/edge_computing/ztp-preparing-the-hub-cluster.adoc +++ b/edge_computing/ztp-preparing-the-hub-cluster.adoc @@ -62,3 +62,14 @@ include::snippets/pgt-deprecation-notice.adoc[] * xref:../edge_computing/policygenerator_for_ztp/ztp-configuring-managed-clusters-policygenerator.adoc#ztp-comparing-pgt-and-rhacm-pg-patching-strategies_ztp-configuring-managed-clusters-policygenerator[Comparing {rh-rhacm} PolicyGenerator and PolicyGenTemplate resource patching] include::modules/ztp-preparing-the-ztp-git-repository-ver-ind.adoc[leveloffset=+1] + +include::modules/ztp-configuring-the-hub-cluster-for-backup-and-restore.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/latest/html/business_continuity/business-cont-overview#managed-cluster-activation-data[Restoring managed cluster activation data] + +* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/latest/html/business_continuity/business-cont-overview#active-passive-config[Active-passive configuration] + +* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/latest/html/business_continuity/business-cont-overview#restore-activation-resources[Restoring activation resources] diff --git 
a/modules/ztp-configuring-the-hub-cluster-for-backup-and-restore.adoc b/modules/ztp-configuring-the-hub-cluster-for-backup-and-restore.adoc
new file mode 100644
index 000000000000..8a89a5da9636
--- /dev/null
+++ b/modules/ztp-configuring-the-hub-cluster-for-backup-and-restore.adoc
@@ -0,0 +1,177 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/ztp_far_edge/ztp-preparing-the-hub-cluster.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="ztp-configuring-the-hub-cluster-for-backup-and-restore_{context}"]
+= Configuring the hub cluster for backup and restore
+
+You can use {ztp} to configure a set of policies to back up `BareMetalHost` resources.
+This allows you to recover data from a failed hub cluster and deploy a replacement cluster using {rh-rhacm-first}.
+
+.Prerequisites
+
+* You have installed the OpenShift CLI (`oc`).
+
+* You have logged in as a user with `cluster-admin` privileges.
+
+.Procedure
+
+. Create a policy to add the `cluster.open-cluster-management.io/backup=cluster-activation` label to all `BareMetalHost` resources that have the `infraenvs.agent-install.openshift.io` label.
+Save the policy as `BareMetalHostBackupPolicy.yaml`.
++
+The following example adds the `cluster.open-cluster-management.io/backup` label to all `BareMetalHost` resources that have the `infraenvs.agent-install.openshift.io` label:
++
+.Example Policy
+[source,yaml]
+----
+apiVersion: policy.open-cluster-management.io/v1
+kind: Policy
+metadata:
+  name: bmh-cluster-activation-label
+  annotations:
+    policy.open-cluster-management.io/description: Policy used to add the cluster.open-cluster-management.io/backup=cluster-activation label to all BareMetalHost resources
+spec:
+  disabled: false
+  policy-templates:
+  - objectDefinition:
+      apiVersion: policy.open-cluster-management.io/v1
+      kind: ConfigurationPolicy
+      metadata:
+        name: set-bmh-backup-label
+      spec:
+        object-templates-raw: |
+          {{- /* Set cluster-activation label on all BMH resources */ -}}
+          {{- $infra_label := "infraenvs.agent-install.openshift.io" }}
+          {{- range $bmh := (lookup "metal3.io/v1alpha1" "BareMetalHost" "" "" $infra_label).items }}
+          - complianceType: musthave
+            objectDefinition:
+              kind: BareMetalHost
+              apiVersion: metal3.io/v1alpha1
+              metadata:
+                name: {{ $bmh.metadata.name }}
+                namespace: {{ $bmh.metadata.namespace }}
+                labels:
+                  cluster.open-cluster-management.io/backup: cluster-activation <1>
+          {{- end }}
+        remediationAction: enforce
+        severity: high
+---
+apiVersion: cluster.open-cluster-management.io/v1beta1
+kind: Placement
+metadata:
+  name: bmh-cluster-activation-label-pr
+spec:
+  predicates:
+  - requiredClusterSelector:
+      labelSelector:
+        matchExpressions:
+        - key: name
+          operator: In
+          values:
+          - local-cluster
+---
+apiVersion: policy.open-cluster-management.io/v1
+kind: PlacementBinding
+metadata:
+  name: bmh-cluster-activation-label-binding
+placementRef:
+  name: bmh-cluster-activation-label-pr
+  apiGroup: cluster.open-cluster-management.io
+  kind: Placement
+subjects:
+  - name: bmh-cluster-activation-label
+    apiGroup: policy.open-cluster-management.io
+    kind: Policy
+---
+apiVersion: cluster.open-cluster-management.io/v1beta2
+kind: ManagedClusterSetBinding
+metadata:
+  name: default
+  namespace: default
+spec:
+  clusterSet: default
+----
+<1> If you apply the `cluster.open-cluster-management.io/backup: cluster-activation` label to `BareMetalHost` resources, the {rh-rhacm} cluster backs up those resources.
+When you restore the hub activation resources, you can restore the `BareMetalHost` resources if the active cluster becomes unavailable.
+
+. Apply the policy by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f BareMetalHostBackupPolicy.yaml
+----
+
+.Verification
+
+. Find all `BareMetalHost` resources with the label `infraenvs.agent-install.openshift.io` by running the following command:
++
+[source,terminal]
+----
+$ oc get BareMetalHost -A -l infraenvs.agent-install.openshift.io
+----
++
+.Example output
+[source,terminal]
+----
+NAMESPACE         NAME            STATE   CONSUMER   ONLINE   ERROR   AGE
+baremetal-ns      baremetal-name          false              50s
+----
+
+. Verify that the policy has applied the label `cluster.open-cluster-management.io/backup=cluster-activation` to all these resources by running the following command:
++
+[source,terminal]
+----
+$ oc get BareMetalHost -A -l infraenvs.agent-install.openshift.io,cluster.open-cluster-management.io/backup=cluster-activation
+----
++
+.Example output
+[source,terminal]
+----
+NAMESPACE         NAME            STATE   CONSUMER   ONLINE   ERROR   AGE
+baremetal-ns      baremetal-name          false              50s
+----
++
+The output must show the same list as in the previous step, which listed all `BareMetalHost` resources with the label `infraenvs.agent-install.openshift.io`.
+This confirms that all the `BareMetalHost` resources with the `infraenvs.agent-install.openshift.io` label also have the `cluster.open-cluster-management.io/backup: cluster-activation` label.
++
+The following example shows a `BareMetalHost` resource with the `infraenvs.agent-install.openshift.io` label.
+The resource must also have the `cluster.open-cluster-management.io/backup: cluster-activation` label, which was added by the policy created in step 1.
++
+[source,yaml]
+----
+apiVersion: metal3.io/v1alpha1
+kind: BareMetalHost
+metadata:
+  labels:
+    cluster.open-cluster-management.io/backup: cluster-activation
+    infraenvs.agent-install.openshift.io: value
+  name: baremetal-name
+  namespace: baremetal-ns
+----
+
+You can now use {rh-rhacm} to restore a managed cluster.
+
+[IMPORTANT]
+====
+When you restore `BareMetalHost` resources as part of restoring the cluster activation data, you must also restore the `BareMetalHost` status.
+The following {rh-rhacm} `Restore` resource example restores the activation resources, including `BareMetalHost` resources, and also restores the status of those resources:
+[source,yaml]
+----
+apiVersion: cluster.open-cluster-management.io/v1beta1
+kind: Restore
+metadata:
+  name: restore-acm-bmh
+  namespace: open-cluster-management-backup
+spec:
+  cleanupBeforeRestore: CleanupRestored
+  veleroManagedClustersBackupName: latest <1>
+  veleroCredentialsBackupName: latest
+  veleroResourcesBackupName: latest
+  restoreStatus:
+    includedResources:
+    - BareMetalHosts <2>
+----
+<1> Set `veleroManagedClustersBackupName: latest` to restore activation resources.
+<2> Restores the status for `BareMetalHost` resources.
+====
\ No newline at end of file

From a8b6a2d1f17e71ce028d576aebbfa3e187fc66d9 Mon Sep 17 00:00:00 2001
From: Daniel Chadwick
Date: Fri, 17 Jan 2025 12:24:43 -0500
Subject: [PATCH 146/669] osdocs8646b Rewriting introduction to networking
 (ContentX)

---
 _topic_maps/_topic_map.yml                    |   2 -
 modules/nw-load-balancing-about.adoc          |  23 +++
 ...-load-balancing-configure-define-type.adoc |  24 +++
 ...-balancing-configure-specify-behavior.adoc |  35 +++++
 modules/nw-load-balancing-configure.adoc      |   9 ++
 ...ing-networking-choosing-service-types.adoc |  34 +++++
 ...rstanding-networking-common-practices.adoc |  13 ++
 ...anding-networking-concepts-components.adoc |  27 ++++
 .../nw-understanding-networking-controls.adoc |  15 ++
 ...-understanding-networking-dns-example.adoc | 106 ++++++++++++++
 ...nw-understanding-networking-dns-terms.adoc |  19 +++
 modules/nw-understanding-networking-dns.adoc  |  11 ++
 ...ding-networking-exposing-applications.adoc |  11 ++
 .../nw-understanding-networking-features.adoc |  35 +++++
 ...nding-networking-how-pods-communicate.adoc |   9 ++
 .../nw-understanding-networking-ingress.adoc  |  14 ++
 ...ng-networking-networking-in-OpenShift.adoc |  16 ++
 ...ing-networking-nodes-clients-clusters.adoc |   9 ++
 ...tanding-networking-pod-to-pod-example.adoc |  32 ++++
 ...w-understanding-networking-pod-to-pod.adoc |  11 ++
 ...ing-networking-routes-ingress-example.adoc | 137 ++++++++++++++
 ...derstanding-networking-routes-ingress.adoc |   9 ++
 ...standing-networking-routes-vs-ingress.adoc |   9 ++
 .../nw-understanding-networking-routes.adoc   |  12 ++
 ...nding-networking-securing-connections.adoc |  17 +++
 ...rstanding-networking-security-example.adoc |  78 ++++++++++
 .../nw-understanding-networking-security.adoc |   9 ++
 ...ing-networking-service-to-pod-example.adoc |  52 +++++++
 ...derstanding-networking-service-to-pod.adoc |  56 +++++++
 ...rstanding-networking-what-is-a-client.adoc |   9 ++
 ...standing-networking-what-is-a-cluster.adoc |   9 ++
 ...derstanding-networking-what-is-a-node.adoc |   9 ++
 networking/about-networking.adoc              |  30 ----
 networking/understanding-networking.adoc      |  81 +++++++++--
 34 files changed, 926 insertions(+), 46 deletions(-)
 create mode 100644 modules/nw-load-balancing-about.adoc
 create mode 100644 modules/nw-load-balancing-configure-define-type.adoc
 create mode 100644 modules/nw-load-balancing-configure-specify-behavior.adoc
 create mode 100644 modules/nw-load-balancing-configure.adoc
 create mode 100644 modules/nw-understanding-networking-choosing-service-types.adoc
 create mode 100644 modules/nw-understanding-networking-common-practices.adoc
 create mode 100644 modules/nw-understanding-networking-concepts-components.adoc
 create mode 100644 modules/nw-understanding-networking-controls.adoc
 create mode 100644 modules/nw-understanding-networking-dns-example.adoc
 create mode 100644 modules/nw-understanding-networking-dns-terms.adoc
 create mode 100644 modules/nw-understanding-networking-dns.adoc
 create mode 100644 modules/nw-understanding-networking-exposing-applications.adoc
 create mode 100644 modules/nw-understanding-networking-features.adoc
 create mode 100644 modules/nw-understanding-networking-how-pods-communicate.adoc
 create mode 100644 modules/nw-understanding-networking-ingress.adoc
 create mode 100644 modules/nw-understanding-networking-networking-in-OpenShift.adoc
 create mode 100644 modules/nw-understanding-networking-nodes-clients-clusters.adoc
 create mode 100644 modules/nw-understanding-networking-pod-to-pod-example.adoc
 create mode 100644 modules/nw-understanding-networking-pod-to-pod.adoc
 create mode 100644 modules/nw-understanding-networking-routes-ingress-example.adoc
 create mode 100644 modules/nw-understanding-networking-routes-ingress.adoc
 create mode 100644 modules/nw-understanding-networking-routes-vs-ingress.adoc
 create mode 100644 modules/nw-understanding-networking-routes.adoc
 create mode 100644 modules/nw-understanding-networking-securing-connections.adoc
 create mode 100644 modules/nw-understanding-networking-security-example.adoc
 create mode 100644 modules/nw-understanding-networking-security.adoc
 create mode 100644 modules/nw-understanding-networking-service-to-pod-example.adoc
 create mode 100644 modules/nw-understanding-networking-service-to-pod.adoc
 create mode 100644 modules/nw-understanding-networking-what-is-a-client.adoc
 create mode 100644 modules/nw-understanding-networking-what-is-a-cluster.adoc
 create mode 100644 modules/nw-understanding-networking-what-is-a-node.adoc
 delete mode 100644 networking/about-networking.adoc

diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml
index bcf10252c48a..71a08c5a277c 100644
--- a/_topic_maps/_topic_map.yml
+++ b/_topic_maps/_topic_map.yml
@@ -1333,8 +1333,6 @@ Name: Networking
 Dir: networking
 Distros: openshift-enterprise,openshift-origin
 Topics:
-- Name: About networking
-  File: about-networking
 - Name: Understanding networking
   File: understanding-networking
 - Name: Zero trust networking
diff --git a/modules/nw-load-balancing-about.adoc b/modules/nw-load-balancing-about.adoc
new file mode 100644
index 000000000000..fc4e001185ea
--- /dev/null
+++ b/modules/nw-load-balancing-about.adoc
@@ -0,0 +1,23 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-load-balancing-about_{context}"]
+= Supported load balancers
+
+Load balancing distributes incoming network traffic across multiple servers to maintain the health and efficiency of your clusters by ensuring that no single server bears too much load. Load balancers are devices that perform load balancing. They act as intermediaries between clients and servers to manage and direct traffic based on predefined rules.
+
+{product-title} supports the following types of load balancers:
+
+* Classic Load Balancer (CLB)
+* Elastic Load Balancing (ELB)
+* Network Load Balancer (NLB)
+* Application Load Balancer (ALB)
+
+ELB is the default load-balancer type for AWS routers. CLB is the default for self-managed environments. NLB is the default for Red Hat OpenShift Service on AWS (ROSA).
+
+[IMPORTANT]
+====
+Use ALB in front of an application but not in front of a router. Using an ALB requires the AWS Load Balancer Operator add-on. This operator is not supported for all {aws-first} regions or for all {product-title} profiles.
+====
\ No newline at end of file
diff --git a/modules/nw-load-balancing-configure-define-type.adoc b/modules/nw-load-balancing-configure-define-type.adoc
new file mode 100644
index 000000000000..178e733d54ec
--- /dev/null
+++ b/modules/nw-load-balancing-configure-define-type.adoc
@@ -0,0 +1,24 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-load-balancing-configure-define-type_{context}"]
+= Define the default load balancer type
+
+When installing the cluster, you can specify the type of load balancer that you want to use. The type of load balancer that you choose at cluster installation is applied to the entire cluster.
+
+This example shows how to define the default load-balancer type for a cluster deployed on {aws-short}. You can apply the procedure on other supported platforms.
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Network
+metadata:
+  name: cluster
+platform:
+  aws: <1>
+    lbType: classic <2>
+----
+<1> The `platform` key represents the platform on which you have deployed your cluster. This example uses `aws`.
+<2> The `lbType` key represents the load balancer type. This example uses the Classic Load Balancer, `classic`.
\ No newline at end of file
diff --git a/modules/nw-load-balancing-configure-specify-behavior.adoc b/modules/nw-load-balancing-configure-specify-behavior.adoc
new file mode 100644
index 000000000000..0420ac3ed7d4
--- /dev/null
+++ b/modules/nw-load-balancing-configure-specify-behavior.adoc
@@ -0,0 +1,35 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-load-balancing-configure-specify-behavior_{context}"]
+= Specify load balancer behavior for an Ingress Controller
+
+After you install a cluster, you can configure your Ingress Controller to specify how services are exposed to external networks, so that you can better control the settings and behavior of a load balancer.
+
+[NOTE]
+====
+Changing the load balancer settings on an Ingress Controller might override the load balancer settings you specified at installation.
+====
+
+[source,yaml]
+----
+apiVersion: operator.openshift.io/v1
+kind: IngressController
+metadata:
+  name: default
+  namespace: openshift-ingress-operator
+spec:
+  endpointPublishingStrategy:
+    loadBalancer: <1>
+      dnsManagementPolicy: Managed
+      providerParameters:
+        aws:
+          classicLoadBalancer: <2>
+            connectionIdleTimeout: 0s
+          type: Classic
+        type: AWS
+      scope: External
+    type: LoadBalancerService
+----
+<1> The `loadBalancer` field specifies the load balancer configuration settings.
+<2> The `classicLoadBalancer` field sets the load balancer to `classic` and includes settings specific to the CLB on {aws-short}.
\ No newline at end of file
diff --git a/modules/nw-load-balancing-configure.adoc b/modules/nw-load-balancing-configure.adoc
new file mode 100644
index 000000000000..b4d50bdfafc4
--- /dev/null
+++ b/modules/nw-load-balancing-configure.adoc
@@ -0,0 +1,9 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-load-balancing-configure_{context}"]
+= Configuring load balancers
+
+You can define your default load-balancer type during cluster installation. After installation, you can configure your Ingress Controller to behave in a specific way that is not covered by the global platform configuration that you defined at cluster installation.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-choosing-service-types.adoc b/modules/nw-understanding-networking-choosing-service-types.adoc
new file mode 100644
index 000000000000..cc26f952937d
--- /dev/null
+++ b/modules/nw-understanding-networking-choosing-service-types.adoc
@@ -0,0 +1,34 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-choosing-service-types_{context}"]
+= Choosing between service types and API resources
+
+Service types and API resources offer different benefits for exposing applications and securing network connections.
+By leveraging the appropriate service type or API resource, you can effectively manage how your applications are exposed and ensure secure, reliable access for both internal and external clients.
+
+{product-title} supports the following service types and API resources:
+
+* Service Types
+
+** `ClusterIP` is intended for internal-only exposure. It is easy to set up and provides a stable internal IP address for accessing services within the cluster. `ClusterIP` is suitable for communication between services within the cluster.
+
+** `NodePort` allows external access by exposing the service on each node's IP at a static port. It is straightforward to set up and useful for development and testing. `NodePort` is good for simple external access without the need for a load balancer from the cloud provider.
+
+** `LoadBalancer` automatically provisions an external load balancer to distribute traffic across multiple nodes.
+It is ideal for production environments where reliable, high-availability access is needed.
+
+** `ExternalName` maps a service to an external DNS name to allow services outside the cluster to be accessed using the service's DNS name. It is good for integrating external services or legacy systems with the cluster.
+
+** A headless service provides a DNS name that resolves to the list of pod IP addresses instead of a stable `ClusterIP`. This is ideal for stateful applications or scenarios where direct access to individual pod IPs is needed.
+
+* API Resources
+
+** `Ingress` provides control over routing HTTP and HTTPS traffic, including support for load balancing, SSL/TLS termination, and name-based virtual hosting. It is more flexible than services alone and supports multiple domains and paths. `Ingress` is ideal when complex routing is required.
+
+** `Route` is similar to `Ingress` but provides additional features, including TLS re-encryption and passthrough. It simplifies the process of exposing services externally. `Route` is best when you need advanced features, such as integrated certificate management.
+
+If you need a simple way to expose a service to external traffic, `Route` or `Ingress` might be the best choice. These resources can be managed by a namespace admin or developer. The easiest approach is to create a route, check its external DNS name, and configure your DNS to have a CNAME that points to the external DNS name.
+
+For HTTP/HTTPS/TLS, `Route` or `Ingress` should suffice. Anything else is more complex and requires a cluster admin to ensure ports are accessible or MetalLB is configured. `LoadBalancer` services are also an option in cloud environments or appropriately configured bare-metal environments.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-common-practices.adoc b/modules/nw-understanding-networking-common-practices.adoc
new file mode 100644
index 000000000000..de0ce2ed8a6c
--- /dev/null
+++ b/modules/nw-understanding-networking-common-practices.adoc
@@ -0,0 +1,13 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-common-practices_{context}"]
+= Common practices for networking services
+
+In {product-title}, services create a single IP address for clients to use, even if multiple pods are providing that service. This abstraction enables seamless scaling, fault tolerance, and rolling upgrades without affecting clients.
+
+Network security policies manage traffic within the cluster. Network controls empower namespace administrators to define ingress and egress rules for their pods. By using network administration policies, cluster administrators can establish namespace policies, override namespace policies, or set default policies when none are defined.
+
+Egress firewall configurations control outbound traffic from pods. These configuration settings ensure that only authorized communication occurs. The ingress node firewall protects nodes by controlling incoming traffic. Additionally, user-defined networks (UDNs) manage data traffic across the cluster.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-concepts-components.adoc b/modules/nw-understanding-networking-concepts-components.adoc
new file mode 100644
index 000000000000..e23c509e363a
--- /dev/null
+++ b/modules/nw-understanding-networking-concepts-components.adoc
@@ -0,0 +1,27 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-concepts-components_{context}"]
+= Networking concepts and components
+
+Networking in {product-title} uses several key components and concepts.
+
+* Pods are the smallest deployable units in Kubernetes, and services provide stable IP addresses and DNS names for sets of pods. Each pod in a cluster is assigned a unique IP address. Pods use IP addresses to communicate directly with other pods, regardless of which node they are on. Pod IP addresses change when pods are destroyed and created. Services are also assigned unique IP addresses. A service is associated with the pods that can provide the service. When accessed, the service IP address provides a stable way to access pods by sending traffic to one of the pods that backs the service.
+
+* Route and Ingress APIs define rules that route HTTP, HTTPS, and TLS traffic to services within the cluster. {product-title} provides both Route and Ingress APIs as part of the default installation, but you can add third-party Ingress Controllers to the cluster.
+
+* The Container Network Interface (CNI) plugin manages the pod network to enable pod-to-pod communication.
+
+* The Cluster Network Operator (CNO) manages the networking plugin components of a cluster. Using the CNO, you can set the network configuration, such as the pod network CIDR and service network CIDR, as shown in the example after this list.
+
+* The DNS Operator manages DNS services within the cluster to ensure that services are reachable by their DNS names.
+
+* Network controls define how pods are allowed to communicate with each other and with other network endpoints. These policies help secure the cluster by controlling traffic flow and enforcing rules for pod communication.
+
+* Load balancing distributes network traffic across multiple servers to ensure reliability and performance.
+
+* Service discovery is a mechanism for services to find and communicate with each other within the cluster.
+
+* The Ingress Operator uses {product-title} Route to manage the router and enable external access to cluster services.
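+
+For illustration, the following `Network` custom resource sketch shows the kind of cluster-wide configuration that the CNO manages. The CIDR ranges shown are common defaults, used here as assumptions; your cluster might use different values.
+
+[source,yaml]
+----
+apiVersion: config.openshift.io/v1
+kind: Network
+metadata:
+  name: cluster
+spec:
+  clusterNetwork: # Pod network; each node receives a slice of this range
+  - cidr: 10.128.0.0/14
+    hostPrefix: 23
+  serviceNetwork: # Stable virtual IP range for services
+  - 172.30.0.0/16
+  networkType: OVNKubernetes
+----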
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-controls.adoc b/modules/nw-understanding-networking-controls.adoc
new file mode 100644
index 000000000000..547a7211bb8b
--- /dev/null
+++ b/modules/nw-understanding-networking-controls.adoc
@@ -0,0 +1,15 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-controls_{context}"]
+= Network controls
+
+Network controls define rules for how pods are allowed to communicate with each other and with other network endpoints. Network controls are implemented at the network level to ensure that only allowed traffic can flow between pods. This helps secure the cluster by restricting traffic flow and preventing unauthorized access.
+
+* Admin network policies (ANP): ANPs are cluster-scoped custom resources. As a cluster administrator, you can use an ANP to define network policies at a cluster level. You cannot override these policies by using regular network policy objects. These policies enforce strict network security rules across the entire cluster. ANPs can specify ingress and egress rules to allow administrators to control the traffic that enters and leaves the cluster.
+
+* Egress firewall: The egress firewall restricts egress traffic leaving the cluster. With this firewall, administrators can limit the external hosts that pods can access from within the cluster. You can configure egress firewall policies to allow or deny traffic to specific IP ranges, DNS names, or external services. This helps prevent unauthorized access to external resources and ensures that only allowed traffic can leave the cluster.
+
+* Ingress node firewall: The ingress node firewall controls ingress traffic to the nodes in a cluster. With this firewall, administrators define rules that restrict which external hosts can initiate connections to the nodes. This helps protect the nodes from unauthorized access and ensures that only trusted traffic can reach the cluster.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-dns-example.adoc b/modules/nw-understanding-networking-dns-example.adoc
new file mode 100644
index 000000000000..b2a2c78452f3
--- /dev/null
+++ b/modules/nw-understanding-networking-dns-example.adoc
@@ -0,0 +1,106 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="nw-understanding-networking-dns-example_{context}"]
+= Example: DNS use case
+
+For this example, a front-end application is running in one set of pods and a back-end service is running in another set of pods. The front-end application needs to communicate with the back-end service. You create a service for the back-end pods that gives it a stable IP address and DNS name. The front-end pods use this DNS name to access the back-end service regardless of changes to individual pod IP addresses.
+
+By creating a service for the back-end pods, you provide a stable IP and DNS name, `backend-service.default.svc.cluster.local`, that the front-end pods can use to communicate with the back-end service. This setup ensures that even if individual pod IP addresses change, the communication remains consistent and reliable.
+
+The following steps demonstrate an example of how to configure front-end pods to communicate with a back-end service using DNS.
+
+. Create the back-end service.
+
+.. Deploy the back-end pods.
++
+[source,yaml]
+----
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: backend-deployment
+  labels:
+    app: backend
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: backend
+  template:
+    metadata:
+      labels:
+        app: backend
+    spec:
+      containers:
+      - name: backend-container
+        image: your-backend-image
+        ports:
+        - containerPort: 8080
+----
+
+.. Define a service to expose the back-end pods.
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Service
+metadata:
+  name: backend-service
+spec:
+  selector:
+    app: backend
+  ports:
+  - protocol: TCP
+    port: 80
+    targetPort: 8080
+----
+
+. Create the front-end pods.
+
+.. Define the front-end pods.
++
+[source,yaml]
+----
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: frontend-deployment
+  labels:
+    app: frontend
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: frontend
+  template:
+    metadata:
+      labels:
+        app: frontend
+    spec:
+      containers:
+      - name: frontend-container
+        image: your-frontend-image
+        ports:
+        - containerPort: 80
+----
+
+.. Apply the pod definition to your cluster.
++
+[source,terminal]
+----
+$ oc apply -f frontend-deployment.yaml
+----
+
+. Configure the front-end to communicate with the back-end.
++
+In your front-end application code, use the DNS name of the back-end service to send requests. For example, if your front-end application needs to fetch data from the back-end pod, your application might include the following code:
++
+[source,javascript]
+----
+fetch('http://backend-service.default.svc.cluster.local/api/data')
+  .then(response => response.json())
+  .then(data => console.log(data));
+----
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-dns-terms.adoc b/modules/nw-understanding-networking-dns-terms.adoc
new file mode 100644
index 000000000000..4611f512ed15
--- /dev/null
+++ b/modules/nw-understanding-networking-dns-terms.adoc
@@ -0,0 +1,19 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-dns-terms_{context}"]
+= Key DNS terms
+
+* CoreDNS: CoreDNS is the DNS server and provides name resolution for services and pods.
+
+* DNS names: Services are assigned DNS names based on their namespace and name. For example, a service named `my-service` in the `default` namespace would have the DNS name `my-service.default.svc.cluster.local`.
+
+* Domain names: Domain names are the human-friendly names used to access websites and services, such as `example.com`.
+
+* IP addresses: IP addresses are numerical labels assigned to each device connected to a computer network that uses IP for communication. An example of an IPv4 address is `192.0.2.1`. An example of an IPv6 address is `2001:0db8:85a3:0000:0000:8a2e:0370:7334`.
+
+* DNS servers: DNS servers are specialized servers that store DNS records. These records map domain names to IP addresses. When you type a domain name into your browser, your computer contacts a DNS server to find the corresponding IP address.
+
+* Resolution process: A DNS query is sent to a DNS resolver. The DNS resolver then contacts a series of DNS servers to find the IP address associated with the domain name. The resolver tries the name with a series of search domains, such as `<namespace>.svc.cluster.local`, `svc.cluster.local`, and `cluster.local`. This process stops at the first match. The IP address is returned to your browser, which then connects to the web server at that IP address.
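+
+As an illustration of the resolution process, the search list that a pod's resolver typically uses looks like the following sketch. The `my-app` namespace and the nameserver address are assumptions for this example.
+
+[source,text]
+----
+# /etc/resolv.conf inside a pod in the my-app namespace (illustrative)
+search my-app.svc.cluster.local svc.cluster.local cluster.local
+nameserver 172.30.0.10
+----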
diff --git a/modules/nw-understanding-networking-dns.adoc b/modules/nw-understanding-networking-dns.adoc
new file mode 100644
index 000000000000..aa062fa907d2
--- /dev/null
+++ b/modules/nw-understanding-networking-dns.adoc
@@ -0,0 +1,11 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-dns_{context}"]
+= The Domain Name System (DNS)
+
+The Domain Name System (DNS) is a hierarchical and decentralized naming system used to translate human-friendly domain names, such as www.example.com, into IP addresses that identify computers on a network. DNS plays a crucial role in service discovery and name resolution.
+
+{product-title} provides a built-in DNS to ensure that services can be reached by their DNS names. This helps maintain stable communication even if the underlying IP addresses change. When you start a pod, environment variables for service names, IP addresses, and ports are created automatically to enable the pod to communicate with other services.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-exposing-applications.adoc b/modules/nw-understanding-networking-exposing-applications.adoc
new file mode 100644
index 000000000000..211dbc0eb5c9
--- /dev/null
+++ b/modules/nw-understanding-networking-exposing-applications.adoc
@@ -0,0 +1,11 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-exposing-applications_{context}"]
+= Exposing applications
+
+ClusterIP exposes services on an internal IP within the cluster to make the service accessible only to other services within the cluster. The NodePort service type exposes the service on a static port on each node's IP. This service type allows external traffic to access the service. The LoadBalancer service type is typically used in cloud environments, or in bare-metal environments that use MetalLB. This service type provisions an external load balancer that routes external traffic to the service. In bare-metal environments, MetalLB uses VIPs and ARP announcements or BGP announcements.
+
+Ingress is an API object that manages external access to services, such as load balancing, SSL/TLS termination, and name-based virtual hosting. An Ingress Controller, such as NGINX or HAProxy, implements the Ingress API and handles traffic routing based on user-defined rules.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-features.adoc b/modules/nw-understanding-networking-features.adoc
new file mode 100644
index 000000000000..1804efe0ad91
--- /dev/null
+++ b/modules/nw-understanding-networking-features.adoc
@@ -0,0 +1,35 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-features_{context}"]
+= Networking features
+
+{product-title} offers several networking features and enhancements. These features and enhancements include the following:
+
+* Ingress Operator and Route API: {product-title} includes an Ingress Operator that implements the Ingress Controller API. This component enables external access to cluster services by deploying and managing HAProxy-based Ingress Controllers that support advanced routing configurations and load balancing. {product-title} uses the Route API to translate upstream Ingress objects to route objects. Routes are specific to networking in {product-title}, but you can also use third-party Ingress Controllers.
+
+* Enhanced security: {product-title} provides advanced network security features, such as the egress firewall and the ingress node firewall.
++
+** Egress firewall: The egress firewall controls and restricts outbound traffic from pods within the cluster. You can set rules to limit the external hosts or IP ranges with which pods can communicate.
+** Ingress node firewall: The ingress node firewall is managed by the Ingress Firewall Operator and provides firewall rules at the node level. You can protect your nodes from threats by configuring this firewall on specific nodes within the cluster to filter incoming traffic before it reaches these nodes.
++
+[NOTE]
+====
+{product-title} also implements services, such as Network Policy, Admin Network Policy, and Security Context Constraints (SCC) to secure communication between pods and enforce access controls.
+====
+
+* Role-based access control (RBAC): {product-title} extends Kubernetes RBAC to provide more granular control over who can access and manage network resources. RBAC helps maintain security and compliance within the cluster.
+
+* Multi-tenancy support: {product-title} offers multi-tenancy support to enable multiple users and teams to share the same cluster while keeping their resources isolated and secure.
+
+* Hybrid and multi-cloud capabilities: {product-title} is designed to work seamlessly across on-premises, cloud, and multi-cloud environments. This flexibility allows organizations to deploy and manage containerized applications across different infrastructures.
+
+* Observability and monitoring: {product-title} provides integrated observability and monitoring tools that help manage and troubleshoot network issues. These tools include role-based access to network metrics and logs.
+
+* User-defined networks (UDN): UDNs allow administrators to customize network configurations. UDNs provide enhanced network isolation and IP address management.
+
+* Egress IP: Egress IP allows you to assign a fixed source IP address for all egress traffic originating from pods within a namespace. Egress IP can improve security and access control by ensuring consistent source IP addresses for external services. For example, if a pod needs to access an external database that only allows traffic from specific IP addresses, you can configure an egress IP for that pod to meet the access requirements.
+
+* Egress router: An egress router is a pod that acts as a bridge between the cluster and external systems. Egress routers allow traffic from pods to be routed through a specific IP address that is not used for any other purpose. With egress routers, you can enforce access controls or route traffic through a specific gateway.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-how-pods-communicate.adoc b/modules/nw-understanding-networking-how-pods-communicate.adoc
new file mode 100644
index 000000000000..f53f1923be01
--- /dev/null
+++ b/modules/nw-understanding-networking-how-pods-communicate.adoc
@@ -0,0 +1,9 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-how-pods-communicate_{context}"]
+= How pods communicate
+
+Pods use IP addresses to communicate and the Domain Name System (DNS) to discover IP addresses for pods or services. Clusters use various policy types that control what communication is allowed. Pods communicate in two ways: pod-to-pod and service-to-pod.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-ingress.adoc b/modules/nw-understanding-networking-ingress.adoc
new file mode 100644
index 000000000000..e063492f2747
--- /dev/null
+++ b/modules/nw-understanding-networking-ingress.adoc
@@ -0,0 +1,14 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-ingress_{context}"]
+= Ingress
+
+Ingress is a resource that provides advanced routing capabilities, including load balancing, SSL/TLS termination, and name-based virtual hosting. Key points about Ingress include the following:
+
+* HTTP/HTTPS routing: You can use Ingress to define rules for routing HTTP and HTTPS traffic to services within the cluster.
+* Load balancing: Ingress Controllers, such as NGINX or HAProxy, manage traffic routing and load balancing based on user-defined rules.
+* SSL/TLS termination: SSL/TLS termination is the process of decrypting incoming SSL/TLS traffic before passing it to the backend services.
+* Multiple domains and paths: Ingress supports routing traffic for multiple domains and paths.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-networking-in-OpenShift.adoc b/modules/nw-understanding-networking-networking-in-OpenShift.adoc
new file mode 100644
index 000000000000..b1ce91277422
--- /dev/null
+++ b/modules/nw-understanding-networking-networking-in-OpenShift.adoc
@@ -0,0 +1,16 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-networking-in-OpenShift_{context}"]
+= Networking in {product-title}
+
+{product-title} ensures seamless communication between various components within the cluster and between external clients and the cluster. Networking relies on the following core concepts and components:
+
+* Pod-to-pod communication
+* Services
+* DNS
+* Ingress
+* Network controls
+* Load balancing
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-nodes-clients-clusters.adoc b/modules/nw-understanding-networking-nodes-clients-clusters.adoc
new file mode 100644
index 000000000000..2265afb6b9e4
--- /dev/null
+++ b/modules/nw-understanding-networking-nodes-clients-clusters.adoc
@@ -0,0 +1,9 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-nodes-clients-clusters_{context}"]
+= Networking with nodes, clients, and clusters
+
+A node is a machine in the cluster that can run either control-plane components, workload components, or both. A node is either a physical server or a virtual machine. A cluster is a collection of nodes that run containerized applications. Clients are the tools and users that interact with the cluster.
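+
+For example, you can list the nodes in a cluster and their roles with the `oc` client. The node names, ages, and versions in this sketch are illustrative.
+
+[source,terminal]
+----
+$ oc get nodes
+----
+
+.Example output
+[source,terminal]
+----
+NAME                   STATUS   ROLES                  AGE   VERSION
+master-0.example.com   Ready    control-plane,master   90d   v1.30.4
+worker-0.example.com   Ready    worker                 90d   v1.30.4
+----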
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-pod-to-pod-example.adoc b/modules/nw-understanding-networking-pod-to-pod-example.adoc
new file mode 100644
index 000000000000..c6d410a904e2
--- /dev/null
+++ b/modules/nw-understanding-networking-pod-to-pod-example.adoc
@@ -0,0 +1,32 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-pod-to-pod-example_{context}"]
+= Example: Controlling pod-to-pod communication
+
+In a microservices-based application with multiple pods, a frontend pod needs to communicate with a backend pod to retrieve data. By using pod-to-pod communication, either directly or through services, these pods can efficiently exchange information.
+
+To control and secure pod-to-pod communication, you can define network controls. These controls enforce security and compliance requirements by specifying how pods interact with each other based on labels and selectors.
+
+[source,yaml]
+----
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-some-pods
+  namespace: default
+spec:
+  podSelector:
+    matchLabels:
+      role: app
+  ingress:
+  - from:
+    - podSelector:
+        matchLabels:
+          role: backend
+    ports:
+    - protocol: TCP
+      port: 80
+----
diff --git a/modules/nw-understanding-networking-pod-to-pod.adoc b/modules/nw-understanding-networking-pod-to-pod.adoc
new file mode 100644
index 000000000000..a4bcffecbeb6
--- /dev/null
+++ b/modules/nw-understanding-networking-pod-to-pod.adoc
@@ -0,0 +1,11 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-pod-to-pod_{context}"]
+= Pod-to-pod communication
+
+Pod-to-pod communication is the ability of pods to communicate with each other within the cluster. This is crucial for the functioning of microservices and distributed applications.
+
+Each pod in a cluster is assigned a unique IP address that it uses to communicate directly with other pods. Pod-to-pod communication is useful for intra-cluster communication where pods need to exchange data or perform tasks collaboratively. For example, Pod A can send requests directly to Pod B using Pod B's IP address. Pods can communicate over a flat network without Network Address Translation (NAT). This allows for seamless communication between pods across different nodes.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-routes-ingress-example.adoc b/modules/nw-understanding-networking-routes-ingress-example.adoc
new file mode 100644
index 000000000000..816e832a56ac
--- /dev/null
+++ b/modules/nw-understanding-networking-routes-ingress-example.adoc
@@ -0,0 +1,137 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="nw-understanding-networking-routes-ingress-example_{context}"]
+= Example: Configuring routes and ingress to expose a web application
+
+A web application is running on your {product-title} cluster. You want to make the application accessible to external users. The application should be accessible through a specific domain name, and the traffic should be securely encrypted using TLS. The following example shows how to configure both routes and ingress to expose your web application to external traffic securely.
+
+[id="nw-understanding-networking-routes-ingress-example-routes_{context}"]
+== Configuring routes
+
+. Create a new project.
++
+[source,terminal]
+----
+$ oc new-project webapp-project
+----
+
+. Deploy the web application.
++
+[source,terminal]
+----
+$ oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git --name=webapp
+----
+
+. Expose the service with a route.
++
+[source,terminal]
+----
+$ oc expose svc/webapp --hostname=webapp.example.com
+----
+
+. Secure the route with TLS.
++
+The `certificate` and `key` fields of a route contain the PEM-encoded data rather than file paths, so replace the plain route with an edge-terminated route and supply the certificate files when you create it:
++
+[source,terminal]
+----
+$ oc delete route webapp
+$ oc create route edge webapp --service=webapp --hostname=webapp.example.com --cert=path/to/tls.crt --key=path/to/tls.key
+----
+
+[id="nw-understanding-networking-routes-ingress-example-ingress_{context}"]
+== Configuring ingress
+
+. Ensure that an Ingress Controller is installed and running in the cluster.
+
+. Create a service for the web application. If not already created, expose the application as a service.
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Service
+metadata:
+  name: webapp-service
+  namespace: webapp-project
+spec:
+  selector:
+    app: webapp
+  ports:
+  - protocol: TCP
+    port: 80
+    targetPort: 8080
+----
+
+. Create the ingress resource.
++
+[source,yaml]
+----
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: webapp-ingress
+  namespace: webapp-project
+  annotations:
+    kubernetes.io/ingress.class: "nginx"
+spec:
+  rules:
+  - host: webapp.example.com
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: webapp-service
+            port:
+              number: 80
+----
+
+. Secure the ingress with TLS.
+
+.. Create a TLS secret with your certificate and key.
++
+[source,terminal]
+----
+$ oc create secret tls webapp-tls --cert=path/to/tls.crt --key=path/to/tls.key -n webapp-project
+----
+
+.. Update the ingress resource to use the TLS secret.
++
+[source,yaml]
+----
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: webapp-ingress
+  namespace: webapp-project
+spec:
+  tls: <1>
+  - hosts:
+    - webapp.example.com
+    secretName: webapp-tls <2>
+  rules:
+  - host: webapp.example.com
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: webapp-service
+            port:
+              number: 80
+----
+<1> The `tls` section specifies TLS settings.
+<2> The `secretName` field is the name of the Kubernetes secret that contains the TLS certificate and key.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-routes-ingress.adoc b/modules/nw-understanding-networking-routes-ingress.adoc
new file mode 100644
index 000000000000..0bb89800d5bc
--- /dev/null
+++ b/modules/nw-understanding-networking-routes-ingress.adoc
@@ -0,0 +1,9 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-routes-ingress_{context}"]
+= Routes and ingress
+
+Routes and ingress are both used to expose applications to external traffic. However, they serve slightly different purposes and have different capabilities.
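+
+As a point of comparison, a minimal route object looks like the following sketch. The host name and service name are assumptions for this example.
+
+[source,yaml]
+----
+apiVersion: route.openshift.io/v1
+kind: Route
+metadata:
+  name: webapp
+spec:
+  host: webapp.example.com # External host name for the route
+  to:
+    kind: Service
+    name: webapp-service # Service that receives the traffic
+  tls:
+    termination: edge # Decrypt TLS at the router
+----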
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-routes-vs-ingress.adoc b/modules/nw-understanding-networking-routes-vs-ingress.adoc
new file mode 100644
index 000000000000..f873e4f203e0
--- /dev/null
+++ b/modules/nw-understanding-networking-routes-vs-ingress.adoc
@@ -0,0 +1,9 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-routes-vs-ingress_{context}"]
+= Comparing routes and ingress
+
+Routes provide more flexibility and advanced features than ingress, which makes routes suitable for complex routing scenarios, and they are also simple to set up for basic external access needs. Ingress is often used for simpler, straightforward external access, while routes are used for more complex scenarios that require advanced routing and SSL/TLS termination options, such as re-encryption and passthrough.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-routes.adoc b/modules/nw-understanding-networking-routes.adoc
new file mode 100644
index 000000000000..0e5b0299a944
--- /dev/null
+++ b/modules/nw-understanding-networking-routes.adoc
@@ -0,0 +1,12 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-routes_{context}"]
+= Routes
+
+Routes are {product-title}-specific resources that expose a service at a host name so that external clients can reach the service by name.
+
+Routes map a host name to a service, which allows external clients to access the service by using the host name.
+Routes provide load balancing for the traffic directed to the service. The host name used in a route is resolved to the IP address of the router. Routes then forward the traffic to the appropriate service. Routes can also be secured using SSL/TLS to encrypt traffic between the client and the service.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-securing-connections.adoc b/modules/nw-understanding-networking-securing-connections.adoc
new file mode 100644
index 000000000000..0aec88276ecb
--- /dev/null
+++ b/modules/nw-understanding-networking-securing-connections.adoc
@@ -0,0 +1,17 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-securing-connections_{context}"]
+= Securing connections
+
+Ingress Controllers manage SSL/TLS termination to decrypt incoming SSL/TLS traffic before passing it to the backend services. SSL/TLS termination offloads the encryption/decryption process from the application pods. You can use TLS certificates to encrypt traffic between clients and your services. You can manage certificates with tools, such as `cert-manager`, to automate certificate distribution and renewal.
+
+Passthrough routes pass TLS traffic directly to a pod, using the Server Name Indication (SNI) field to select the route. This process allows services that run TCP to be exposed using TLS and not only HTTP/HTTPS. A site administrator can manage the certificates centrally and allow application developers to serve traffic with them, even without permission to read the private keys.
+
+The Route API enables encryption of router-to-pod traffic with cluster-managed certificates. This ensures external certificates are centrally managed while the internal leg remains encrypted. Application developers receive unique private keys for their applications. These keys can be mounted as a secret in the pod.
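+
+For example, a TLS certificate and key stored in a secret can be mounted into a pod as files, as in the following sketch. The secret name `app-tls`, the image name, and the mount path are assumptions for this example.
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name: app-pod
+spec:
+  containers:
+  - name: app
+    image: my-app-image
+    volumeMounts:
+    - name: tls-certs
+      mountPath: /etc/tls # The application reads tls.crt and tls.key from this path
+      readOnly: true
+  volumes:
+  - name: tls-certs
+    secret:
+      secretName: app-tls # Secret that holds the certificate and private key
+----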
+
+Network controls define rules for how pods can communicate with each other and other network endpoints. This enhances security by controlling traffic flow within the cluster. These controls are implemented at the network plugin level to ensure that only allowed traffic flows between pods.
+
+Role-based access control (RBAC) manages permissions and controls who can access resources within the cluster. Service accounts provide identity for pods that access the API. RBAC allows granular control over what each pod can do.
diff --git a/modules/nw-understanding-networking-security-example.adoc b/modules/nw-understanding-networking-security-example.adoc
new file mode 100644
index 000000000000..ef80c695aae4
--- /dev/null
+++ b/modules/nw-understanding-networking-security-example.adoc
@@ -0,0 +1,78 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="nw-understanding-networking-security-example_{context}"]
+= Example: Exposing applications and securing connections
+
+In this example, a web application running in your cluster needs to be accessed by external users.
+
+. Create a service and expose the application as a service using a service type that suits your needs.
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-web-app
+spec:
+  type: LoadBalancer
+  selector:
+    app: my-web-app
+  ports:
+  - port: 80
+    targetPort: 8080
+----
+
+. Define an `Ingress` resource to manage HTTP/HTTPS traffic and route it to your service.
++
+[source,yaml]
+----
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: my-web-app-ingress
+  annotations:
+    kubernetes.io/ingress.class: "nginx"
+spec:
+  rules:
+  - host: mywebapp.example.com
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: my-web-app
+            port:
+              number: 80
+----
+
+. Configure TLS for your ingress to ensure secured, encrypted connections.
++
+[source,yaml]
+----
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: my-web-app-ingress
+  annotations:
+    kubernetes.io/ingress.class: "nginx"
+spec:
+  tls:
+  - hosts:
+    - mywebapp.example.com
+    secretName: my-tls-secret
+  rules:
+  - host: mywebapp.example.com
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: my-web-app
+            port:
+              number: 80
+----
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-security.adoc b/modules/nw-understanding-networking-security.adoc
new file mode 100644
index 000000000000..874e3d02785e
--- /dev/null
+++ b/modules/nw-understanding-networking-security.adoc
@@ -0,0 +1,9 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-security_{context}"]
+= Security and traffic management
+
+Administrators can expose applications to external traffic and secure network connections using service types, such as `ClusterIP`, `NodePort`, and `LoadBalancer`, and API resources such as `Ingress` and `Route`. The Ingress Operator and Cluster Network Operator (CNO) help configure and manage these services and resources. The Ingress Operator deploys and manages one or more Ingress Controllers. These controllers route external HTTP and HTTPS traffic to services within the cluster. The CNO deploys and manages the cluster network components, including pod networks, service networks, and DNS.
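+
+For example, you can list the Ingress Controllers that the Ingress Operator manages in its operator namespace. The output in this sketch is illustrative.
+
+[source,terminal]
+----
+$ oc get ingresscontrollers -n openshift-ingress-operator
+----
+
+.Example output
+[source,terminal]
+----
+NAME      AGE
+default   90d
+----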
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-service-to-pod-example.adoc b/modules/nw-understanding-networking-service-to-pod-example.adoc
new file mode 100644
index 000000000000..af57d39699d9
--- /dev/null
+++ b/modules/nw-understanding-networking-service-to-pod-example.adoc
@@ -0,0 +1,52 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="nw-understanding-networking-service-to-pod-example_{context}"]
+= Example: Controlling service-to-pod communication
+
+A cluster is running a microservices-based application with two components: a front-end and a backend. The front-end needs to communicate with the backend to fetch data.
+
+.Procedure
+
+. Create a backend service.
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Service
+metadata:
+  name: backend
+spec:
+  selector:
+    app: backend
+  ports:
+  - protocol: TCP
+    port: 80
+    targetPort: 8080
+----
+
+. Configure backend pods.
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name: backend-pod
+  labels:
+    app: backend
+spec:
+  containers:
+  - name: backend-container
+    image: my-backend-image
+    ports:
+    - containerPort: 8080
+----
+
+. Establish front-end communication.
++
+The front-end pods can now use the DNS name `backend.default.svc.cluster.local` to communicate with the backend service. The service ensures that the traffic is routed to one of the backend pods.
+
+Service-to-pod communication abstracts the complexity of managing pod IPs and ensures reliable and efficient communication within the cluster.
diff --git a/modules/nw-understanding-networking-service-to-pod.adoc b/modules/nw-understanding-networking-service-to-pod.adoc
new file mode 100644
index 000000000000..c1c770a1ef98
--- /dev/null
+++ b/modules/nw-understanding-networking-service-to-pod.adoc
@@ -0,0 +1,56 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-service-to-pod_{context}"]
+= Service-to-pod communication
+
+Service-to-pod communication ensures that services can reliably route traffic to the appropriate pods. Services are objects that define a logical set of pods and provide a stable endpoint, such as IP addresses and DNS names. Pod IP addresses can change. Services abstract pod IP addresses to provide a consistent way to access the application components even as IP addresses change.
+
+Key concepts of service-to-pod communication include:
+
+* Endpoints: Endpoints define the IP addresses and ports of the pods that are associated with a service.
+
+* Selectors: Selectors use labels, such as key-value pairs, to define the criteria for selecting a set of objects that a service should target.
+
+* Services: Services provide a stable IP address and DNS name for a set of pods. This abstraction allows other components to communicate with the service rather than individual pods.
+
+* Service discovery: DNS makes services discoverable. When a service is created, it is assigned a DNS name. Other pods discover this DNS name and use it to communicate with the service.
+
+* Service Types: Service types control how services are exposed within or outside the cluster.
+
+** ClusterIP exposes the service on an internal cluster IP. It is the default service type and makes the service only reachable from within the cluster.
+
+** NodePort allows external traffic to access the service by exposing the service on each node's IP at a static port.
+
+** LoadBalancer uses a cloud provider's load balancer to expose the service externally.
+
+Services use selectors to identify the pods that should receive the traffic. The selectors match labels on the pods to determine which pods are part of the service. For example, a service with the selector `app: myapp` routes traffic to all pods with the label `app: myapp`.
+
+Endpoints are dynamically updated to reflect the current IP addresses of the pods that match the service selector. {product-title} maintains these endpoints and ensures that the service routes traffic to the correct pods.
+
+The communication flow refers to the sequence of steps and interactions that occur when a service in Kubernetes routes traffic to the appropriate pods. The typical communication flow for service-to-pod communication is as follows:
+
+* Service creation: When you create a service, you define the service type, the port on which the service listens, and the selector labels.
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service
+spec:
+  selector:
+    app: myapp
+  ports:
+  - protocol: TCP
+    port: 80
+    targetPort: 8080
+----
+
+* DNS resolution: Each service has a DNS name that pods can use to communicate with it. For example, if the service is named `my-service` in the `my-app` namespace, its DNS name is `my-service.my-app.svc.cluster.local`.
+
+* Traffic routing: When a pod sends a request to the service's DNS name, {product-title} resolves the name to the service's ClusterIP. The service then uses the endpoints to route the traffic to one of the pods that match its selector.
+
+* Load balancing: Services also provide basic load balancing. They distribute incoming traffic across all the pods that match the selector. This ensures that no single pod is overwhelmed with too much traffic.
diff --git a/modules/nw-understanding-networking-what-is-a-client.adoc b/modules/nw-understanding-networking-what-is-a-client.adoc
new file mode 100644
index 000000000000..99789245a3f7
--- /dev/null
+++ b/modules/nw-understanding-networking-what-is-a-client.adoc
@@ -0,0 +1,9 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-what-is-a-client_{context}"]
+= Understanding external clients
+
+An external client is any entity outside the cluster that interacts with the services and applications running within the cluster. External clients can include end users, external services, and external devices. End users are people who access a web application hosted in the cluster through their browsers or mobile devices. External services are other software systems or applications that interact with the services in the cluster, often through APIs. External devices are any hardware outside the cluster network that needs to communicate with the cluster services, such as Internet of Things (IoT) devices.
diff --git a/modules/nw-understanding-networking-what-is-a-client.adoc b/modules/nw-understanding-networking-what-is-a-client.adoc
new file mode 100644
index 000000000000..99789245a3f7
--- /dev/null
+++ b/modules/nw-understanding-networking-what-is-a-client.adoc
@@ -0,0 +1,9 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-what-is-a-client_{context}"]
+= Understanding external clients
+
+An external client is any entity outside the cluster that interacts with the services and applications running within the cluster. External clients can include end users, external services, and external devices. End users are people who access a web application hosted in the cluster through their browsers or mobile devices. External services are other software systems or applications that interact with the services in the cluster, often through APIs. External devices are any hardware outside the cluster network that needs to communicate with the cluster services, such as Internet of Things (IoT) devices.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-what-is-a-cluster.adoc b/modules/nw-understanding-networking-what-is-a-cluster.adoc
new file mode 100644
index 000000000000..3da3dd10545a
--- /dev/null
+++ b/modules/nw-understanding-networking-what-is-a-cluster.adoc
@@ -0,0 +1,9 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-what-is-a-cluster_{context}"]
+= Understanding clusters
+
+A cluster is a collection of nodes that work together to run containerized applications. These nodes include control plane nodes and compute nodes.
\ No newline at end of file
diff --git a/modules/nw-understanding-networking-what-is-a-node.adoc b/modules/nw-understanding-networking-what-is-a-node.adoc
new file mode 100644
index 000000000000..6aeefd91cd25
--- /dev/null
+++ b/modules/nw-understanding-networking-what-is-a-node.adoc
@@ -0,0 +1,9 @@
+// Module included in the following assemblies:
+//
+// * networking/understanding-networking.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nw-understanding-networking-what-is-a-node_{context}"]
+= What is a node?
+
+Nodes are the physical or virtual machines that run containerized applications. Nodes host the pods and provide resources, such as memory and storage, for running the applications. Nodes enable communication between pods: each pod is assigned an IP address, and pods within the same node can communicate with each other using these IP addresses. Nodes facilitate service discovery by allowing pods to discover and communicate with services within the cluster. Nodes help distribute network traffic among pods to ensure efficient load balancing and high availability of applications. Nodes also provide a bridge between the internal cluster network and external networks, allowing external clients to access services running on the cluster.
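+
+To see these assignments in practice, you can list pods together with their IP addresses and hosting nodes. This is a minimal sketch that queries the current project, assuming it contains running pods:
+
+[source,terminal]
+----
+$ oc get pods -o wide
+----
+
+The `IP` and `NODE` columns in the output show the cluster IP address assigned to each pod and the node that hosts it.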
\ No newline at end of file
diff --git a/networking/about-networking.adoc b/networking/about-networking.adoc
deleted file mode 100644
index d9e1ead66a68..000000000000
--- a/networking/about-networking.adoc
+++ /dev/null
@@ -1,30 +0,0 @@
-:_mod-docs-content-type: ASSEMBLY
-[id="about-networking"]
-= About networking
-include::_attributes/common-attributes.adoc[]
-:context: about-networking
-
-toc::[]
-
-{openshift-networking} is an ecosystem of features, plugins and advanced networking capabilities that extend Kubernetes networking with the advanced networking-related features that your cluster needs to manage its network traffic for one or multiple hybrid clusters. This ecosystem of networking capabilities integrates ingress, egress, load balancing, high-performance throughput, security, inter- and intra-cluster traffic management and provides role-based observability tooling to reduce its natural complexities.
-
-
-The following list highlights some of the most commonly used {openshift-networking} features available on your cluster:
-
-- Primary cluster network provided by either of the following Container Network Interface (CNI) plugins:
-  * xref:../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc#about-ovn-kubernetes[OVN-Kubernetes network plugin], the default plugin
-  * xref:../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc#about-ovn-kubernetes[About the OVN-Kubernetes network plugin]
-- Certified 3rd-party alternative primary network plugins
-- Cluster Network Operator for network plugin management
-- Ingress Operator for TLS encrypted web traffic
-- DNS Operator for name assignment
-- MetalLB Operator for traffic load balancing on bare metal clusters
-- IP failover support for high-availability
-- Additional hardware network support through multiple CNI plugins, including for macvlan, ipvlan, and SR-IOV hardware networks
-- IPv4, IPv6, and dual stack addressing
-- Hybrid Linux-Windows host clusters for Windows-based workloads
-- {SMProductName} for discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring of services
-- {sno-caps}
-- Network Observability Operator for network debugging and insights
-- link:https://catalog.redhat.com/software/container-stacks/detail/5f0c67b7ce85fb9e399f3a12[Submariner] for inter-cluster networking
-- link:https://docs.redhat.com/en/documentation/red_hat_service_interconnect/[Red Hat Service Interconnect] for layer 7 inter-cluster networking
diff --git a/networking/understanding-networking.adoc b/networking/understanding-networking.adoc
index b1b505c15b7b..548c3025bd60 100644
--- a/networking/understanding-networking.adoc
+++ b/networking/understanding-networking.adoc
@@ -5,24 +5,77 @@ include::_attributes/common-attributes.adoc[]
:context: understanding-networking
toc::[]
-Cluster Administrators have several options for exposing applications that run inside a cluster to external traffic and securing network connections:
-* Service types, such as node ports or load balancers
+Understanding the fundamentals of networking in {product-title} helps you ensure efficient and secure communication within your clusters and is essential for effective network administration. Key elements include how pods and services communicate, the role of IP addresses, and the use of DNS for service discovery.
-* API resources, such as `Ingress` and `Route`
+// Introduction
include::modules/nw-understanding-networking-networking-in-OpenShift.adoc[leveloffset=+1]
-By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can network, but clients outside the cluster do not have networking access. When you expose your application to external traffic, giving each pod its own IP address means that pods can be treated like physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration.
+include::modules/nw-understanding-networking-common-practices.adoc[leveloffset=+2]
-[NOTE]
-====
-Some cloud platforms offer metadata APIs that listen on the 169.254.169.254 IP address, a link-local IP address in the IPv4 `169.254.0.0/16` CIDR block.
+include::modules/nw-understanding-networking-features.adoc[leveloffset=+2]
-This CIDR block is not reachable from the pod network.
Pods that need access to these IP addresses must be given host network access by setting the `spec.hostNetwork` field in the pod spec to `true`. +// Nodes, clusters, clients +include::modules/nw-understanding-networking-nodes-clients-clusters.adoc[leveloffset=+1] -If you allow a pod host network access, you grant the pod privileged access to the underlying network infrastructure. -==== +include::modules/nw-understanding-networking-what-is-a-node.adoc[leveloffset=+2] -include::modules/nw-ne-openshift-dns.adoc[leveloffset=+1] -include::modules/nw-ne-openshift-ingress.adoc[leveloffset=+1] -include::modules/nw-ne-comparing-ingress-route.adoc[leveloffset=+2] -include::modules/nw-networking-glossary-terms.adoc[leveloffset=+1] +include::modules/nw-understanding-networking-what-is-a-cluster.adoc[leveloffset=+2] + +include::modules/nw-understanding-networking-what-is-a-client.adoc[leveloffset=+2] + +// Concepts and components +include::modules/nw-understanding-networking-concepts-components.adoc[leveloffset=+1] + +//Pod communication +include::modules/nw-understanding-networking-how-pods-communicate.adoc[leveloffset=+1] + +include::modules/nw-understanding-networking-pod-to-pod.adoc[leveloffset=+2] + +include::modules/nw-understanding-networking-pod-to-pod-example.adoc[leveloffset=+3] + +include::modules/nw-understanding-networking-service-to-pod.adoc[leveloffset=+2] + +include::modules/nw-understanding-networking-service-to-pod-example.adoc[leveloffset=+3] + +//Load balancing + +include::modules/nw-load-balancing-about.adoc[leveloffset=+1] + +include::modules/nw-load-balancing-configure.adoc[leveloffset=+2] + +include::modules/nw-load-balancing-configure-define-type.adoc[leveloffset=+3] + +include::modules/nw-load-balancing-configure-specify-behavior.adoc[leveloffset=+3] + +//DNS +include::modules/nw-understanding-networking-dns.adoc[leveloffset=+1] + +include::modules/nw-understanding-networking-dns-terms.adoc[leveloffset=+2] + +include::modules/nw-understanding-networking-dns-example.adoc[leveloffset=+2] + +//Controls +include::modules/nw-understanding-networking-controls.adoc[leveloffset=+1] + +//Routes and Ingress +include::modules/nw-understanding-networking-routes-ingress.adoc[leveloffset=+1] + +include::modules/nw-understanding-networking-routes.adoc[leveloffset=+2] + +include::modules/nw-understanding-networking-ingress.adoc[leveloffset=+2] + +include::modules/nw-understanding-networking-routes-vs-ingress.adoc[leveloffset=+2] + +include::modules/nw-understanding-networking-routes-ingress-example.adoc[leveloffset=+2] + +// Security +include::modules/nw-understanding-networking-security.adoc[leveloffset=+1] + +include::modules/nw-understanding-networking-exposing-applications.adoc[leveloffset=+2] + +include::modules/nw-understanding-networking-securing-connections.adoc[leveloffset=+2] + +include::modules/nw-understanding-networking-security-example.adoc[leveloffset=+2] + +include::modules/nw-understanding-networking-choosing-service-types.adoc[leveloffset=+2] \ No newline at end of file From b022e39048067ece2bd9d23140578e9c3ad2455c Mon Sep 17 00:00:00 2001 From: Elizabeth Hartman Date: Tue, 28 Jan 2025 13:59:20 -0500 Subject: [PATCH 147/669] Shared VPC doc updates. 
--- modules/rosa-sharing-vpc-dns-and-roles.adoc | 2 +- modules/rosa-sharing-vpc-hosted-zones.adoc | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/rosa-sharing-vpc-dns-and-roles.adoc b/modules/rosa-sharing-vpc-dns-and-roles.adoc index ae16900e7fb0..a5533ff26cd9 100644 --- a/modules/rosa-sharing-vpc-dns-and-roles.adoc +++ b/modules/rosa-sharing-vpc-dns-and-roles.adoc @@ -72,7 +72,7 @@ $ rosa create operator-roles --oidc-config-id <1> The Installer account role and the shared VPC role must have a one-to-one relationship. If you want to create multiple shared VPC roles, you should create one set of account roles per shared VPC role. ==== -. After you create the Operator roles, share the full domain name, which is created with `.`, your _Ingress Operator Cloud Credentials_ role's ARN, and your _Installer_ role's ARN with the *VPC Owner* to continue configuration. +. After you create the Operator roles, share the full domain name, which is created with `.`, your _Ingress Operator Cloud Credentials_ role's ARN, and your _Installer_ role's ARN with the *VPC Owner* to continue configuration. + The shared information resembles these examples: + diff --git a/modules/rosa-sharing-vpc-hosted-zones.adoc b/modules/rosa-sharing-vpc-hosted-zones.adoc index 1b2ce1901855..6673cf7e890f 100644 --- a/modules/rosa-sharing-vpc-hosted-zones.adoc +++ b/modules/rosa-sharing-vpc-hosted-zones.adoc @@ -39,7 +39,7 @@ image::372_OpenShift_on_AWS_persona_worflows_0923_3.png[] ] } ---- -. Create a private hosted zone in the link:https://us-east-1.console.aws.amazon.com/route53/v2/[Route 53 section of the AWS console]. In the hosted zone configuration, the domain name is `.`. The private hosted zone must be associated with the created VPC. +. Create a private hosted zone in the link:https://us-east-1.console.aws.amazon.com/route53/v2/[Route 53 section of the AWS console]. In the hosted zone configuration, the domain name is `.`. The private hosted zone must be associated with the created VPC. . 
After the hosted zone is created and associated with the VPC, provide the following to the *Cluster Creator* to continue configuration: * Hosted zone ID * AWS region From 36a2328e93d68c39593540c01dbff041accf5ac2 Mon Sep 17 00:00:00 2001 From: Andrea Hoffer Date: Mon, 3 Feb 2025 08:14:54 -0500 Subject: [PATCH 148/669] OCPBUGS#43809: Clarifying what's created for each service account --- ...erstanding-and-creating-service-accounts.adoc | 2 ++ modules/service-accounts-creating.adoc | 12 ++++++------ modules/service-accounts-granting-roles.adoc | 2 +- modules/service-accounts-overview.adoc | 16 ++++------------ 4 files changed, 13 insertions(+), 19 deletions(-) diff --git a/authentication/understanding-and-creating-service-accounts.adoc b/authentication/understanding-and-creating-service-accounts.adoc index ed1854708470..d0cec995b025 100644 --- a/authentication/understanding-and-creating-service-accounts.adoc +++ b/authentication/understanding-and-creating-service-accounts.adoc @@ -8,6 +8,8 @@ toc::[] include::modules/service-accounts-overview.adoc[leveloffset=+1] +include::modules/service-account-auto-secret-removed.adoc[leveloffset=+2] + // include::modules/service-accounts-enabling-authentication.adoc[leveloffset=+1] include::modules/service-accounts-creating.adoc[leveloffset=+1] diff --git a/modules/service-accounts-creating.adoc b/modules/service-accounts-creating.adoc index 78bdd4833421..1777bd46c5a0 100644 --- a/modules/service-accounts-creating.adoc +++ b/modules/service-accounts-creating.adoc @@ -22,9 +22,9 @@ $ oc get sa [source,terminal] ---- NAME SECRETS AGE -builder 2 2d -default 2 2d -deployer 2 2d +builder 1 2d +default 1 2d +deployer 1 2d ---- . To create a new service account in the current project: @@ -67,10 +67,10 @@ $ oc describe sa robot ---- Name: robot Namespace: project1 -Labels: -Annotations: +Labels: +Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb -Tokens: robot-token-f4khf +Tokens: Events: ---- diff --git a/modules/service-accounts-granting-roles.adoc b/modules/service-accounts-granting-roles.adoc index be123ea2213b..bcab0ad027c5 100644 --- a/modules/service-accounts-granting-roles.adoc +++ b/modules/service-accounts-granting-roles.adoc @@ -3,7 +3,7 @@ // * authentication/using-service-accounts.adoc [id="service-accounts-granting-roles_{context}"] -= Examples of granting roles to service accounts += Granting roles to service accounts You can grant roles to service accounts in the same way that you grant roles to a regular user account. diff --git a/modules/service-accounts-overview.adoc b/modules/service-accounts-overview.adoc index 3c330053edcc..c029e7a21a0c 100644 --- a/modules/service-accounts-overview.adoc +++ b/modules/service-accounts-overview.adoc @@ -15,11 +15,12 @@ When you use the {product-title} CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. ifdef::openshift-online,openshift-origin,openshift-enterprise,openshift-webscale[] + For example, service accounts can allow: -* Replication controllers to make API calls to create or delete pods. -* Applications inside containers to make API calls for discovery purposes. -* External applications to make API calls for monitoring or integration purposes. 
+* Replication controllers to make API calls to create or delete pods +* Applications inside containers to make API calls for discovery purposes +* External applications to make API calls for monitoring or integration purposes endif::[] Each service account's user name is derived from its project and name: @@ -45,12 +46,3 @@ Every service account is also a member of two groups: specified project. |=== - -Each service account automatically contains two secrets: - -* An API token -* Credentials for the OpenShift Container Registry - -The generated API token and registry credentials do not expire, but you can -revoke them by deleting the secret. When you delete the secret, a new one is -automatically generated to take its place. From ae6a194a1adfcc6b8bdaf8b94d59e35854554dae Mon Sep 17 00:00:00 2001 From: Sebastian Kopacz Date: Thu, 6 Feb 2025 13:09:15 -0500 Subject: [PATCH 149/669] fixing agent headings --- .../installing-with-agent-based-installer.adoc | 6 ++---- .../prepare-pxe-assets-agent.adoc | 2 +- 2 files changed, 3 insertions(+), 5 deletions(-) diff --git a/installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc b/installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc index c509670af643..a85abd00e6c6 100644 --- a/installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc +++ b/installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc @@ -8,16 +8,14 @@ toc::[] Use the following procedures to install an {product-title} cluster using the Agent-based Installer. -[id="prerequisites_installing-with-agent-based-installer_{context}"] +[id="prerequisites_{context}"] == Prerequisites * You reviewed details about the xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes. * You read the documentation on xref:../../installing/overview/installing-preparing.adoc#installing-preparing[selecting a cluster installation method and preparing it for users]. * If you use a firewall or proxy, you xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configured it to allow the sites] that your cluster requires access to. - -// This anchor ID is extracted/replicated from the former installing-ocp-agent.adoc module to preserve links. -[id="installing-ocp-agent_installing-with-agent-based-installer_{context}"] +[id="installing-ocp-agent_{context}"] == Installing {product-title} with the Agent-based Installer The following procedures deploy a single-node {product-title} in a disconnected environment. You can use these procedures as a basis and modify according to your requirements. diff --git a/installing/installing_with_agent_based_installer/prepare-pxe-assets-agent.adoc b/installing/installing_with_agent_based_installer/prepare-pxe-assets-agent.adoc index 5d3d28d01006..d64342f7a0ba 100644 --- a/installing/installing_with_agent_based_installer/prepare-pxe-assets-agent.adoc +++ b/installing/installing_with_agent_based_installer/prepare-pxe-assets-agent.adoc @@ -12,7 +12,7 @@ The assets you create in these procedures will deploy a single-node {product-tit See xref:../../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#installing-with-agent-based-installer[Installing an {product-title} cluster with the Agent-based Installer] to learn about more configurations available with the Agent-based Installer. 
-[id="prerequisites_prepare-pxe-assets-agent_{context}"] +[id="prerequisites_{context}"] == Prerequisites * You reviewed details about the xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes. From e8158f934cda27b43553b51e48a5ca475ae13088 Mon Sep 17 00:00:00 2001 From: Jeana Routh Date: Fri, 7 Feb 2025 09:21:23 -0500 Subject: [PATCH 150/669] change short attribute to first --- modules/rotating-bound-service-keys.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/rotating-bound-service-keys.adoc b/modules/rotating-bound-service-keys.adoc index 11ddf5e57559..f018e9aa5d35 100644 --- a/modules/rotating-bound-service-keys.adoc +++ b/modules/rotating-bound-service-keys.adoc @@ -25,7 +25,7 @@ ifdef::rotate-azure[on {azure-first}] is configured to operate in manual mode with ifdef::rotate-aws[{sts-short},] ifdef::rotate-gcp[{gcp-wid-short},] -ifdef::rotate-azure[{entra-short},] +ifdef::rotate-azure[{entra-first},] you can rotate the bound service account signer key. To rotate the key, you delete the existing key on your cluster, which causes the Kubernetes API server to create a new key. From 8661b1425f2c31f3f6a56b56c9943735dfb91f53 Mon Sep 17 00:00:00 2001 From: Kathryn Alexander Date: Fri, 31 Jan 2025 10:48:06 -0500 Subject: [PATCH 151/669] adding providing feedback topic --- _topic_maps/_topic_map.yml | 2 ++ ...ding-feedback-on-red-hat-documentation.adoc | 18 ++++++++++++++++++ 2 files changed, 20 insertions(+) create mode 100644 welcome/providing-feedback-on-red-hat-documentation.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 71a08c5a277c..811b1d304e87 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -31,6 +31,8 @@ Topics: File: index - Name: Learn more about OpenShift Container Platform File: learn_more_about_openshift +- Name: Providing documentation feedback + File: providing-feedback-on-red-hat-documentation Distros: openshift-enterprise - Name: About OpenShift Kubernetes Engine File: oke_about diff --git a/welcome/providing-feedback-on-red-hat-documentation.adoc b/welcome/providing-feedback-on-red-hat-documentation.adoc new file mode 100644 index 000000000000..1c5c8614ae61 --- /dev/null +++ b/welcome/providing-feedback-on-red-hat-documentation.adoc @@ -0,0 +1,18 @@ +:_mod-docs-content-type: ASSEMBLY +[id="providing-feedback-on-red-hat-documentation"] += Providing feedback on {product-title} documentation +include::_attributes/common-attributes.adoc[] +:context: providing-feedback-on-red-hat-documentation + +toc::[] + +To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. + +.Procedure + +. Click one of the following links: +** To create a link:https://issues.redhat.com/secure/CreateIssueDetails!init.jspa?pid=12332330&summary=Documentation_issue&issuetype=1&components=12367614&priority=10200&versions=12431396[Jira issue] for {product-title} +** To create a link:https://issues.redhat.com/secure/CreateIssueDetails!init.jspa?pid=12323181&issuetype=1&priority=10200[Jira issue] for {VirtProductName} +. Enter a brief description of the issue in the *Summary*. +. Provide a detailed description of the issue or enhancement in the *Description*. Include a URL to where the issue occurs in the documentation. +. Click *Create* to create the issue. 
\ No newline at end of file From a6995f802915f1b37329daebd6bd9464a836349d Mon Sep 17 00:00:00 2001 From: Apurva Bhide Date: Fri, 7 Feb 2025 13:01:53 +0530 Subject: [PATCH 152/669] OADP-4882: Update OADP version to 1.4.2 --- _attributes/common-attributes.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_attributes/common-attributes.adoc b/_attributes/common-attributes.adoc index c6a90c372fd3..ba9b387a0c14 100644 --- a/_attributes/common-attributes.adoc +++ b/_attributes/common-attributes.adoc @@ -44,7 +44,7 @@ endif::[] :oadp-first: OpenShift API for Data Protection (OADP) :oadp-full: OpenShift API for Data Protection :oadp-short: OADP -:oadp-version: 1.4.1 +:oadp-version: 1.4.2 :oadp-version-1-3: 1.3.3 :oadp-version-1-4: 1.4.2 :oadp-bsl-api: backupstoragelocations.velero.io From 6da36874ccbe419f819e3209cbb19bf3ce1c7f34 Mon Sep 17 00:00:00 2001 From: srir Date: Mon, 27 Jan 2025 11:34:44 +0530 Subject: [PATCH 153/669] TELCODOCS#2136: Deprecation of SiteConfig v1 --- edge_computing/ztp-advanced-install-ztp.adoc | 2 ++ ...ztp-creating-ztp-crs-for-multiple-managed-clusters.adoc | 2 ++ modules/ztp-deploying-a-site.adoc | 2 ++ .../ztp-generating-install-and-config-crs-manually.adoc | 2 ++ modules/ztp-precaching-ztp-config.adoc | 2 ++ snippets/siteconfig-deprecation-notice.adoc | 7 +++++++ 6 files changed, 17 insertions(+) create mode 100644 snippets/siteconfig-deprecation-notice.adoc diff --git a/edge_computing/ztp-advanced-install-ztp.adoc b/edge_computing/ztp-advanced-install-ztp.adoc index 0374aea69c97..b3d281b5aed1 100644 --- a/edge_computing/ztp-advanced-install-ztp.adoc +++ b/edge_computing/ztp-advanced-install-ztp.adoc @@ -8,6 +8,8 @@ toc::[] You can use `SiteConfig` custom resources (CRs) to deploy custom functionality and configurations in your managed clusters at installation time. +include::snippets/siteconfig-deprecation-notice.adoc[] + include::modules/ztp-customizing-the-install-extra-manifests.adoc[leveloffset=+1] include::modules/ztp-filtering-ai-crs-using-siteconfig.adoc[leveloffset=+1] diff --git a/modules/ztp-creating-ztp-crs-for-multiple-managed-clusters.adoc b/modules/ztp-creating-ztp-crs-for-multiple-managed-clusters.adoc index 7cda513cc763..0e74611b5a79 100644 --- a/modules/ztp-creating-ztp-crs-for-multiple-managed-clusters.adoc +++ b/modules/ztp-creating-ztp-crs-for-multiple-managed-clusters.adoc @@ -15,3 +15,5 @@ You can provision single clusters manually or in batches with {ztp}: Provisioning a single cluster:: Create a single `SiteConfig` CR and related installation and configuration CRs for the cluster, and apply them in the hub cluster to begin cluster provisioning. This is a good way to test your CRs before deploying on a larger scale. Provisioning many clusters:: Install managed clusters in batches of up to 400 by defining `SiteConfig` and related CRs in a Git repository. ArgoCD uses the `SiteConfig` CRs to deploy the sites. The {rh-rhacm} policy generator creates the manifests and applies them to the hub cluster. This starts the cluster provisioning process. + +include::snippets/siteconfig-deprecation-notice.adoc[] diff --git a/modules/ztp-deploying-a-site.adoc b/modules/ztp-deploying-a-site.adoc index a692477e8552..0f6ec7d633da 100644 --- a/modules/ztp-deploying-a-site.adoc +++ b/modules/ztp-deploying-a-site.adoc @@ -8,6 +8,8 @@ Use the following procedure to create a `SiteConfig` custom resource (CR) and related files and initiate the {ztp-first} cluster deployment. 
+include::snippets/siteconfig-deprecation-notice.adoc[] + .Prerequisites * You have installed the OpenShift CLI (`oc`). diff --git a/modules/ztp-generating-install-and-config-crs-manually.adoc b/modules/ztp-generating-install-and-config-crs-manually.adoc index fed72f454cf2..1110fb11f26c 100644 --- a/modules/ztp-generating-install-and-config-crs-manually.adoc +++ b/modules/ztp-generating-install-and-config-crs-manually.adoc @@ -8,6 +8,8 @@ Use the `generator` entrypoint for the `ztp-site-generate` container to generate the site installation and configuration custom resource (CRs) for a cluster based on `SiteConfig` and `{policy-gen-cr}` CRs. +include::snippets/siteconfig-deprecation-notice.adoc[] + .Prerequisites * You have installed the OpenShift CLI (`oc`). diff --git a/modules/ztp-precaching-ztp-config.adoc b/modules/ztp-precaching-ztp-config.adoc index a84b72d6f7c8..2f90c86e4cf9 100644 --- a/modules/ztp-precaching-ztp-config.adoc +++ b/modules/ztp-precaching-ztp-config.adoc @@ -13,6 +13,8 @@ In the {ztp-first} provisioning workflow, the {factory-prestaging-tool} requires * `nodes.installerArgs` * `nodes.ignitionConfigOverride` +include::snippets/siteconfig-deprecation-notice.adoc[] + .Example SiteConfig with additional fields [source,yaml] ---- diff --git a/snippets/siteconfig-deprecation-notice.adoc b/snippets/siteconfig-deprecation-notice.adoc new file mode 100644 index 000000000000..bd1352eff21f --- /dev/null +++ b/snippets/siteconfig-deprecation-notice.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: SNIPPET +[IMPORTANT] +==== +SiteConfig v1 is deprecated starting with {product-title} version 4.18. Equivalent and improved functionality is now available through the SiteConfig Operator using the `ClusterInstance` custom resource. For more information, see link:https://access.redhat.com/articles/7105238[Procedure to transition from SiteConfig CRs to the ClusterInstance API]. + +For more information about the SiteConfig Operator, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/multicluster_engine_operator_with_red_hat_advanced_cluster_management/index#siteconfig-intro[SiteConfig]. +==== From a553267a79f3cd8b17c838be7b0dd0cc00ed01e3 Mon Sep 17 00:00:00 2001 From: dfitzmau Date: Wed, 5 Feb 2025 10:43:16 +0000 Subject: [PATCH 154/669] OCPBUGS-49859: Added br-ex note to the networking route doc --- modules/virt-example-nmstate-IP-management.adoc | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/modules/virt-example-nmstate-IP-management.adoc b/modules/virt-example-nmstate-IP-management.adoc index 669d545a026f..9269d7b66682 100644 --- a/modules/virt-example-nmstate-IP-management.adoc +++ b/modules/virt-example-nmstate-IP-management.adoc @@ -103,7 +103,7 @@ To define a DNS configuration for a network interface, you must initially specif [IMPORTANT] ==== -You cannot use `br-ex` bridge, an OVNKubernetes-managed Open vSwitch bridge, as the interface when configuring DNS resolvers unless you manually configured a customized `br-ex` bridge. +You cannot use the `br-ex` bridge, an OVN-Kubernetes-managed Open vSwitch bridge, as the interface when configuring DNS resolvers unless you manually configured a customized `br-ex` bridge. For more information, see "Creating a manifest object that includes a customized br-ex bridge" in the _Deploying installer-provisioned clusters on bare metal_ document or the _Installing a user-provisioned cluster on bare metal_ document. ==== @@ -285,4 +285,11 @@ routes: # ... 
---- <1> The static IP address for the Ethernet interface. -<2> Next hop address for the node traffic. This must be in the same subnet as the IP address set for the Ethernet interface. +<2> The next hop address for the node traffic. This must be in the same subnet as the IP address set for the Ethernet interface. + +[IMPORTANT] +==== +You cannot use the OVN-Kubernetes `br-ex` bridge as the next hop interface when configuring a static route unless you manually configured a customized `br-ex` bridge. + +For more information, see "Creating a manifest object that includes a customized br-ex bridge" in the _Deploying installer-provisioned clusters on bare metal_ document or the _Installing a user-provisioned cluster on bare metal_ document. +==== From 014316e70151ff8470834f7dc889a61ed4051b99 Mon Sep 17 00:00:00 2001 From: Shane Lovern Date: Thu, 6 Feb 2025 14:59:11 +0000 Subject: [PATCH 155/669] TELCODOCS-1871 - Corrected typo - Configuring an RDMA subsystem for SR-IOV --- _topic_maps/_topic_map.yml | 2 +- networking/hardware_networks/configuring-sriov-rdma-cni.adoc | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 811b1d304e87..d3a79c36354b 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -1568,7 +1568,7 @@ Topics: File: configuring-sriov-net-attach - Name: Configuring an SR-IOV InfiniBand network attachment File: configuring-sriov-ib-attach - - Name: Configuring an RDMA subsytem for SR-IOV + - Name: Configuring an RDMA subsystem for SR-IOV File: configuring-sriov-rdma-cni - Name: Adding a pod to an SR-IOV network File: add-pod diff --git a/networking/hardware_networks/configuring-sriov-rdma-cni.adoc b/networking/hardware_networks/configuring-sriov-rdma-cni.adoc index 3c2ae3dd8e12..8168fdacb9f8 100644 --- a/networking/hardware_networks/configuring-sriov-rdma-cni.adoc +++ b/networking/hardware_networks/configuring-sriov-rdma-cni.adoc @@ -1,6 +1,6 @@ :_mod-docs-content-type: ASSEMBLY [id="configuring-sriov-rdma-cni"] -= Configuring an RDMA subsytem for SR-IOV += Configuring an RDMA subsystem for SR-IOV include::_attributes/common-attributes.adoc[] :context: configuring-sriov-rdma-cni From 41f97ceb462e388cd8633436fc94912702567539 Mon Sep 17 00:00:00 2001 From: EricPonvelle Date: Mon, 16 Sep 2024 17:53:38 -0400 Subject: [PATCH 156/669] OSDOCS-11789 ROSA HCP/Classic split: Prepare your environment - Including changes from OSDOCS-11640 by cherry-picking in 16db23b - Rebased against main following merge of rosa_hcp_migration branch - Corrected missing 500 node max support limit - Applied peer and merge review feedback Squashed: 1 - Intial commit for the ROSA with HCP branch 2 - Adding the Upgrading HCP cherrypick 3 - Adding the Security HCP cherrypick 4 - Upgrading ROSA with HCP updates 5 - Updated the HCP migration to include the ROSA Tutorals and Learning sections 6 - Updated the HCP migration to add the rest of the books from the password protected preview 7 - Repaired the links in Introduction to ROSA book 8 - classic to hcp migration topic maps update commented in the end of section in topic map applied QE suggestions from gdoc applied more QE suggestions from gdoc applied conditions for new hcp distro to assemblies and modules fixed typo on line 13 of configuring registry operator replaced namespace as suggested by QE removed operator pod list removed space in rosa topic maps removed spacing in line 39 of checking status of pods --- _topic_maps/_topic_map_rosa.yml | 6 +- 
_topic_maps/_topic_map_rosa_hcp.yml | 459 ++++-------------- canary.txt | 0 .../rosa-cli-permission-examples.adoc | 9 + .../cloud-experts-custom-dns-resolver.adoc | 5 + ...-experts-getting-started-what-is-rosa.adoc | 17 +- .../cloud-experts-rosa-sts-explained.adoc | 6 +- ...obb-verify-permissions-sts-deployment.adoc | 2 +- ...ted-aws-vpc-verifying-troubleshooting.adoc | 4 + modules/dedicated-aws-vpn-verifying.adoc | 4 + .../mos-network-prereqs-min-bandwidth.adoc | 9 +- modules/rosa-aws-provisioned.adoc | 111 +++-- modules/rosa-create-objects.adoc | 54 +-- modules/rosa-delete-objects.adoc | 16 +- modules/rosa-edit-objects.adoc | 54 +-- ...g-started-install-configure-cli-tools.adoc | 127 +++-- ...g-account-wide-sts-roles-and-policies.adoc | 2 + modules/rosa-hcp-firewall-prerequisites.adoc | 71 +-- modules/rosa-hcp-vpc-manual.adoc | 26 +- modules/rosa-list-objects.adoc | 16 +- modules/rosa-oidc-understanding.adoc | 8 +- ...rosa-planning-environment-cluster-max.adoc | 2 +- modules/rosa-prereq-roles-overview.adoc | 52 ++ modules/rosa-required-aws-service-quotas.adoc | 68 ++- modules/rosa-sdpolicy-platform.adoc | 118 ++--- .../rosa-sts-aws-requirements-access-req.adoc | 11 - .../rosa-sts-aws-requirements-account.adoc | 24 +- ...-aws-requirements-association-concept.adoc | 4 +- ...equirements-attaching-boundary-policy.adoc | 2 +- ...aws-requirements-creating-association.adoc | 7 +- modules/rosa-sts-aws-requirements-ocm.adoc | 4 +- ...osa-sts-aws-requirements-security-req.adoc | 3 +- modules/rosa-sts-byo-oidc.adoc | 21 +- modules/rosa-sts-ocm-role-creation.adoc | 4 +- modules/rosa-sts-oidc-provider-command.adoc | 2 +- modules/rosa-sts-operator-roles.adoc | 1 + modules/rosa-sts-setting-up-environment.adoc | 1 - modules/rosa-sts-user-role-creation.adoc | 4 +- modules/sd-hcp-planning-cluster-maximums.adoc | 4 - modules/sre-cluster-access.adoc | 2 +- nodes/index.adoc | 13 +- ocm/ocm-overview.adoc | 2 +- rosa_architecture/about-hcp.adoc | 11 +- .../cloud-experts-rosa-hcp-sts-explained.adoc | 22 +- .../rosa-sts-about-iam-resources.adoc | 94 +++- rosa_architecture/rosa-understanding.adoc | 1 + .../rosa-hcp-instance-types.adoc | 3 +- .../rosa-hcp-service-definition.adoc | 8 +- .../rosa-service-definition.adoc | 11 +- .../rosa-sre-access.adoc | 6 +- .../rosa-cluster-notifications.adoc | 4 + rosa_hcp/rosa-hcp-deleting-cluster.adoc | 12 +- ...sa-hcp-sts-creating-a-cluster-quickly.adoc | 1 + .../rosa-shared-vpc-config.adoc | 9 +- .../rosa-sts-deleting-cluster.adoc | 2 +- .../rosa-cloud-expert-prereq-checklist.adoc | 236 +++++---- rosa_planning/rosa-hcp-iam-resources.adoc | 1 + .../rosa-hcp-prepare-iam-roles-resources.adoc | 53 ++ rosa_planning/rosa-hcp-prereqs.adoc | 86 ---- rosa_planning/rosa-planning-environment.adoc | 3 +- rosa_planning/rosa-sts-aws-prereqs.adoc | 102 ++-- rosa_planning/rosa-sts-ocm-role.adoc | 74 +-- .../rosa-sts-required-aws-service-quotas.adoc | 7 +- .../rosa-sts-setting-up-environment.adoc | 35 +- rosa_release_notes/rosa-release-notes.adoc | 12 +- security/audit-log-view.adoc | 5 +- snippets/rosa-existing-vpc-requirements.adoc | 28 ++ snippets/rosa-hcp-rn.adoc | 2 +- snippets/rosa-sts.adoc | 2 +- .../rosa-troubleshooting-iam-resources.adoc | 12 + upgrading/rosa-hcp-upgrading.adoc | 42 +- .../cloud-experts-rosa-hcp-sts-explained.adoc | 25 +- 72 files changed, 1137 insertions(+), 1127 deletions(-) create mode 100644 canary.txt create mode 100644 modules/rosa-prereq-roles-overview.adoc delete mode 100644 modules/rosa-sts-aws-requirements-access-req.adoc create mode 120000 
rosa_planning/rosa-hcp-iam-resources.adoc create mode 100644 rosa_planning/rosa-hcp-prepare-iam-roles-resources.adoc delete mode 100644 rosa_planning/rosa-hcp-prereqs.adoc create mode 100644 snippets/rosa-existing-vpc-requirements.adoc diff --git a/_topic_maps/_topic_map_rosa.yml b/_topic_maps/_topic_map_rosa.yml index 6c94b98ef33e..c2dbc485c194 100644 --- a/_topic_maps/_topic_map_rosa.yml +++ b/_topic_maps/_topic_map_rosa.yml @@ -77,7 +77,7 @@ Topics: # - Name: About admission plugins # File: rosa-admission-plug-ins # Distros: openshift-rosa -- Name: About IAM resources for ROSA with STS +- Name: About IAM resources for STS clusters File: rosa-sts-about-iam-resources - Name: OpenID Connect Overview File: rosa-oidc-overview @@ -240,7 +240,7 @@ Topics: File: rosa-limits-scalability - Name: ROSA with HCP limits and scalability File: rosa-hcp-limits-scalability -- Name: Planning your environment +- Name: Planning resource usage in your cluster File: rosa-planning-environment - Name: Required AWS service quotas File: rosa-sts-required-aws-service-quotas @@ -897,8 +897,6 @@ Topics: File: configuring-registry-operator - Name: Accessing the registry File: accessing-the-registry -# - Name: Exposing the registry -# File: securing-exposing-registry --- Name: Operators Dir: operators diff --git a/_topic_maps/_topic_map_rosa_hcp.yml b/_topic_maps/_topic_map_rosa_hcp.yml index 74859cb3191d..62bcc2e717dc 100644 --- a/_topic_maps/_topic_map_rosa_hcp.yml +++ b/_topic_maps/_topic_map_rosa_hcp.yml @@ -60,6 +60,10 @@ Topics: File: rosa-sre-access - Name: Understanding security for ROSA File: rosa-policy-process-security +#Temporarily included the following to keep working through xref errors +- Name: About IAM resources + File: rosa-sts-about-iam-resources + Distros: openshift-rosa-hcp --- Name: Learning about ROSA Dir: rosa_learning @@ -110,25 +114,6 @@ Topics: File: learning-deploying-application-s2i-deployments - Name: Using Source-to-Image (S2I) webhooks for automated deployment File: learning-deploying-s2i-webhook-cicd -# --- -# Name: Architecture -# Dir: architecture -# Distros: openshift-rosa-hcp -# Topics: -# - Name: Architecture overview -# File: index -# - Name: Product architecture -# File: architecture -# - Name: Architecture models -# File: rosa-architecture-models -# - Name: Control plane architecture -# File: control-plane -# - Name: NVIDIA GPU architecture overview -# File: nvidia-gpu-architecture-overview -# - Name: Understanding OpenShift development -# File: understanding-development -# - Name: Admission plugins -# File: admission-plug-ins --- Name: Tutorials Dir: cloud_experts_tutorials @@ -140,28 +125,8 @@ Topics: File: cloud-experts-rosa-hcp-activation-and-account-linking-tutorial - Name: ROSA with HCP private offer acceptance and sharing File: cloud-experts-rosa-with-hcp-private-offer-acceptance-and-sharing -# - Name: Verifying Permissions for a ROSA STS Deployment -# File: rosa-mobb-verify-permissions-sts-deployment -# - Name: Using AWS WAF and Amazon CloudFront to protect ROSA workloads -# File: cloud-experts-using-cloudfront-and-waf -# - Name: Using AWS WAF and AWS ALBs to protect ROSA workloads -# File: cloud-experts-using-alb-and-waf -# - Name: Deploying OpenShift API for Data Protection on a ROSA cluster -# File: cloud-experts-deploy-api-data-protection -# - Name: AWS Load Balancer Operator on ROSA -# File: cloud-experts-aws-load-balancer-operator - Name: Configuring Microsoft Entra ID (formerly Azure Active Directory) as an identity provider File: cloud-experts-entra-id-idp -# - 
Name: Using AWS Secrets Manager CSI on ROSA with STS -# File: cloud-experts-aws-secret-manager -# - Name: Using AWS Controllers for Kubernetes on ROSA -# File: cloud-experts-using-aws-ack -# - Name: Deploying the External DNS Operator on ROSA -# File: cloud-experts-external-dns -# - Name: Dynamically issuing certificates using the cert-manager Operator on ROSA -# File: cloud-experts-dynamic-certificate-custom-domain -# - Name: Assigning consistent egress IP for external traffic -# File: cloud-experts-consistent-egress-ip # --- # Name: Getting started # Dir: rosa_getting_started @@ -173,27 +138,25 @@ Topics: # File: rosa-getting-started # - Name: Understanding the ROSA with STS deployment workflow # File: rosa-sts-getting-started-workflow -# --- -# Name: Prepare your environment -# Dir: rosa_planning -# Distros: openshift-rosa-hcp -# Topics: -# - Name: Prerequisites checklist for deploying ROSA using STS -# File: rosa-cloud-expert-prereq-checklist -# - Name: Detailed requirements for deploying ROSA using STS -# File: rosa-sts-aws-prereqs -# - Name: ROSA IAM role resources -# File: rosa-sts-ocm-role -# - Name: Limits and scalability -# File: rosa-limits-scalability -#- Name: ROSA with HCP limits and scalability -# File: rosa-hcp-limits-scalability -# - Name: Planning your environment -# File: rosa-planning-environment -# - Name: Required AWS service quotas -# File: rosa-sts-required-aws-service-quotas -# - Name: Setting up your environment -# File: rosa-sts-setting-up-environment +--- +Name: Prepare your environment +Dir: rosa_planning +Distros: openshift-rosa-hcp +Topics: +- Name: Prerequisites checklist for deploying ROSA with HCP + File: rosa-cloud-expert-prereq-checklist +- Name: Detailed requirements for deploying ROSA with HCP + File: rosa-sts-aws-prereqs +- Name: Required IAM roles and resources + File: rosa-hcp-prepare-iam-roles-resources +- Name: ROSA with HCP limits and scalability + File: rosa-hcp-limits-scalability +- Name: Required AWS service quotas + File: rosa-sts-required-aws-service-quotas +- Name: Setting up your environment + File: rosa-sts-setting-up-environment +- Name: Planning resource usage in your cluster + File: rosa-planning-environment # - Name: Preparing Terraform to install ROSA clusters # File: rosa-understanding-terraform --- @@ -266,6 +229,76 @@ Topics: # File: ocm-overview # - Name: Using the OpenShift web console # File: rosa-using-openshift-console +# OSDOCS-11789: Adding the minimum chapters of support and troubleshooting +# docs needed to ensure that xrefs in "Planning your environment" work; +# omit as required by further HCP migration work. 
+--- +Name: Support +Dir: support +Distros: openshift-rosa-hcp +Topics: +# - Name: Support overview +# File: index +# - Name: Managing your cluster resources +# File: managing-cluster-resources +# - Name: Approved Access +# File: approved-access +# - Name: Getting support +# File: getting-support +# Distros: openshift-rosa-hcp +# - Name: Remote health monitoring with connected clusters +# Dir: remote_health_monitoring +# Distros: openshift-rosa-hcp +# Topics: +# - Name: About remote health monitoring +# File: about-remote-health-monitoring +# - Name: Showing data collected by remote health monitoring +# File: showing-data-collected-by-remote-health-monitoring +# - Name: Using Insights to identify issues with your cluster +# File: using-insights-to-identify-issues-with-your-cluster +# - Name: Using Insights Operator +# File: using-insights-operator +# - Name: Gathering data about your cluster +# File: gathering-cluster-data +# Distros: openshift-rosa-hcp +# - Name: Summarizing cluster specifications +# File: summarizing-cluster-specifications +# Distros: openshift-rosa-hcp +- Name: Troubleshooting + Dir: troubleshooting + Distros: openshift-rosa-hcp + Topics: + - Name: Troubleshooting ROSA installations + File: rosa-troubleshooting-installations + - Name: Troubleshooting networking + File: rosa-troubleshooting-networking + - Name: Troubleshooting IAM roles + File: rosa-troubleshooting-iam-resources + Distros: openshift-rosa-hcp + - Name: Troubleshooting cluster deployments + File: rosa-troubleshooting-deployments + Distros: openshift-rosa-hcp + - Name: Red Hat OpenShift Service on AWS managed resources + File: sd-managed-resources + Distros: openshift-rosa-hcp +--- +# OSDOCS-11789: Adding the minimum chapters of CLI doc needed +# to ensure that xrefs in "Planning your environment" work; +# @BM feel free to alter as needed +Name: CLI tools +Dir: cli_reference +Distros: openshift-rosa-hcp +Topics: +- Name: OpenShift CLI (oc) + Dir: openshift_cli + Topics: + - Name: Getting started with the OpenShift CLI + File: getting-started-cli +- Name: ROSA CLI + Dir: rosa_cli + Topics: + - Name: Least privilege permissions for ROSA CLI commands + File: rosa-cli-permission-examples --- Name: Cluster administration Dir: rosa_cluster_admin @@ -309,90 +342,6 @@ Distros: openshift-rosa-hcp Topics: - Name: Adding additional constraints for IP-based AWS role assumption File: rosa-adding-additional-constraints-for-ip-based-aws-role-assumption -# --- -# - Name: Security -# File: rosa-security -# - Name: Application and cluster compliance -# File: rosa-app-security-compliance -# --- -# Name: Authentication and authorization -# Dir: authentication -# Distros: openshift-rosa-hcp -# Topics: -# - Name: Authentication and authorization overview -# File: index -# - Name: Understanding authentication -# File: understanding-authentication -# - Name: Configuring the internal OAuth server -# File: configuring-internal-oauth -# - Name: Configuring OAuth clients -# File: configuring-oauth-clients -# - Name: Managing user-owned OAuth access tokens -# File: managing-oauth-access-tokens -# - Name: Understanding identity provider configuration -# File: understanding-identity-provider -# - Name: Configuring identity providers -# File: sd-configuring-identity-providers -# - Name: Configuring identity providers -# Dir: identity_providers -# Topics: -# - Name: Configuring an htpasswd identity provider -# File: configuring-htpasswd-identity-provider -# - Name: Configuring a Keystone identity provider -# File: 
configuring-keystone-identity-provider -# - Name: Configuring an LDAP identity provider -# File: configuring-ldap-identity-provider -# - Name: Configuring a basic authentication identity provider -# File: configuring-basic-authentication-identity-provider -# - Name: Configuring a request header identity provider -# File: configuring-request-header-identity-provider -# - Name: Configuring a GitHub or GitHub Enterprise identity provider -# File: configuring-github-identity-provider -# - Name: Configuring a GitLab identity provider -# File: configuring-gitlab-identity-provider -# - Name: Configuring a Google identity provider -# File: configuring-google-identity-provider -# - Name: Configuring an OpenID Connect identity provider -# File: configuring-oidc-identity-provider -# - Name: Using RBAC to define and apply permissions -# File: using-rbac -# - Name: Removing the kubeadmin user -# File: remove-kubeadmin -# - Name: Configuring LDAP failover -# File: configuring-ldap-failover -# - Name: Understanding and creating service accounts -# File: understanding-and-creating-service-accounts -# - Name: Using service accounts in applications -# File: using-service-accounts-in-applications -# - Name: Using a service account as an OAuth client -# File: using-service-accounts-as-oauth-client -# - Name: Assuming an AWS IAM role for a service account -# File: assuming-an-aws-iam-role-for-a-service-account -# - Name: Scoping tokens -# File: tokens-scoping -# - Name: Using bound service account tokens -# File: bound-service-account-tokens -# - Name: Managing security context constraints -# File: managing-security-context-constraints -# - Name: Understanding and managing pod security admission -# File: understanding-and-managing-pod-security-admission -# - Name: Impersonating the system:admin user -# File: impersonating-system-admin -# - Name: Syncing LDAP groups -# File: ldap-syncing -# - Name: Managing cloud provider credentials -# Dir: managing_cloud_provider_credentials -# Topics: -# - Name: About the Cloud Credential Operator -# File: about-cloud-credential-operator -# - Name: Mint mode -# File: cco-mode-mint -# - Name: Passthrough mode -# File: cco-mode-passthrough -# - Name: Manual mode with long-term credentials for components -# File: cco-mode-manual -# - Name: Manual mode with short-term credentials for components -# File: cco-short-term-creds --- Name: Upgrading Dir: upgrading @@ -717,124 +666,6 @@ Topics: # File: nodes-secondary-scheduler-configuring # - Name: Uninstalling the Secondary Scheduler Operator # File: nodes-secondary-scheduler-uninstalling -# - Name: Using Jobs and DaemonSets -# Dir: jobs -# Topics: -# - Name: Running background tasks on nodes automatically with daemonsets -# File: nodes-pods-daemonsets -# Distros: openshift-rosa-hcp -# - Name: Running tasks in pods using jobs -# File: nodes-nodes-jobs -# - Name: Working with nodes -# Dir: nodes -# Distros: openshift-rosa-hcp -# Topics: -# - Name: Viewing and listing the nodes in your cluster -# File: nodes-nodes-viewing -# cannot use oc adm cordon; cannot patch resource "machinesets"; cannot patch resource "nodes" -# - Name: Working with nodes -# File: nodes-nodes-working -# cannot create resource "kubeletconfigs", "schedulers", "machineconfigs", "kubeletconfigs" -# - Name: Managing nodes -# File: nodes-nodes-managing -# cannot create resource "kubeletconfigs" -# - Name: Managing graceful node shutdown -# File: nodes-nodes-graceful-shutdown -# cannot create resource "kubeletconfigs" -# - Name: Managing the maximum number of pods per 
node -# File: nodes-nodes-managing-max-pods -# - Name: Using the Node Tuning Operator -# File: nodes-node-tuning-operator -# - Name: Remediating, fencing, and maintaining nodes -# File: nodes-remediating-fencing-maintaining-rhwa -# Cannot create namespace needed to oc debug and reboot; revisit after Operator book converted -# - Name: Understanding node rebooting -# File: nodes-nodes-rebooting -# cannot create resource "kubeletconfigs" -# - Name: Freeing node resources using garbage collection -# File: nodes-nodes-garbage-collection -# cannot create resource "kubeletconfigs" -# - Name: Allocating resources for nodes -# File: nodes-nodes-resources-configuring -# cannot create resource "kubeletconfigs" -# - Name: Allocating specific CPUs for nodes in a cluster -# File: nodes-nodes-resources-cpus -# cannot create resource "kubeletconfigs" -# - Name: Configuring the TLS security profile for the kubelet -# File: nodes-nodes-tls -# Distros: openshift-rosa-hcp -# - Name: Monitoring for problems in your nodes -# File: nodes-nodes-problem-detector -# - Name: Machine Config Daemon metrics -# File: nodes-nodes-machine-config-daemon-metrics -# cannot patch resource "nodes" -# - Name: Creating infrastructure nodes -# File: nodes-nodes-creating-infrastructure-nodes -# - Name: Working with containers -# Dir: containers -# Topics: -# - Name: Understanding containers -# File: nodes-containers-using -# - Name: Using Init Containers to perform tasks before a pod is deployed -# File: nodes-containers-init -# Distros: openshift-rosa-hcp -# - Name: Using volumes to persist container data -# File: nodes-containers-volumes -# - Name: Mapping volumes using projected volumes -# File: nodes-containers-projected-volumes -# - Name: Allowing containers to consume API objects -# File: nodes-containers-downward-api -# - Name: Copying files to or from a container -# File: nodes-containers-copying-files -# - Name: Executing remote commands in a container -# File: nodes-containers-remote-commands -# - Name: Using port forwarding to access applications in a container -# File: nodes-containers-port-forwarding -# cannot patch resource "configmaps" -# - Name: Using sysctls in containers -# File: nodes-containers-sysctls -# - Name: Working with clusters -# Dir: clusters -# Topics: -# - Name: Viewing system event information in a cluster -# File: nodes-containers-events -# - Name: Analyzing cluster resource levels -# File: nodes-cluster-resource-levels -# Distros: openshift-rosa-hcp -# - Name: Setting limit ranges -# File: nodes-cluster-limit-ranges -# - Name: Configuring cluster memory to meet container memory and risk requirements -# File: nodes-cluster-resource-configure -# Distros: openshift-rosa-hcp -# - Name: Configuring your cluster to place pods on overcommited nodes -# File: nodes-cluster-overcommit -# Distros: openshift-rosa-hcp -# - Name: Configuring the Linux cgroup version on your nodes -# File: nodes-cluster-cgroups-2 -# Distros: openshift-enterprise -# - Name: Configuring the Linux cgroup version on your nodes -# File: nodes-cluster-cgroups-okd -# Distros: openshift-origin -# The TechPreviewNoUpgrade Feature Gate is not allowed -# - Name: Enabling features using FeatureGates -# File: nodes-cluster-enabling-features -# Distros: openshift-rosa-hcp -# Error: nodes.config.openshift.io "cluster" could not be patched -# - Name: Improving cluster stability in high latency environments using worker latency profiles -# File: nodes-cluster-worker-latency-profiles -# Not supported per Michael McNeill -# - Name: Remote worker 
nodes on the network edge -# Dir: edge -# Topics: -# - Name: Using remote worker node at the network edge -# File: nodes-edge-remote-workers -# Not supported per Michael McNeill -# - Name: Worker nodes for single-node OpenShift clusters -# Dir: nodes -# Distros: openshift-rosa-hcp -# Topics: -# - Name: Adding worker nodes to single-node OpenShift clusters -# File: nodes-sno-worker-nodes - Name: Using jobs and daemon sets Dir: jobs Topics: @@ -945,107 +776,3 @@ Topics: # Topics: # - Name: Adding worker nodes to single-node OpenShift clusters # File: nodes-sno-worker-nodes ---- -Name: Service Mesh -Dir: service_mesh -Distros: openshift-rosa-hcp -Topics: -# Tech Preview -# - Name: Service Mesh 3.x -# Dir: v3x -# Topics: -# - Name: OpenShift Service Mesh 3.0 TP1 overview -# File: ossm-service-mesh-3-0-overview -- Name: Service Mesh 2.x - Dir: v2x - Topics: - - Name: About OpenShift Service Mesh - File: ossm-about - - Name: Service Mesh 2.x release notes - File: servicemesh-release-notes - - Name: Service Mesh architecture - File: ossm-architecture - - Name: Service Mesh deployment models - File: ossm-deployment-models - - Name: Service Mesh and Istio differences - File: ossm-vs-community - - Name: Preparing to install Service Mesh - File: preparing-ossm-installation - - Name: Installing the Operators - File: installing-ossm - - Name: Creating the ServiceMeshControlPlane - File: ossm-create-smcp - - Name: Adding workloads to a service mesh - File: ossm-create-mesh - - Name: Enabling sidecar injection - File: prepare-to-deploy-applications-ossm - - Name: Upgrading Service Mesh - File: upgrading-ossm - - Name: Managing users and profiles - File: ossm-profiles-users - - Name: Security - File: ossm-security - - Name: Traffic management - File: ossm-traffic-manage - - Name: Metrics, logs, and traces - File: ossm-observability - - Name: Performance and scalability - File: ossm-performance-scalability - - Name: Deploying to production - File: ossm-deploy-production - - Name: Federation - File: ossm-federation - - Name: Extensions - File: ossm-extensions - - Name: 3scale WebAssembly for 2.1 - File: ossm-threescale-webassembly-module - - Name: 3scale Istio adapter for 2.0 - File: threescale-adapter - - Name: Troubleshooting Service Mesh - File: ossm-troubleshooting-istio - - Name: Control plane configuration reference - File: ossm-reference-smcp - - Name: Kiali configuration reference - File: ossm-reference-kiali - - Name: Jaeger configuration reference - File: ossm-reference-jaeger - - Name: Uninstalling Service Mesh - File: removing-ossm -# Service Mesh 1.x is tech preview -# - Name: Service Mesh 1.x -# Dir: v1x -# Topics: -# - Name: Service Mesh 1.x release notes -# File: servicemesh-release-notes -# - Name: Service Mesh architecture -# File: ossm-architecture -# - Name: Service Mesh and Istio differences -# File: ossm-vs-community -# - Name: Preparing to install Service Mesh -# File: preparing-ossm-installation -# - Name: Installing Service Mesh -# File: installing-ossm -# - Name: Security -# File: ossm-security -# - Name: Traffic management -# File: ossm-traffic-manage -# - Name: Deploying applications on Service Mesh -# File: prepare-to-deploy-applications-ossm -# - Name: Data visualization and observability -# File: ossm-observability -# - Name: Custom resources -# File: ossm-custom-resources -# - Name: 3scale Istio adapter for 1.x -# File: threescale-adapter -# - Name: Removing Service Mesh -# File: removing-ossm ---- -Name: Serverless -Dir: serverless -Distros: openshift-rosa-hcp -Topics: -- 
Name: About Serverless - Dir: about - Topics: - - Name: Serverless overview - File: about-serverless diff --git a/canary.txt b/canary.txt new file mode 100644 index 000000000000..e69de29bb2d1 diff --git a/cli_reference/rosa_cli/rosa-cli-permission-examples.adoc b/cli_reference/rosa_cli/rosa-cli-permission-examples.adoc index 3328d473376c..8136e2bc740b 100644 --- a/cli_reference/rosa_cli/rosa-cli-permission-examples.adoc +++ b/cli_reference/rosa_cli/rosa-cli-permission-examples.adoc @@ -12,16 +12,25 @@ You can create roles with permissions that adhere to the principal of least priv Although the policies and commands presented in this topic will work in conjunction with one another, you might have other restrictions within your AWS environment that make the policies for these commands insufficient for your specific needs. Red{nbsp}Hat provides these examples as a baseline, assuming no other AWS Identity and Access Management (IAM) restrictions are present. ==== +// Omitting from HCP build until BM gets to review +ifdef::temp-ifdef[] [NOTE] ==== The examples listed cover several of the most common ROSA CLI commands. For more information regarding ROSA CLI commands, see xref:../../cli_reference/rosa_cli/rosa-manage-objects-cli.adoc#rosa-common-commands_rosa-managing-objects-cli[Common commands and arguments]. ==== +endif::[] For more information about configuring permissions, policies, and roles in the AWS console, see link:https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html[AWS Identity and Access Management] in the AWS documentation. include::modules/rosa-cli-hcp-classic-examples.adoc[leveloffset=+1] + +ifdef::temp-ifdef[] include::modules/rosa-cli-hcp-examples.adoc[leveloffset=+1] +endif::[] +ifdef::temp-ifdef[] include::modules/rosa-cli-classic-examples.adoc[leveloffset=+1] +endif::[] + include::modules/rosa-cli-no-permissions-required.adoc[leveloffset=+1] [role="_additional-resources"] diff --git a/cloud_experts_tutorials/cloud-experts-custom-dns-resolver.adoc b/cloud_experts_tutorials/cloud-experts-custom-dns-resolver.adoc index ce4df8e6cfd7..74b5903457a3 100644 --- a/cloud_experts_tutorials/cloud-experts-custom-dns-resolver.adoc +++ b/cloud_experts_tutorials/cloud-experts-custom-dns-resolver.adoc @@ -207,7 +207,12 @@ ROSA Classic clusters require you to configure DNS forwarding for one private ho This Amazon Route 53 private hosted zones is created during cluster creation. The `domain-prefix` is a customer-specified value, but the `unique-ID` is randomly generated during cluster creation and cannot be preselected. As such, you must wait for the cluster creation process to begin before configuring forwarding for the `p1.openshiftapps.com` private hosted zone. +ifdef::temp-ifdef[] . xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-account-wide-sts-roles-and-policies_rosa-sts-creating-a-cluster-quickly[Create your cluster]. +endif::[] +ifdef::temp-ifdef[] +* xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Create your cluster]. +endif::[] + . 
Once your cluster has begun the creation process, locate the newly created private hosted zone:
+
diff --git a/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-what-is-rosa.adoc b/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-what-is-rosa.adoc
index 9566e15845f4..c58e0dd06505 100644
--- a/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-what-is-rosa.adoc
+++ b/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-what-is-rosa.adoc
@@ -60,7 +60,13 @@ For a complete list of supported instances for worker nodes see xref:../../rosa_
Autoscaling allows you to automatically adjust the size of the cluster based on the current workload. See xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#rosa-nodes-about-autoscaling-nodes[About autoscaling nodes on a cluster] for more details.

=== Maximum number of worker nodes
-The maximum number of worker nodes in ROSA clusters versions 4.14.14 and later is 249. For earlier versions, the limit is 180 nodes. See xref:../../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[limits and scalability] for more details on node counts.
+The maximum number of worker nodes in ROSA cluster versions 4.14.14 and later is 249. For earlier versions, the limit is 180 nodes.
+ifdef::temp-ifdef[]
+See xref:../../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[limits and scalability] for more details on node counts.
+endif::[]
+ifdef::temp-ifdef[]
+See xref:../../rosa_planning/rosa-hcp-limits-scalability.adoc#rosa-hcp-limits-scalability[limits and scalability] for more details on node counts.
+endif::[]

A list of the account-wide and per-cluster roles is provided in the xref:../../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies-creation-methods_rosa-sts-about-iam-resources[ROSA documentation].
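For a quick cross-check, as a minimal sketch assuming the ROSA CLI (`rosa`) is installed and you are logged in, you can list the account-wide roles that already exist in your AWS account with the following command. The output depends on which roles have been created in your account:

[source,terminal]
----
$ rosa list account-roles
----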
@@ -201,8 +207,13 @@ Ingress can be limited to a PrivateLink for Red{nbsp}Hat SREs and a VPN for cust
** xref:../../rosa_architecture/rosa_policy_service_definition/rosa-policy-process-security.adoc#rosa-policy-process-security[Understanding Process and Security]
** xref:../../rosa_architecture/rosa_policy_service_definition/rosa-policy-understand-availability.adoc#rosa-policy-understand-availability[About Availability]
** xref:../../rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.adoc#rosa-life-cycle[Updates Lifecycle]
-** xref:../../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[Limits and Scalability]
+ifdef::temp-ifdef[]
+** xref:../../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[Limits and Scalability]
+endif::[]
+ifdef::temp-ifdef[]
+** xref:../../rosa_planning/rosa-hcp-limits-scalability.adoc#rosa-hcp-limits-scalability[Limits and Scalability]
+endif::[]
** link:https://red.ht/rosa-roadmap[ROSA roadmap]
* link:https://learn.openshift.com[Learn about OpenShift]
* {cluster-manager-url}
-* link:https://support.redhat.com[Red{nbsp}Hat Support]
+* link:https://support.redhat.com[Red{nbsp}Hat Support]
\ No newline at end of file
diff --git a/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-rosa-sts-explained.adoc b/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-rosa-sts-explained.adoc
index 10c83c203915..a67ec4224208 100644
--- a/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-rosa-sts-explained.adoc
+++ b/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-rosa-sts-explained.adoc
@@ -66,16 +66,16 @@ STS roles and policies must be created for each ROSA cluster. To make this easie
* *OpenID Connect (OIDC)* - This provides a mechanism for cluster Operators to authenticate with AWS, assume the cluster roles through a trust policy, and obtain temporary credentials from STS to make the required API calls.
* *Roles and policies* - The roles and policies are one of the main differences between ROSA with STS and ROSA with IAM Users. For ROSA with STS, the roles and policies used by ROSA are broken into account-wide roles and policies and Operator roles and policies.
+
-The policies determine the allowed actions for each of the roles. See xref:../../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources for ROSA clusters that use STS] for more details about the individual roles and policies.
+The policies determine the allowed actions for each of the roles. See xref:../../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] for more details about the individual roles and policies.
+ -** The account-wide roles are: +** The following account-wide roles are required: + *** ManagedOpenShift-Installer-Role *** ManagedOpenShift-ControlPlane-Role *** ManagedOpenShift-Worker-Role *** ManagedOpenShift-Support-Role + -** The account-wide policies are: +** The following account-wide policies are required: + *** ManagedOpenShift-Installer-Role-Policy *** ManagedOpenShift-ControlPlane-Role-Policy diff --git a/cloud_experts_tutorials/rosa-mobb-verify-permissions-sts-deployment.adoc b/cloud_experts_tutorials/rosa-mobb-verify-permissions-sts-deployment.adoc index 80d543002c0f..f1f7f774b049 100644 --- a/cloud_experts_tutorials/rosa-mobb-verify-permissions-sts-deployment.adoc +++ b/cloud_experts_tutorials/rosa-mobb-verify-permissions-sts-deployment.adoc @@ -18,7 +18,7 @@ toc::[] To proceed with the deployment of a ROSA cluster, an account must support the required roles and permissions. AWS Service Control Policies (SCPs) cannot block the API calls made by the installer or operator roles. -Details about the IAM resources required for an STS-enabled installation of ROSA can be found here: xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources for ROSA clusters that use STS] +Details about the IAM resources required for an STS-enabled installation of ROSA can be found here: xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] This guide is validated for ROSA v4.11.X. diff --git a/modules/dedicated-aws-vpc-verifying-troubleshooting.adoc b/modules/dedicated-aws-vpc-verifying-troubleshooting.adoc index 3da569f90618..cf3373e7fb29 100644 --- a/modules/dedicated-aws-vpc-verifying-troubleshooting.adoc +++ b/modules/dedicated-aws-vpc-verifying-troubleshooting.adoc @@ -32,6 +32,7 @@ quick and clear output if a connection can be established: .. Create a temporary pod using the `busybox` image, which cleans up after itself: + +[source,terminal] ---- $ oc run netcat-test \ --image=busybox -i -t \ @@ -44,6 +45,7 @@ $ oc run netcat-test \ -- * Example successful connection results: + +[source,terminal] ---- / nc -zvv 192.168.1.1 8080 10.181.3.180 (10.181.3.180:8080) open @@ -52,6 +54,7 @@ sent 0, rcvd 0 * Example failed connection results: + +[source,terminal] ---- / nc -zvv 192.168.1.2 8080 nc: 10.181.3.180 (10.181.3.180:8081): Connection refused @@ -61,6 +64,7 @@ sent 0, rcvd 0 .. Exit the container, which automatically deletes the Pod: + +[source,terminal] ---- / exit ---- diff --git a/modules/dedicated-aws-vpn-verifying.adoc b/modules/dedicated-aws-vpn-verifying.adoc index 0e798d0352ca..06ba4e32d943 100644 --- a/modules/dedicated-aws-vpn-verifying.adoc +++ b/modules/dedicated-aws-vpn-verifying.adoc @@ -30,6 +30,7 @@ quick and clear output if a connection can be established: .. Create a temporary pod using the `busybox` image, which cleans up after itself: + +[source,terminal] ---- $ oc run netcat-test \ --image=busybox -i -t \ @@ -42,6 +43,7 @@ $ oc run netcat-test \ -- * Example successful connection results: + +[source,terminal] ---- / nc -zvv 192.168.1.1 8080 10.181.3.180 (10.181.3.180:8080) open @@ -50,6 +52,7 @@ sent 0, rcvd 0 * Example failed connection results: + +[source,terminal] ---- / nc -zvv 192.168.1.2 8080 nc: 10.181.3.180 (10.181.3.180:8081): Connection refused @@ -59,6 +62,7 @@ sent 0, rcvd 0 .. 
Exit the container, which automatically deletes the Pod: + +[source,terminal] ---- / exit ---- diff --git a/modules/mos-network-prereqs-min-bandwidth.adoc b/modules/mos-network-prereqs-min-bandwidth.adoc index 56675c3b2d6b..5e5900d061ac 100644 --- a/modules/mos-network-prereqs-min-bandwidth.adoc +++ b/modules/mos-network-prereqs-min-bandwidth.adoc @@ -2,14 +2,9 @@ // // * rosa_planning/rosa-sts-aws-prereqs.adoc -//Define the minimum bandwidth variable here so we can use this same module across all managed variants as testing completes for XCMSTRAT-422 -ifdef::openshift-rosa,openshift-rosa-classic,openshift-rosa-hcp,openshift-osd,openshift-azure[] -:mos-min-bandwidth: 120{nbsp}Mbps -endif::[] - [id="mos-network-prereqs-min-bandwidth_{context}"] = Minimum bandwidth -During cluster deployment, {product-title} requires a minimum bandwidth of {mos-min-bandwidth} between cluster resources and public internet resources. When network connectivity is slower than {mos-min-bandwidth} (for example, when connecting through a proxy) the cluster installation process times out and deployment fails. +During cluster deployment, {product-title} requires a minimum bandwidth of 120{nbsp}Mbps between cluster resources and public internet resources. When network connectivity is slower than 120{nbsp}Mbps (for example, when connecting through a proxy) the cluster installation process times out and deployment fails. -After deployment, network requirements are determined by your workload. However, a minimum bandwidth of {mos-min-bandwidth} helps to ensure timely cluster and operator upgrades. +After deployment, network requirements are determined by your workload. However, a minimum bandwidth of 120{nbsp}Mbps helps to ensure timely cluster and operator upgrades. diff --git a/modules/rosa-aws-provisioned.adoc b/modules/rosa-aws-provisioned.adoc index 6fb292fb2c75..ecc1373d8653 100644 --- a/modules/rosa-aws-provisioned.adoc +++ b/modules/rosa-aws-provisioned.adoc @@ -6,58 +6,96 @@ [id="rosa-aws-policy-provisioned_{context}"] = Provisioned AWS Infrastructure - -This is an overview of the provisioned Amazon Web Services (AWS) components on a deployed {product-title} (ROSA) cluster. For a more detailed listing of all provisioned AWS components, see the link:https://access.redhat.com/documentation/en-us/openshift_container_platform/[OpenShift Container Platform documentation]. +This is an overview of the provisioned {AWS} components on a deployed {product-title} (ROSA) cluster. [id="rosa-ec2-instances_{context}"] == EC2 instances -AWS EC2 instances are required for deploying the control plane and data plane functions of ROSA in the AWS public cloud. +AWS EC2 instances are required to deploy +ifndef::openshift-rosa-hcp[] +the control plane and data plane functions for +endif::openshift-rosa-hcp[] +{product-title}. + +ifndef::openshift-rosa-hcp[] +Instance types can vary for control plane and infrastructure nodes, depending on the worker node count. + +At a minimum, the following EC2 instances are deployed: + +* Three `m5.2xlarge` control plane nodes +* Two `r5.xlarge` infrastructure nodes +* Two `m5.xlarge` worker nodes +endif::openshift-rosa-hcp[] -Instance types can vary for control plane and infrastructure nodes, depending on the worker node count. At a minimum, the following EC2 instances will be deployed: +ifdef::openshift-rosa-hcp[] +At a minimum, two `m5.xlarge` EC2 instances are deployed for use as worker nodes. 
+endif::openshift-rosa-hcp[]

-- Three `m5.2xlarge` control plane nodes
-- Two `r5.xlarge` infrastructure nodes
-- Two `m5.xlarge` customizable worker nodes
+The instance type shown for worker nodes is the default value, but you can customize the instance type for worker nodes according to the needs of your workload.

+ifndef::openshift-rosa-hcp[]
For further guidance on worker node counts, see the information about initial planning considerations in the "Limits and scalability" topic listed in the "Additional resources" section of this page.
+endif::openshift-rosa-hcp[]

[id="rosa-ebs-storage_{context}"]
== Amazon Elastic Block Store storage

-Amazon Elastic Block Store (Amazon EBS) block storage is used for both local node storage and persistent volume storage.
+Amazon Elastic Block Store (Amazon EBS) block storage is used for both local node storage and persistent volume storage. The following values are the default sizes of the local, ephemeral storage provisioned for each EC2 instance.

Volume requirements for each EC2 instance:

-- Control Plane Volume
-* Size: 350GB
-* Type: gp3
-* Input/Output Operations Per Second: 1000
+ifndef::openshift-rosa-hcp[]

-- Infrastructure Volume
-* Size: 300GB
-* Type: gp3
-* Input/Output Operations Per Second: 900
+* Control Plane Volume
+** Size: 350GB
+** Type: gp3
+** Input/Output Operations Per Second: 1000

-- Worker Volume
-* Size: 300GB
-* Type: gp3
-* Input/Output Operations Per Second: 900
+* Infrastructure Volume
+** Size: 300GB
+** Type: gp3
+** Input/Output Operations Per Second: 900
+endif::openshift-rosa-hcp[]
+
+* Worker Volume
+** Default size: 300GB
+ifndef::openshift-rosa-hcp[]
+** Minimum size: 128GB
+endif::openshift-rosa-hcp[]
+ifdef::openshift-rosa-hcp[]
+** Minimum size: 75GB
+endif::openshift-rosa-hcp[]
+** Type: gp3
+** Input/Output Operations Per Second: 900
+
+ifndef::openshift-rosa-hcp[]
[NOTE]
====
Clusters deployed before the release of {OCP} 4.11 use gp2 type storage by default.
====
+endif::openshift-rosa-hcp[]

[id="rosa-elastic-load-balancers_{context}"]
== Elastic Load Balancing
+ifndef::openshift-rosa-hcp[]
+Each cluster can use up to two Classic Load Balancers for the application router and up to two Network Load Balancers for the API.
+endif::openshift-rosa-hcp[]
+ifdef::openshift-rosa-hcp[]
+By default, one Network Load Balancer is created for use by the default ingress controller. You can create additional load balancers of the following types according to the needs of your workload:
+
+* Classic Load Balancers
+* Network Load Balancers
+* Application Load Balancers

-Up to two Network Load Balancers for API and up to two Classic Load Balancers for application router. For more information, see the link:https://aws.amazon.com/elasticloadbalancing/features/#Details_for_Elastic_Load_Balancing_Products[ELB documentation for AWS].
+endif::openshift-rosa-hcp[]
+For more information, see the link:https://aws.amazon.com/elasticloadbalancing/features/#Details_for_Elastic_Load_Balancing_Products[ELB documentation for AWS].

[id="rosa-s3-storage_{context}"]
== S3 storage

-The image registry is backed by AWS S3 storage. Pruning of resources is performed regularly to optimize S3 usage and cluster performance.
+The image registry is backed by AWS S3 storage. Pruning of resources is performed regularly to optimize S3 usage and cluster performance.

+//TODO OSDOCS-11789: Confirm that this is still valid
[NOTE]
====
Two buckets are required with a typical size of 2TB each. 
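As an illustration of the volume defaults described above, the following sketch assumes that the AWS CLI is configured for the account and region that hosts the cluster. It lists the provisioned EBS volumes with their type, size, and IOPS so that you can compare them against the documented gp3 defaults:

[source,terminal]
----
$ aws ec2 describe-volumes \
    --query 'Volumes[].{ID:VolumeId,Type:VolumeType,SizeGiB:Size,IOPS:Iops}' \
    --output table
----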
@@ -65,29 +103,38 @@ Two buckets are required with a typical size of 2TB each. [id="rosa-vpc_{context}"] == VPC -Customers should expect to see one VPC per cluster. Additionally, the VPC will need the following configurations: + +Configure your VPC according to the following requirements: * *Subnets*: Two subnets for a cluster with a single availability zone, or six subnets for a cluster with multiple availability zones. + +Red{nbsp}Hat strongly recommends using unique subnets for each cluster. Sharing subnets between multiple clusters is not recommended. ++ [NOTE] ==== A *public subnet* connects directly to the internet through an internet gateway. A *private subnet* connects to the internet through a network address translation (NAT) gateway. ==== -+ + * *Route tables*: One route table per private subnet, and one additional table per cluster. * *Internet gateways*: One Internet Gateway per cluster. * *NAT gateways*: One NAT Gateway per public subnet. +//TODO OSDOCS-11789: This diagram needs to be confirmed for HCP before it is included +ifndef::openshift-rosa-hcp[] .Sample VPC Architecture image::VPC-Diagram.png[VPC Reference Architecture] +endif::openshift-rosa-hcp[] [id="rosa-security-groups_{context}"] == Security groups -AWS security groups provide security at the protocol and port access level; they are associated with EC2 instances and Elastic Load Balancing (ELB) load balancers. Each security group contains a set of rules that filter traffic coming in and out of one or more EC2 instances. You must ensure the ports required for the OpenShift installation are open on your network and configured to allow access between hosts. +AWS security groups provide security at the protocol and port access level; they are associated with EC2 instances and Elastic Load Balancing (ELB) load balancers. Each security group contains a set of rules that filter traffic coming in and out of one or more EC2 instances. +Ensure that the ports required for cluster installation and operation are open on your network and configured to allow access between hosts. The requirements for the default security groups are listed in xref:required-secgroup-ports_{context}[Required ports for default security groups]. + +[id="required-secgroup-ports_{context}"] .Required ports for default security groups [cols="2a,2a,2a,2a",options="header"] |=== @@ -97,7 +144,7 @@ AWS security groups provide security at the protocol and port access level; they |IP Protocol |Port range - +ifndef::openshift-rosa-hcp[] .4+|MasterSecurityGroup .4+|`AWS::EC2::SecurityGroup` |`icmp` @@ -111,6 +158,7 @@ AWS security groups provide security at the protocol and port access level; they |`tcp` |`22623` +endif::openshift-rosa-hcp[] .2+|WorkerSecurityGroup .2+|`AWS::EC2::SecurityGroup` @@ -120,7 +168,7 @@ AWS security groups provide security at the protocol and port access level; they |`tcp` |`22` - +ifndef::openshift-rosa-hcp[] .2+|BootstrapSecurityGroup .2+|`AWS::EC2::SecurityGroup` @@ -129,12 +177,19 @@ AWS security groups provide security at the protocol and port access level; they |`tcp` |`19531` +endif::openshift-rosa-hcp[] |=== [id="rosa-security-groups-custom_{context}"] === Additional custom security groups -When you create a cluster using an existing non-managed VPC, you can add additional custom security groups during cluster creation. 
Custom security groups are subject to the following limitations: +ifndef::openshift-rosa-hcp[] +When you create a cluster using an existing non-managed VPC, you +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +You +endif::openshift-rosa-hcp[] +can add additional custom security groups during cluster creation. Custom security groups are subject to the following limitations: * You must create the custom security groups in AWS before you create the cluster. For more information, see link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html[Amazon EC2 security groups for Linux instances]. * You must associate the custom security groups with the VPC that the cluster will be installed into. Your custom security groups cannot be associated with another VPC. diff --git a/modules/rosa-create-objects.adoc b/modules/rosa-create-objects.adoc index c1bc2d88a094..fd40ac15f0fa 100644 --- a/modules/rosa-create-objects.adoc +++ b/modules/rosa-create-objects.adoc @@ -666,14 +666,13 @@ $ rosa create ingress --cluster=mycluster --label-match=foo=bar,bar=baz [id="rosa-create-kubeletconfig_{context}"] == create kubeletconfig -Create a custom `KubeletConfig` object to allow custom configuration of nodes in a machine pool. For {product-title} clusters, these settings are cluster-wide. For {hcp-title-first} clusters, each machine pool can be configured differently. -//TODO OSDOCS-10439: Add conditions back when HCP and Classic are published separately -// ifdef::openshift-rosa-classic[] -// cluster. -// endif::openshift-rosa-classic[] -// ifdef::openshift-rosa-hcp[] -// machine pool. -// endif::openshift-rosa-hcp[] +Create a custom `KubeletConfig` object to allow custom configuration of nodes in a +ifdef::temp-ifdef[] +cluster. +endif::[] +ifdef::temp-ifdef[] +machine pool. +endif::[] .Syntax [source,terminal] @@ -687,27 +686,26 @@ $ rosa create kubeletconfig --cluster= --name= -a|Required. The maximum number of PIDs for each node in the machine pool associated with the `KubeletConfig` object. -//TODO OSDOCS-10439: Add conditions back when HCP and Classic are published separately -// ifdef::openshift-rosa-classic[] -// cluster. -// endif::openshift-rosa-classic[] -// ifdef::openshift-rosa-hcp[] -// machine pool associated with the `KubeletConfig` object. -// endif::openshift-rosa-hcp[] +a|Required. The maximum number of PIDs for each node in the +ifdef::temp-ifdef[] +cluster. +endif::[] +ifdef::temp-ifdef[] +machine pool associated with the `KubeletConfig` object. +endif::[] a|-c, --cluster \| |Required. The name or ID of the cluster in which to create the `KubeletConfig` object. |--name -a| Required for {hcp-title-first} clusters. Optional for {product-title}, as there is only one `KubeletConfig` for the cluster. Specifies a name for the `KubeletConfig` object. -//TODO OSDOCS-10439: Add conditions back when HCP and Classic are published separately -// ifdef::openshift-rosa-classic[] -// Optional. -// endif::openshift-rosa-classic[] -// ifdef::openshift-rosa-hcp[] -// Required. -// endif::openshift-rosa-hcp[] +a| +ifdef::temp-ifdef[] +Optional. +endif::[] +ifdef::temp-ifdef[] +Required. +endif::[] +Specifies a name for the `KubeletConfig` object. |-i, --interactive |Enable interactive mode. @@ -755,10 +753,10 @@ For {hcp-title} clusters, the minimum disk size is 75 GiB, and the maximum is 16 |The instance type (string) that should be used. 
Default: `m5.xlarge` //TODO OSDOCS-10439: Add conditions back when HCP and Classic are published separately -//ifdef::openshift-rosa-hcp[] +//ifdef::temp-ifdef[] a|--kubelet-configs | For {hcp-title-first} clusters, the names of any `KubeletConfig` objects to apply to nodes in a machine pool. -//endif::openshift-rosa-hcp[] +//endif::[] |--labels |The labels (string) for the machine pool. The format must be a comma-delimited list of key=value pairs. This list overwrites any modifications made to node labels on an ongoing basis. @@ -770,7 +768,7 @@ a|--kubelet-configs |Specifies the minimum number of compute nodes when enabling autoscaling. //OSDOCS-11160: HCP only, but need to wait on separate HCP publishing -//ifdef::openshift-rosa-hcp[] +//ifdef::temp-ifdef[] |--max-surge a| For {hcp-title-first} clusters, the `max-surge` parameter defines the number of new nodes that can be provisioned in excess of the desired number of replicas for the machine pool, as configured using the `--replicas` parameter, or as determined by the autoscaler when autoscaling is enabled. This can be an absolute number (for example, `2`) or a percentage of the machine pool size (for example, `20%`), but must use the same unit as the `max-unavailable` parameter. @@ -781,7 +779,7 @@ a|For {hcp-title-first} clusters, the `max-unavailable` parameter defines the nu The default value is `0`, meaning that no outdated nodes are removed before new nodes are provisioned. The valid range for this value is from `0` to the current machine pool size, or from `0%` to `100%`. The total number of nodes that can be upgraded simultaneously during an upgrade is `max-surge` plus `max-unavailable`. -//endif::openshift-rosa-hcp[] +//endif::[] // end OSDOCS-11160: HCP only, when separate docs are available |--name diff --git a/modules/rosa-delete-objects.adoc b/modules/rosa-delete-objects.adoc index e070bd940f0f..bc50123f832d 100644 --- a/modules/rosa-delete-objects.adoc +++ b/modules/rosa-delete-objects.adoc @@ -286,14 +286,14 @@ a|-c, --cluster \| |Shows help for this command. |--name -a| Required for {hcp-title-first} clusters. Optional for {product-title}, as there is only one `KubeletConfig` for the cluster. Specifies a name for the `KubeletConfig` object. -//TODO OSDOCS-10439: Add conditions back when HCP and Classic are published separately -// ifdef::openshift-rosa-classic[] -// Optional. -// endif::openshift-rosa-classic[] -// ifdef::openshift-rosa-hcp[] -// Required. -// endif::openshift-rosa-hcp[] +a| +ifdef::temp-ifdef[] +Optional. +endif::[] +ifdef::temp-ifdef[] +Required. +endif::[] +Specifies a name for the `KubeletConfig` object. |-y, --yes |Automatically answers `yes` to confirm the operation. diff --git a/modules/rosa-edit-objects.adoc b/modules/rosa-edit-objects.adoc index 6d6f64284881..c46dae85d563 100644 --- a/modules/rosa-edit-objects.adoc +++ b/modules/rosa-edit-objects.adoc @@ -178,14 +178,13 @@ $ rosa edit ingress --lb-type=nlb --cluster=mycluster apps2 [id="rosa-edit-kubeletconfig_{context}"] == edit kubeletconfig -Edit a custom `KubeletConfig` object in a machine pool. -//TODO OSDOCS-10439: Add conditions back when HCP and Classic are published separately -// ifdef::openshift-rosa-classic[] -// cluster. -// endif::openshift-rosa-classic[] -// ifdef::openshift-rosa-hcp[] -// machine pool. -// endif::openshift-rosa-hcp[] +Edit a custom `KubeletConfig` object in a +ifdef::temp-ifdef[] +cluster. +endif::[] +ifdef::temp-ifdef[] +machine pool. 
+endif::[] .Syntax [source,terminal] @@ -205,24 +204,23 @@ a|-c, --cluster \| |Enable interactive mode. |--pod-pids-limit -a|Required. The maximum number of PIDs for each node in the machine pool associated with the `KubeletConfig` object. -//TODO OSDOCS-10439: Add conditions back when HCP and Classic are published separately -// ifdef::openshift-rosa-classic[] -// cluster. -// endif::openshift-rosa-classic[] -// ifdef::openshift-rosa-hcp[] -// machine pool associated with the `KubeletConfig` object. -// endif::openshift-rosa-hcp[] +a|Required. The maximum number of PIDs for each node in the +ifdef::temp-ifdef[] +cluster. +endif::[] +ifdef::temp-ifdef[] +machine pool associated with the `KubeletConfig` object. +endif::[] |--name -a| Required for {hcp-title-first} clusters. Optional for {product-title}, as there is only one `KubeletConfig` for the cluster. Specifies a name for the `KubeletConfig` object. -//TODO OSDOCS-10439: Add conditions back when HCP and Classic are published separately -// ifdef::openshift-rosa-classic[] -// Optional. -// endif::openshift-rosa-classic[] -// ifdef::openshift-rosa-hcp[] -// Required. -// endif::openshift-rosa-hcp[] +a| +ifdef::temp-ifdef[] +Optional. +endif::[] +ifdef::temp-ifdef[] +Required. +endif::[] +Specifies a name for the `KubeletConfig` object. |-h, --help |Shows help for this command. @@ -256,10 +254,10 @@ $ rosa edit machinepool --cluster= [argum |The labels (string) for the machine pool. The format must be a comma-delimited list of key=value pairs. Editing this value only affects newly created nodes of the machine pool, which are created by increasing the node number, and does not affect the existing nodes. This list overwrites any modifications made to node labels on an ongoing basis. //TODO OSDOCS-10439: Add conditions back when HCP and Classic are published separately -//ifdef::openshift-rosa-hcp[] +//ifdef::temp-ifdef[] a|--kubelet-configs | For {hcp-title-first} clusters, the names of any `KubeletConfig` objects to apply to nodes in a machine pool. -//endif::openshift-rosa-hcp[] +//endif::[] |--max-replicas |Specifies the maximum number of compute nodes when enabling autoscaling. @@ -268,7 +266,7 @@ a|--kubelet-configs |Specifies the minimum number of compute nodes when enabling autoscaling. //OSDOCS-11160: HCP only, but need to wait on separate HCP publishing -//ifdef::openshift-rosa-hcp[] +//ifdef::temp-ifdef[] |--max-surge a| For {hcp-title-first} clusters, the `max-surge` parameter defines the number of new nodes that can be provisioned in excess of the desired number of replicas for the machine pool, as configured using the `--replicas` parameter, or as determined by the autoscaler when autoscaling is enabled. This can be an absolute number (for example, `2`) or a percentage of the machine pool size (for example, `20%`), but must use the same unit as the `max-unavailable` parameter. @@ -278,7 +276,7 @@ The default value is `1`, meaning that the maximum number of nodes in the machin a|For {hcp-title-first} clusters, the `max-unavailable` parameter defines the number of nodes that can be made unavailable in a machine pool during an upgrade, before new nodes are provisioned. This can be an absolute number (for example, `2`) or a percentage of the current replica count in the machine pool (for example, `20%`), but must use the same unit as the `max-surge` parameter. The default value is `0`, meaning that no outdated nodes are removed before new nodes are provisioned. 
The valid range for this value is from `0` to the current machine pool size, or from `0%` to `100%`. The total number of nodes that can be upgraded simultaneously during an upgrade is `max-surge` plus `max-unavailable`.
-//endif::openshift-rosa-hcp[]
+//endif::[]
// end OSDOCS-11160: HCP only, when separate docs are available

|--node-drain-grace-period
diff --git a/modules/rosa-getting-started-install-configure-cli-tools.adoc b/modules/rosa-getting-started-install-configure-cli-tools.adoc
index 3324e3b8514c..3a6f6f210a59 100644
--- a/modules/rosa-getting-started-install-configure-cli-tools.adoc
+++ b/modules/rosa-getting-started-install-configure-cli-tools.adoc
@@ -7,40 +7,23 @@
[id="rosa-getting-started-install-configure-cli-tools_{context}"]
= Installing and configuring the required CLI tools

-ifeval::["{context}" == "rosa-getting-started"]
-:getting-started:
-endif::[]
-ifeval::["{context}" == "rosa-quickstart"]
-:quickstart:
-endif::[]
+Several command line interface (CLI) tools are required to deploy and work with your cluster.

-Use the following steps to install and configure
-ifdef::quickstart[]
-the AWS and {product-title} (ROSA) CLI tools
-endif::[]
-ifdef::getting-started[]
-AWS, {product-title} (ROSA), and OpenShift CLI tools
-endif::[]
-on your workstation.
-
-ifdef::getting-started[]
.Prerequisites

* You have an AWS account.
-* You created a Red{nbsp}Hat account.
-+
-[NOTE]
-====
-You can create a Red{nbsp}Hat account by navigating to link:https://console.redhat.com[console.redhat.com] and selecting *Register for a Red{nbsp}Hat account*.
-====
-endif::[]
+* You have a Red{nbsp}Hat account.

.Procedure

+. Log in to your Red{nbsp}Hat and AWS accounts to access the download page for each required tool.
+.. Log in to your Red{nbsp}Hat account at link:https://console.redhat.com[console.redhat.com].
+.. Log in to your AWS account at link:https://aws.amazon.com[aws.amazon.com].
+
+//This should be a separate module
. Install and configure the latest AWS CLI (`aws`).
-.. Follow the link:https://aws.amazon.com/cli/[AWS Command Line Interface] documentation to install and configure the AWS CLI for your operating system.
-+
-Specify your `aws_access_key_id`, `aws_secret_access_key`, and `region` in the `.aws/credentials` file. See link:https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html[AWS Configuration basics] in the AWS documentation.
+.. Install the AWS CLI by following the link:https://aws.amazon.com/cli/[AWS Command Line Interface] documentation appropriate for your workstation.
+.. Configure the AWS CLI by specifying your `aws_access_key_id`, `aws_secret_access_key`, and `region` in the `.aws/credentials` file. For more information, see link:https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html[AWS Configuration basics] in the AWS documentation.
+
[NOTE]
====
@@ -59,21 +42,25 @@ $ aws sts get-caller-identity --output text
arn:aws:iam:::user/
----

+//This should be a separate module
. Install and configure the latest ROSA CLI (`rosa`).
-.. Download the latest version of the ROSA CLI for your operating system from the link:https://console.redhat.com/openshift/downloads[*Downloads*] page on the {cluster-manager-first} {hybrid-console-second}.
+.. Navigate to link:https://console.redhat.com/openshift/downloads[*Downloads*].
+.. Find *Red Hat OpenShift Service on AWS command line interface (`rosa`)* in the list of tools and click *Download*.
++
+The `rosa-linux.tar.gz` file is downloaded to your default download location.
.. 
Extract the `rosa` binary file from the downloaded archive. The following example extracts the binary from a Linux tar archive: + [source,terminal] ---- $ tar xvf rosa-linux.tar.gz ---- -.. Add `rosa` to your path. In the following example, the `/usr/local/bin` directory is included in the path of the user: +.. Move the `rosa` binary file to a directory in your execution path. In the following example, the `/usr/local/bin` directory is included in the path of the user: + [source,terminal] ---- $ sudo mv rosa /usr/local/bin/rosa ---- -.. Verify if the ROSA CLI is installed correctly by querying the `rosa` version: +.. Verify that the ROSA CLI is installed correctly by querying the `rosa` version: + [source,terminal] ---- @@ -83,28 +70,32 @@ $ rosa version .Example output [source,terminal] ---- -1.2.15 +1.2.47 Your ROSA CLI is up to date. ---- -ifdef::getting-started[] -+ -.. Optional: Enable tab completion for the ROSA CLI. With tab completion enabled, you can press the `Tab` key twice to automatically complete subcommands and receive command suggestions. -+ -`rosa` tab completion is available for different shell types. The following example enables persistent tab completion for Bash on a Linux host. The command generates a `rosa` tab completion configuration file for Bash and saves it to the `/etc/bash_completion.d/` directory: -+ -[source,terminal] ----- -# rosa completion bash > /etc/bash_completion.d/rosa ----- -+ -You must open a new terminal to activate the configuration. -+ -[NOTE] -==== -For steps to configure `rosa` tab completion for different shell types, see the help menu by running `rosa completion --help`. -==== -endif::[] -.. Log in to your Red{nbsp}Hat account by using the ROSA CLI: +// OSDOCS-11789: PM recommended removing this step since it isn't required. +// ifdef::getting-started[] +// + +// .. Optional: Enable tab completion for the ROSA CLI. With tab completion enabled, you can press the `Tab` key twice to automatically complete subcommands and receive command suggestions. +// + +// `rosa` tab completion is available for different shell types. The following example enables persistent tab completion for Bash on a Linux host. The command generates a `rosa` tab completion configuration file for Bash and saves it to the `/etc/bash_completion.d/` directory: +// + +// [source,terminal] +// ---- +// # rosa completion bash > /etc/bash_completion.d/rosa +// ---- +// + +// You must open a new terminal to activate the configuration. +// + +// [NOTE] +// ==== +// For steps to configure `rosa` tab completion for different shell types, see the help menu by running `rosa completion --help`. +// ==== +// endif::[] + +//The following should probably also be a separate module +. Log in to the ROSA CLI using an offline access token. +.. Run the login command: + [source,terminal] ---- @@ -117,14 +108,21 @@ $ rosa login To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa ? Copy the token and paste it here: ---- +.. Navigate to the URL listed in the command output to view your offline access token. +.. Enter the offline access token at the command line prompt to log in. + -Go to the URL listed in the command output to obtain an offline access token. Specify the token at the CLI prompt to log in. +[source,terminal] +---- +? 
Copy the token and paste it here: ******************* +[full token length omitted] +---- + [NOTE] ==== -You can subsequently specify the offline access token by using the `--token=""` argument when you run the `rosa login` command. +In the future you can specify the offline access token by using the `--token=""` argument when you run the `rosa login` command. ==== -.. Verify if you are logged in successfully and check your credentials: + +.. Verify that you are logged in and confirm that your credentials are correct before proceeding: + [source,terminal] ---- @@ -146,12 +144,12 @@ OCM Organization ID: OCM Organization Name: Your organization OCM Organization External ID: ---- -+ -Check that the information in the output is correct before proceeding. -ifdef::getting-started[] +//This should be a separate module . Install and configure the latest OpenShift CLI (`oc`). -.. Use the ROSA CLI to download the latest version of the `oc` CLI: +.. Use the ROSA CLI to download the `oc` CLI. ++ +The following command downloads the latest version of the CLI to the current working directory: + [source,terminal] ---- @@ -163,13 +161,13 @@ $ rosa download openshift-client ---- $ tar xvf openshift-client-linux.tar.gz ---- -.. Add the `oc` binary to your path. In the following example, the `/usr/local/bin` directory is included in the path of the user: +.. Move the `oc` binary to a directory in your execution path. In the following example, the `/usr/local/bin` directory is included in the path of the user: + [source,terminal] ---- $ sudo mv oc /usr/local/bin/oc ---- -.. Verify if the `oc` CLI is installed correctly: +.. Verify that the `oc` CLI is installed correctly: + [source,terminal] ---- @@ -180,14 +178,5 @@ $ rosa verify openshift-client [source,terminal] ---- I: Verifying whether OpenShift command-line tool is available... -I: Current OpenShift Client Version: 4.9.12 ----- -endif::[] - - -ifeval::["{context}" == "rosa-getting-started"] -:getting-started: -endif::[] -ifeval::["{context}" == "rosa-quickstart"] -:quickstart: -endif::[] \ No newline at end of file +I: Current OpenShift Client Version: 4.17.3 +---- \ No newline at end of file diff --git a/modules/rosa-hcp-creating-account-wide-sts-roles-and-policies.adoc b/modules/rosa-hcp-creating-account-wide-sts-roles-and-policies.adoc index a07fd86fe24d..f6dd002f74b5 100644 --- a/modules/rosa-hcp-creating-account-wide-sts-roles-and-policies.adoc +++ b/modules/rosa-hcp-creating-account-wide-sts-roles-and-policies.adoc @@ -1,3 +1,5 @@ +// Module included in the following assemblies: +// * rosa_planning/rosa-hcp-prepare-iam-resources.adoc // * rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc diff --git a/modules/rosa-hcp-firewall-prerequisites.adoc b/modules/rosa-hcp-firewall-prerequisites.adoc index fc3a3e71e38d..5e9c26ab070e 100644 --- a/modules/rosa-hcp-firewall-prerequisites.adoc +++ b/modules/rosa-hcp-firewall-prerequisites.adoc @@ -1,34 +1,19 @@ // Module included in the following assemblies: // // * rosa_planning/rosa-sts-aws-prereqs.adoc -// * rosa_planning/rosa-hcp-prereqs.adoc +// * rosa_planning/rosa-hcp-prereqs.adoc <-- this is a symlink -ifeval::["{context}" == "rosa-sts-aws-prereqs"] -:rosa-classic-sts: -endif::[] -ifeval::["{context}" == "rosa-hcp-aws-prereqs"] -:hcp: -endif::[] +//TODO OSDOCS-11789: Why is this a procedure and not a reference? 
[id="rosa-hcp-firewall-prerequisites_{context}"] -// Conditionals are to change the title when displayed on the rosa-sts-aws-prereqs page -ifdef::rosa-classic-sts[] -= {hcp-title} -endif::rosa-classic-sts[] -ifndef::rosa-classic-sts[] -= AWS firewall prerequisites += Firewall prerequisites -If you are using a firewall to control egress traffic from {product-title}, you must configure your firewall to grant access to the certain domain and port combinations below. {product-title} requires this access to provide a fully managed OpenShift service. -endif::rosa-classic-sts[] +* If you are using a firewall to control egress traffic from {product-title}, your Virtual Private Cloud (VPC) must be able to complete requests from the cluster to the Amazon S3 service, for example, via an Amazon S3 gateway. -.Prerequisites +* You must also configure your firewall to grant access to the following domain and port combinations. +//TODO OSDOCS-11789: From your deploy machine? From your cluster? -* You have configured an Amazon S3 gateway endpoint in your AWS Virtual Private Cloud (VPC). This endpoint is required to complete requests from the cluster to the Amazon S3 service. - -.Procedure - -. Allowlist the following URLs that are used to download and install packages and tools: -+ +== Domains for installation packages and tools [cols="6,1,6",options="header"] |=== |Domain | Port | Function @@ -84,9 +69,8 @@ endif::rosa-classic-sts[] |443 |Required. Used to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator (CVO) needs only a single functioning source. |=== -+ -. Allowlist the following telemetry URLs: -+ + +== Domains for telemetry [cols="6,1,6",options="header"] |=== |Domain | Port | Function @@ -102,12 +86,11 @@ endif::rosa-classic-sts[] |443 |Required. The `https://console.redhat.com/openshift` site uses authentication from `sso.redhat.com` to download the pull secret and use Red{nbsp}Hat SaaS solutions to facilitate monitoring of your subscriptions, cluster inventory, chargeback reporting, etc. |=== -+ + Managed clusters require enabling telemetry to allow Red{nbsp}Hat to react more quickly to problems, better support the customers, and better understand how product upgrades impact clusters. -For more information about how remote health monitoring data is used by Red{nbsp}Hat, see _About remote health monitoring_ in the _Additional resources_ section. +For more information about how remote health monitoring data is used by Red{nbsp}Hat, see _About remote health monitoring_. -. Allowlist the following Amazon Web Services (AWS) API URls: -+ +== Domains for Amazon Web Services (AWS) APIs [cols="6,1,6",options="header"] |=== |Domain | Port | Function @@ -115,21 +98,21 @@ For more information about how remote health monitoring data is used by Red{nbsp |`sts..amazonaws.com` ^[1]^ |443 |Required. Used to access the AWS Secure Token Service (STS) regional endpoint. Ensure that you replace `` with the region that your cluster is deployed in. - -|`sts.amazonaws.com` ^[2]^ -|443 -|See footnote. Used to access the AWS Secure Token Service (STS) global endpoint. |=== -+ + [.small] -- 1. This can also be accomplished by configuring a private interface endpoint in your AWS Virtual Private Cloud (VPC) to the regional AWS STS endpoint. -2. The AWS STS global endpoint is only required to be allowed if you are running a version of OpenShift before 4.14.18 or 4.15.4. 
ROSA HCP version 4.14.18+, 4.15.4+, and 4.16.0+ use the AWS STS regional endpoint. -- -+ + +== Domains for your workload + +Your workload may require access to other sites that provide resources for programming languages or frameworks. + +* Allow access to sites that provide resources required by your builds. +* Allow access to outbound URLs required for your workload, for example, link:https://access.redhat.com/solutions/2998411[OpenShift Outbound URLs to Allow]. -. Allowlist the following URLs for optional third-party content: -+ +== Optional domains to enable third-party content [cols="6,1,6",options="header"] |=== |Domain | Port | Function @@ -144,14 +127,4 @@ For more information about how remote health monitoring data is used by Red{nbsp |`oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com` | 443 | Optional. Required for Sonatype Nexus, F5 Big IP operators. -|=== - -. Allowlist any site that provides resources for a language or framework that your builds require. -. Allowlist any outbound URLs that depend on the languages and frameworks used in OpenShift. See link:https://access.redhat.com/solutions/2998411[OpenShift Outbound URLs to Allow] for a list of recommended URLs to be allowed on the firewall or proxy. - -ifeval::["{context}" == "rosa-sts-aws-prereqs"] -:!rosa-classic-sts: -endif::[] -ifeval::["{context}" == "rosa-hcp-aws-prereqs"] -:!hcp: -endif::[] +|=== \ No newline at end of file diff --git a/modules/rosa-hcp-vpc-manual.adoc b/modules/rosa-hcp-vpc-manual.adoc index 65ef6d1681bf..1ef7872d0960 100644 --- a/modules/rosa-hcp-vpc-manual.adoc +++ b/modules/rosa-hcp-vpc-manual.adoc @@ -6,28 +6,6 @@ [id="rosa-hcp-vpc-manual_{context}"] = Creating a Virtual Private Cloud manually -If you choose to manually create your Virtual Private Cloud (VPC) instead of using Terraform, go to link:https://us-east-1.console.aws.amazon.com/vpc/[the VPC page in the AWS console]. Your VPC must meet the requirements shown in the following table. +If you choose to manually create your Virtual Private Cloud (VPC) instead of using Terraform, go to link:https://us-east-1.console.aws.amazon.com/vpc/[the VPC page in the AWS console]. -.Requirements for your VPC -[options="header",cols="50,50"] -|=== -| Requirement | Details - -| VPC name -| You need to have the specific VPC name and ID when creating your cluster. - -| CIDR range -| Your VPC CIDR range should match your machine CIDR. - -| Availability zone -| You need one availability zone for a single zone, and you need three for availability zones for multi-zone. - -| Public subnet -| You must have one public subnet with an internet gateway for public clusters. - -| Private subnet -| You must have exactly one private subnet in each availability zone (AZ) for installing machine pools in ROSA HCP clusters. A NAT gateway may be associated with this subnet to allow outbound internet access for the instances. Private clusters do not need a public subnet. - -| DNS hostname and resolution -| You must ensure that the DNS hostname and resolution are enabled. -|=== +include::snippets/rosa-existing-vpc-requirements.adoc[leveloffset=+0] diff --git a/modules/rosa-list-objects.adoc b/modules/rosa-list-objects.adoc index cb8888710d64..ca44cac373b5 100644 --- a/modules/rosa-list-objects.adoc +++ b/modules/rosa-list-objects.adoc @@ -719,14 +719,14 @@ a|-c, --cluster \| |Shows help for this command. |--name -a| Optional. Specifies the name of the `KubeletConfig` object to describe. 
-//TODO OSDOCS-10439: Add conditions back when HCP and Classic are published separately -// ifdef::openshift-rosa-classic[] -// Optional. -// endif::openshift-rosa-classic[] -// ifdef::openshift-rosa-hcp[] -// Required. -// endif::openshift-rosa-hcp[] +a| +ifdef::openshift-rosa[] +Optional. +endif::openshift-rosa[] +ifdef::openshift-rosa-hcp[] +Required. +endif::openshift-rosa-hcp[] +Specifies the name of the `KubeletConfig` object to describe. |-o, --output string diff --git a/modules/rosa-oidc-understanding.adoc b/modules/rosa-oidc-understanding.adoc index 3b4d26285ccd..72e2a1a80ab1 100644 --- a/modules/rosa-oidc-understanding.adoc +++ b/modules/rosa-oidc-understanding.adoc @@ -7,11 +7,13 @@ [id=rosa-oidc-understanding_{context}] = Understanding the OIDC verification options -There are three options for OIDC verification: +You can configure the following types of OIDC verification: +ifndef::openshift-rosa-hcp[] * Unregistered, managed OIDC configuration + An unregistered, managed OIDC configuration is created for you during the cluster installation process. The configuration is hosted under Red{nbsp}Hat's AWS account. This option does not give you the ID that links to the OIDC configuration, so you can only use this type of OIDC configuration on a single cluster. +endif::openshift-rosa-hcp[] * Registered, managed OIDC configuration + @@ -23,9 +25,11 @@ You can create a registered, unmanaged OIDC configuration before you start creat The registered options can be used to create the required IAM resources before you start creating a cluster. This option results in faster install times since there is a waiting period during cluster creation where the installation pauses until you create an OIDC provider and Operator roles. +ifndef::openshift-rosa-hcp[] For ROSA Classic, you may use any of the OIDC configuration options. If you are using {hcp-title}, you must create registered OIDC configuration, either as managed or unmanaged. You can share the registered OIDC configurations with other clusters. This ability to share the configuration also allows you to share the provider and Operator roles. +endif::openshift-rosa-hcp[] [NOTE] ==== -Reusing the OIDC configurations, OIDC provider, and Operator roles between clusters is not recommended for production clusters since the authentication verification is used throughout all of these clusters. Red{nbsp}Hat advises to only reuse resources on non-production test environments. +Reusing the OIDC configurations, OIDC provider, and Operator roles between clusters is not recommended for production clusters since the authentication verification is used throughout all of these clusters. Red{nbsp}Hat recommends only reusing resources between non-production test environments. ==== \ No newline at end of file diff --git a/modules/rosa-planning-environment-cluster-max.adoc b/modules/rosa-planning-environment-cluster-max.adoc index 88945a2a1ab1..ed60b0b25a6e 100644 --- a/modules/rosa-planning-environment-cluster-max.adoc +++ b/modules/rosa-planning-environment-cluster-max.adoc @@ -7,7 +7,7 @@ This document describes how to plan your {product-title} environment based on the tested cluster maximums. -Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping. +Oversubscribing the physical resources on a node affects the resource guarantees that the Kubernetes scheduler makes during pod placement. 
Learn what measures you can take to avoid memory swapping.

Some of the tested maximums are stretched only in a single dimension. They will vary when many objects are running on the cluster.
diff --git a/modules/rosa-prereq-roles-overview.adoc b/modules/rosa-prereq-roles-overview.adoc
new file mode 100644
index 000000000000..5a7e7fafa299
--- /dev/null
+++ b/modules/rosa-prereq-roles-overview.adoc
@@ -0,0 +1,52 @@
+// Module included in the following assemblies:
+// * rosa_planning/rosa-hcp-prepare-iam-resources.adoc
+
+:_mod-docs-content-type: MODULE
+[id="rosa-prereq-roles-overview"]
+= Overview of required roles
+
+To create and manage your {product-title} cluster, you must create several account-wide and cluster-wide roles. If you intend to use {cluster-manager} to create or manage your cluster, you need some additional roles.
+
+To create and manage clusters:: Several account-wide roles are required to create and manage ROSA clusters. These roles only need to be created once per AWS account, and do not need to be created again for each cluster. You can specify your own prefix, or use the default prefix (`ManagedOpenShift`).
++
+The following account-wide roles are required:
+
+ifdef::openshift-rosa-hcp[]
+** `-HCP-ROSA-Worker-Role`
+** `-HCP-ROSA-Support-Role`
+** `-HCP-ROSA-Installer-Role`
+endif::openshift-rosa-hcp[]
+ifndef::openshift-rosa-hcp[]
+** `-Worker-Role`
+** `-Support-Role`
+** `-Installer-Role`
+** `-ControlPlane-Role`
+endif::openshift-rosa-hcp[]
+
++
+[NOTE]
+====
+Role creation does not request your AWS access or secret keys. AWS Security Token Service (STS) is used as the basis of this workflow. AWS STS uses temporary, limited-privilege credentials to provide authentication.
+====
+
+To manage cluster features provided by Operators:: Cluster-specific Operator roles (`operator-roles` in the ROSA CLI) obtain the temporary permissions required to carry out cluster operations for features provided by Operators, such as managing back-end storage, ingress, and the registry. This requires the configuration of an OpenID Connect (OIDC) provider, which connects to AWS Security Token Service (STS) to authenticate Operator access to AWS resources.
++
+Operator roles are required for every cluster, as several Operators are used to provide cluster features by default.
++
+The following Operator roles are required:
+
+** `--openshift-cluster-csi-drivers-ebs-cloud-credentials`
+** `--openshift-cloud-network-config-controller-credentials`
+** `--openshift-machine-api-aws-cloud-credentials`
+** `--openshift-cloud-credential-operator-cloud-credentials`
+** `--openshift-image-registry-installer-cloud-credentials`
+** `--openshift-ingress-operator-cloud-credentials`
+
+To use {cluster-manager}:: The web user interface, {cluster-manager}, requires you to create additional roles in your AWS account to create a trust relationship between that AWS account and the {cluster-manager}.
++
+This trust relationship is achieved through the creation and association of the `ocm-role` AWS IAM role. This role has a trust policy with the AWS installer that links your Red{nbsp}Hat account to your AWS account. In addition, you also need a `user-role` AWS IAM role for each web UI user, which serves to identify these users. This `user-role` AWS IAM role has no permissions. 
++ +The following AWS IAM roles are required to use {cluster-manager}: + +** `ocm-role` +** `user-role` \ No newline at end of file diff --git a/modules/rosa-required-aws-service-quotas.adoc b/modules/rosa-required-aws-service-quotas.adoc index 58175ad34212..95d8e7bf5b09 100644 --- a/modules/rosa-required-aws-service-quotas.adoc +++ b/modules/rosa-required-aws-service-quotas.adoc @@ -8,8 +8,16 @@ The table below describes the AWS service quotas and levels required to create and run one {product-title} cluster. Although most default values are suitable for most workloads, you might need to request additional quota for the following cases: -* ROSA (classic architecture) clusters require a minimum AWS EC2 service quota of 100 vCPUs to provide for cluster creation, availability, and upgrades. The default maximum value for vCPUs assigned to Running On-Demand Standard Amazon EC2 instances is `5`. Therefore if you have not created a ROSA cluster using the same AWS account previously, you must request additional EC2 quota for `Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances`. - +* ROSA clusters require a minimum AWS EC2 service quota of +ifndef::openshift-rosa-hcp[] +100{nbsp}vCPUs +endif::[] +ifdef::openshift-rosa-hcp[] +32{nbsp}vCPUs +endif::[] +to provide for cluster creation, availability, and upgrades. The default maximum value for vCPUs assigned to Running On-Demand Standard Amazon EC2 instances is `5`. Therefore if you have not created a ROSA cluster using the same AWS account previously, you must request additional EC2 quota for `Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances`. + +//TODO OSDOCS-11789 confirm number of secgroups on HCP clusters - Bala says 10, who can confirm? * Some optional cluster configuration features, such as custom security groups, might require you to request additional quota. For example, because ROSA associates 1 security group with network interfaces in worker machine pools by default, and the default quota for `Security groups per network interface` is `5`, if you want to add 5 custom security groups, you must request additional quota, because this would bring the total number of security groups on worker network interfaces to 6. [NOTE] @@ -30,27 +38,48 @@ If you need to modify or increase a specific quota, see Amazon's documentation o |ec2 |L-1216C47A |5 -|100 -| Maximum number of vCPUs assigned to the Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances. - -The default value of 5 vCPUs is not sufficient to create ROSA clusters. ROSA has a minimum requirement of 100 vCPUs for cluster creation. - +a| +ifndef::openshift-rosa-hcp[] +100 +endif::[] +ifdef::openshift-rosa-hcp[] +32 +endif::[] +|Maximum number of vCPUs assigned to the Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances. The default value of 5 vCPUs is not sufficient to create ROSA clusters. + +//gp2 is not used for HCP clusters +ifndef::openshift-rosa-hcp[] |Storage for General Purpose SSD (gp2) volume storage in TiB |ebs |L-D18FCD1D |50 |300 -| The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp2) volumes in this Region. +|The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp2) volumes in this Region. 
+endif::openshift-rosa-hcp[] +//HCP minimums assume that Prometheus/Grafana is not used |Storage for General Purpose SSD (gp3) volume storage in TiB |ebs |L-7A658B76 |50 -|300 -| The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp3) volumes in this Region. - -300 TiB of storage is the required minimum for optimal performance. - +a| +ifndef::openshift-rosa-hcp[] +300 +endif::[] +ifdef::openshift-rosa-hcp[] +:fn-hcp-storage-quota: footnote:[The default quota of 50{nbsp}TiB is more than {hcp-title} clusters require; however, because AWS cost is based on usage rather than quota, Red{nbsp}Hat recommends using the default quota.] +1{fn-hcp-storage-quota} +endif::[] +a| The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp3) volumes in this Region. +ifndef::openshift-rosa-hcp[] +300{nbsp}TiB +endif::[] +ifdef::openshift-rosa-hcp[] +1{nbsp}TiB +endif::[] +of storage is the required minimum for optimal performance. + +ifndef::openshift-rosa-hcp[] |Storage for Provisioned IOPS SSD (io1) volumes in TiB |ebs |L-FD252861 @@ -58,7 +87,8 @@ The default value of 5 vCPUs is not sufficient to create ROSA clusters. ROSA has |300 | The maximum aggregated amount of storage, in TiB, that can be provisioned across Provisioned IOPS SSD (io1) volumes in this Region. -300 TiB of storage is the required minimum for optimal performance. +300{nbsp}TiB of storage is the required minimum for optimal performance. +endif::[] |=== @@ -103,19 +133,23 @@ The default value of 5 vCPUs is not sufficient to create ROSA clusters. ROSA has |5 |The maximum number of security groups per network interface. This quota, multiplied by the quota for rules per security group, cannot exceed 1000. +ifndef::openshift-rosa-hcp[] |Snapshots per Region |ebs |L-309BACF6 |10,000 |10,000 | The maximum number of snapshots per Region +endif::[] +ifndef::openshift-rosa-hcp[] |IOPS for Provisioned IOPS SSD (Io1) volumes |ebs |L-B3A130E6 |300,000 |300,000 | The maximum aggregated number of IOPS that can be provisioned across Provisioned IOPS SDD (io1) volumes in this Region. +endif::openshift-rosa-hcp[] |Application Load Balancers per Region |elasticloadbalancing @@ -124,16 +158,20 @@ The default value of 5 vCPUs is not sufficient to create ROSA clusters. ROSA has |50 |The maximum number of Application Load Balancers that can exist in each region. +ifndef::openshift-rosa-hcp[] |Classic Load Balancers per Region |elasticloadbalancing |L-E9E9831D |20 |20 |The maximum number of Classic Load Balancers that can exist in each region. +endif::openshift-rosa-hcp[] + |=== [role="_additional-resources"] == Additional resources * link:https://aws.amazon.com/premiumsupport/knowledge-center/request-service-quota-increase-cli/[How can I request, view, and manage service quota increase requests using AWS CLI commands?] 
* link:https://docs.aws.amazon.com/ROSA/latest/userguide/service-quotas-rosa.html[ROSA service quotas] -* link:https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html[Request a quota increase] \ No newline at end of file +* link:https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html[Request a quota increase] +* link:https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-quotas.html[IAM and AWS STS quotas (AWS documentation)] \ No newline at end of file diff --git a/modules/rosa-sdpolicy-platform.adoc b/modules/rosa-sdpolicy-platform.adoc index b711fc162df9..b187871bc2f4 100644 --- a/modules/rosa-sdpolicy-platform.adoc +++ b/modules/rosa-sdpolicy-platform.adoc @@ -4,71 +4,68 @@ // * rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc // * rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc -ifeval::["{context}" == "rosa-hcp-service-definition"] -:rosa-with-hcp: -endif::[] - -:_mod-docs-content-type: ASSEMBLY +:_mod-docs-content-type: MODULE [id="rosa-sdpolicy-platform_{context}"] = Platform :productwinc: Red{nbsp}Hat OpenShift support for Windows Containers This section provides information about the service definition for the -ifdef::rosa-with-hcp[] +ifdef::openshift-rosa-hcp[] {hcp-title-first} platform. -endif::rosa-with-hcp[] -ifndef::rosa-with-hcp[] +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] {product-title} (ROSA) platform. -endif::rosa-with-hcp[] +endif::openshift-rosa-hcp[] [id="rosa-sdpolicy-autoscaling_{context}"] == Autoscaling Node autoscaling is available on -ifdef::rosa-with-hcp[] -{hcp-title-first}. -endif::rosa-with-hcp[] -ifndef::rosa-with-hcp[] +ifdef::openshift-rosa-hcp[] +{hcp-title}. +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] {product-title}. -endif::rosa-with-hcp[] +endif::openshift-rosa-hcp[] You can configure the autoscaler option to automatically scale the number of machines in a cluster. [id="rosa-sdpolicy-daemonsets_{context}"] == Daemonsets + Customers can create and run daemonsets on -ifdef::rosa-with-hcp[] -{hcp-title-first}. -endif::rosa-with-hcp[] -ifndef::rosa-with-hcp[] -{product-title}. To restrict daemonsets to only running on worker nodes, use the following `nodeSelector`: +ifdef::openshift-rosa-hcp[] +{hcp-title}. +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] +{product-title}. +endif::openshift-rosa-hcp[] +To restrict daemonsets to only running on worker nodes, use the following `nodeSelector`: + [source,yaml] ---- -... spec: nodeSelector: role: worker -... ---- -endif::rosa-with-hcp[] [id="rosa-sdpolicy-multiple-availability-zone_{context}"] == Multiple availability zone -ifdef::rosa-with-hcp[] +ifdef::openshift-rosa-hcp[] Control plane components are always deployed across multiple availability zones, regardless of a customer's worker node configuration. -endif::rosa-with-hcp[] -ifndef::rosa-with-hcp[] +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] In a multiple availability zone cluster, control plane nodes are distributed across availability zones and at least one worker node is required in each availability zone. 
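For illustration only — a sketch that is not part of the patched module — a multiple availability zone cluster is typically requested at creation time with the ROSA CLI's `--multi-az` flag, where `<cluster_name>` is a placeholder:

[source,terminal]
----
$ rosa create cluster --cluster-name=<cluster_name> --sts --mode=auto --multi-az
----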
-endif::rosa-with-hcp[] +endif::openshift-rosa-hcp[] [id="rosa-sdpolicy-node-labels_{context}"] == Node labels Custom node labels are created by Red{nbsp}Hat during node creation and cannot be changed on -ifdef::rosa-with-hcp[] -{hcp-title-first} -endif::rosa-with-hcp[] -ifndef::rosa-with-hcp[] +ifdef::openshift-rosa-hcp[] +{hcp-title} +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] {product-title} -endif::rosa-with-hcp[] +endif::openshift-rosa-hcp[] clusters at this time. However, custom labels are supported when creating new machine pools. [id="rosa-sdpolicy-backup-policy_{context}"] @@ -76,17 +73,24 @@ clusters at this time. However, custom labels are supported when creating new ma [IMPORTANT] ==== -Red Hat does not provide a backup method for ROSA clusters with STS. It is critical that customers have a backup plan for their applications and application data. +Red{nbsp}Hat does not provide a backup method for +ifndef::openshift-rosa-hcp[] +ROSA clusters that use STS. +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +{hcp-title} clusters. +endif::openshift-rosa-hcp[] +It is critical that customers have a backup plan for their applications and application data. ==== Application and application data backups are not a part of the -ifdef::rosa-with-hcp[] -{hcp-title-first} service. -endif::rosa-with-hcp[] -ifndef::rosa-with-hcp[] +ifdef::openshift-rosa-hcp[] +{hcp-title} service. +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] {product-title} service. -ifndef::rosa-with-hcp[] +ifndef::openshift-rosa-hcp[] [%collapsible] ==== @@ -121,19 +125,19 @@ The table below only applies to non-STS clusters. The following components are u |Nodes are considered to be short-term. Nothing critical should be stored on a node's root volume. |=== -endif::rosa-with-hcp[] +endif::openshift-rosa-hcp[] ==== -endif::rosa-with-hcp[] +endif::openshift-rosa-hcp[] [id="rosa-sdpolicy-openshift-version_{context}"] == OpenShift version -ifdef::rosa-with-hcp[] -{hcp-title-first} -endif::rosa-with-hcp[] -ifndef::rosa-with-hcp[] +ifdef::openshift-rosa-hcp[] +{hcp-title} +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] {product-title} -endif::rosa-with-hcp[] +endif::openshift-rosa-hcp[] is run as a service and is kept up to date with the latest OpenShift Container Platform version. Upgrade scheduling to the latest version is available. [id="rosa-sdpolicy-upgrades_{context}"] @@ -148,22 +152,22 @@ See the link:https://docs.openshift.com/rosa/rosa_policy/rosa-life-cycle.html[{p [id="rosa-sdpolicy-container-engine_{context}"] == Container engine -ifdef::rosa-with-hcp[] -{hcp-title-first} -endif::rosa-with-hcp[] -ifndef::rosa-with-hcp[] +ifdef::openshift-rosa-hcp[] +{hcp-title} +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] {product-title} -endif::rosa-with-hcp[] +endif::openshift-rosa-hcp[] runs on OpenShift 4 and uses link:https://www.redhat.com/en/blog/red-hat-openshift-container-platform-4-now-defaults-cri-o-underlying-container-engine[CRI-O] as the only available container engine. [id="rosa-sdpolicy-operating-system_{context}"] == Operating system -ifdef::rosa-with-hcp[] -{hcp-title-first} -endif::rosa-with-hcp[] -ifndef::rosa-with-hcp[] +ifdef::openshift-rosa-hcp[] +{hcp-title} +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] {product-title} -endif::rosa-with-hcp[] +endif::openshift-rosa-hcp[] runs on OpenShift 4 and uses Red{nbsp}Hat CoreOS as the operating system for all control plane and worker nodes. 
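As a hedged aside (not part of the patched module), you can confirm the operating system reported by the nodes that are visible to you by listing them with wide output; the `OS-IMAGE` column shows the Red{nbsp}Hat CoreOS version on each node:

[source,terminal]
----
$ oc get nodes -o wide
----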
[id="rosa-sdpolicy-red-hat-operator_{context}"] @@ -180,8 +184,4 @@ Red{nbsp}Hat workloads typically refer to Red{nbsp}Hat-provided Operators made a [id="rosa-sdpolicy-kubernetes-operator_{context}"] == Kubernetes Operator support -All Operators listed in the OperatorHub marketplace should be available for installation. These Operators are considered customer workloads, and are not monitored by Red{nbsp}Hat SRE. - -ifeval::["{context}" == "rosa-hcp-service-definition"] -:!rosa-with-hcp: -endif::[] \ No newline at end of file +All Operators listed in the OperatorHub marketplace should be available for installation. These Operators are considered customer workloads, and are not monitored by Red{nbsp}Hat SRE. \ No newline at end of file diff --git a/modules/rosa-sts-aws-requirements-access-req.adoc b/modules/rosa-sts-aws-requirements-access-req.adoc deleted file mode 100644 index 69247f996f54..000000000000 --- a/modules/rosa-sts-aws-requirements-access-req.adoc +++ /dev/null @@ -1,11 +0,0 @@ -// Module included in the following assemblies: -// -// * rosa_planning/rosa-sts-aws-prereqs.adoc -:_mod-docs-content-type: CONCEPT -[id="rosa-access-requirements_{context}"] -= Access requirements - -* Red{nbsp}Hat must have AWS console access to the customer-provided AWS account. Red{nbsp}Hat protects and manages this access. -* You must not use the AWS account to elevate your permissions within the {product-title} (ROSA) cluster. -* Actions available in the ROSA CLI (`rosa`) or {cluster-manager-url} console must not be directly performed in your AWS account. -* You do not need to have a preconfigured domain to deploy ROSA clusters. If you wish to use a custom domain, see the Additional resources for information. diff --git a/modules/rosa-sts-aws-requirements-account.adoc b/modules/rosa-sts-aws-requirements-account.adoc index 3039854dbe7e..fee0788772ff 100644 --- a/modules/rosa-sts-aws-requirements-account.adoc +++ b/modules/rosa-sts-aws-requirements-account.adoc @@ -3,21 +3,9 @@ // * rosa_planning/rosa-sts-aws-prereqs.adocx :_mod-docs-content-type: CONCEPT [id="rosa-account_{context}"] -= Account -* You must ensure that the AWS limits are sufficient to support {product-title} provisioned within your AWS account. Running the `rosa verify quota` command in the CLI validates that you have the required quota to run a cluster. -+ -[NOTE] -==== -Quota verification checks your AWS quota, but it does not compare your consumption to your AWS quota. See the "Limits and scalability" link in Additional resources for more information. -==== -+ -* If SCP policies are applied and enforced, these policies must not be more restrictive than the roles and policies required by the cluster. -* Your AWS account should not be transferable to Red{nbsp}Hat. -* You should not impose additional AWS usage restrictions beyond the defined roles and policies on Red{nbsp}Hat activities. Imposing restrictions will severely hinder Red{nbsp}Hat's ability to respond to incidents. -* You may deploy native AWS services within the same AWS account. -* Your account must have a service-linked role set up as it is required for Elastic Load Balancing (ELB) to be configured. See the "Creating the Elastic Load Balancing (ELB) service-linked role" link in the Additional resources for information about creating a service-linked role for your ELB if you have not created a load balancer in your AWS account previously. 
-+ -[NOTE] -==== -You are encouraged, but not required, to deploy resources in a Virtual Private Cloud (VPC) separate from the VPC hosting {product-title} and other Red{nbsp}Hat supported services. -==== += AWS account + +* Your AWS account must allow sufficient quota to deploy your cluster. +* If your organization applies and enforces SCP policies, these policies must not be more restrictive than the roles and policies required by the cluster. +* You can deploy native AWS services within the same AWS account. +* Your account must have a service-linked role to allow the installation program to configure Elastic Load Balancing (ELB). See "Creating the Elastic Load Balancing (ELB) service-linked role" for more information. \ No newline at end of file diff --git a/modules/rosa-sts-aws-requirements-association-concept.adoc b/modules/rosa-sts-aws-requirements-association-concept.adoc index edc081880475..624acd315fdf 100644 --- a/modules/rosa-sts-aws-requirements-association-concept.adoc +++ b/modules/rosa-sts-aws-requirements-association-concept.adoc @@ -6,6 +6,6 @@ [id="rosa-associating-concept_{context}"] = AWS account association -{product-title} (ROSA) cluster-provisioning tasks require linking `ocm-role` and `user-role` IAM roles to your AWS account using your Amazon Resource Name (ARN). +When you provision {product-title} (ROSA) using {cluster-manager}, you must associate the `ocm-role` and `user-role` IAM roles with your AWS account using your Amazon Resource Name (ARN). This association process is also known as _account linking_. -The `ocm-role` ARN is stored as a label in your Red{nbsp}Hat organization while the `user-role` ARN is stored as a label inside your Red{nbsp}Hat user account. Red{nbsp}Hat uses these ARN labels to confirm that the user is a valid account holder and that the correct permissions are available to perform the necessary tasks in the AWS account. +The `ocm-role` ARN is stored as a label in your Red{nbsp}Hat organization while the `user-role` ARN is stored as a label inside your Red{nbsp}Hat user account. Red{nbsp}Hat uses these ARN labels to confirm that the user is a valid account holder and that the correct permissions are available to perform provisioning tasks in the AWS account. diff --git a/modules/rosa-sts-aws-requirements-attaching-boundary-policy.adoc b/modules/rosa-sts-aws-requirements-attaching-boundary-policy.adoc index 96087f9e4879..500c73ba9931 100644 --- a/modules/rosa-sts-aws-requirements-attaching-boundary-policy.adoc +++ b/modules/rosa-sts-aws-requirements-attaching-boundary-policy.adoc @@ -7,7 +7,7 @@ = Permission boundaries for the installer role You can apply a policy as a _permissions boundary_ on an installer role. -You can use an AWS-managed policy or a customer-managed policy to set the boundary for an Amazon Web Services(AWS) Identity and Access Management (IAM) entity (user or role). The combination of policy and boundary policy limits the maximum permissions for the user or role. ROSA includes a set of three prepared permission boundary policy files, with which you can restrict permissions for the installer role since changing the installer policy itself is not supported. +You can use an AWS-managed policy or a customer-managed policy to set the boundary for an Amazon Web Services (AWS) Identity and Access Management (IAM) entity (user or role). The combination of policy and boundary policy limits the maximum permissions for the user or role. 
ROSA includes a set of three prepared permission boundary policy files, with which you can restrict permissions for the installer role since changing the installer policy itself is not supported. [NOTE] ==== diff --git a/modules/rosa-sts-aws-requirements-creating-association.adoc b/modules/rosa-sts-aws-requirements-creating-association.adoc index 2e16ee917e48..4d05cd876897 100644 --- a/modules/rosa-sts-aws-requirements-creating-association.adoc +++ b/modules/rosa-sts-aws-requirements-creating-association.adoc @@ -4,17 +4,16 @@ // * rosa_planning/rosa-sts-aws-prereqs.adoc :_mod-docs-content-type: PROCEDURE [id="rosa-associating-account_{context}"] -= Linking your AWS account += Associating your AWS account with IAM roles -You can link your AWS account to existing IAM roles by using the {product-title} (ROSA) CLI, `rosa`. +You can associate or link your AWS account with existing IAM roles by using the {product-title} (ROSA) CLI, `rosa`. .Prerequisites * You have an AWS account. -* You are using {cluster-manager-url} to create clusters. * You have the permissions required to install AWS account-wide roles. See the "Additional resources" of this section for more information. * You have installed and configured the latest AWS (`aws`) and ROSA (`rosa`) CLIs on your installation host. -* You have created your `ocm-role` and `user-role` IAM roles, but have not yet linked them to your AWS account. You can check whether your IAM roles are already linked by running the following commands: +* You have created the `ocm-role` and `user-role` IAM roles, but have not yet linked them to your AWS account. You can check whether your IAM roles are already linked by running the following commands: + [source,terminal] ---- diff --git a/modules/rosa-sts-aws-requirements-ocm.adoc b/modules/rosa-sts-aws-requirements-ocm.adoc index c429affbf157..4ada3a8b7573 100644 --- a/modules/rosa-sts-aws-requirements-ocm.adoc +++ b/modules/rosa-sts-aws-requirements-ocm.adoc @@ -6,6 +6,6 @@ [id="rosa-ocm-requirements_{context}"] = Requirements for using {cluster-manager} -The following sections describe requirements for {cluster-manager-url}. If you use the CLI tools exclusively, then you can disregard the requirements. +The following configuration details are required only if you use {cluster-manager-url} to manage your clusters. If you use the CLI tools exclusively, then you can disregard these requirements. -To use {cluster-manager}, you must link your AWS accounts. This linking concept is also known as account association. \ No newline at end of file +To use {cluster-manager}, you must link the roles in your Red{nbsp}Hat user organization and your Red{nbsp}Hat account to your AWS account. This linking concept is also known as account association. \ No newline at end of file diff --git a/modules/rosa-sts-aws-requirements-security-req.adoc b/modules/rosa-sts-aws-requirements-security-req.adoc index 9bb9e913219a..4012e28aa3ea 100644 --- a/modules/rosa-sts-aws-requirements-security-req.adoc +++ b/modules/rosa-sts-aws-requirements-security-req.adoc @@ -5,5 +5,6 @@ :_mod-docs-content-type: CONCEPT [id="rosa-security-requirements_{context}"] = Security requirements +//TODO OSDOCS-11789: Red Hat as in RHSRE? Red Hat as in RH services in the cluster? * Red{nbsp}Hat must have ingress access to EC2 hosts and the API server from allow-listed IP addresses. -* Red{nbsp}Hat must have egress allowed to the documented domains. See the "AWS firewall prerequisites" section for the designated domains. 
+* Red{nbsp}Hat must have egress allowed to the domains documented in the "Firewall prerequisites" section. diff --git a/modules/rosa-sts-byo-oidc.adoc b/modules/rosa-sts-byo-oidc.adoc index b926312c6b58..39b8ef57d22e 100644 --- a/modules/rosa-sts-byo-oidc.adoc +++ b/modules/rosa-sts-byo-oidc.adoc @@ -5,32 +5,29 @@ // * rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc // * rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc // * rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc - -ifeval::["{context}" == "rosa-hcp-sts-creating-a-cluster-quickly"] -:rosa-hcp: -endif::[] +// * rosa_planning/rosa-hcp-prepare-iam-resources.adoc :_mod-docs-content-type: PROCEDURE [id="rosa-sts-byo-oidc_{context}"] = Creating an OpenID Connect configuration When using a -ifdef::rosa-hcp[] +ifdef::openshift-rosa-hcp[] {hcp-title} cluster, you must -endif::rosa-hcp[] -ifndef::rosa-hcp[] +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] {product-title} cluster, you can -endif::rosa-hcp[] +endif::openshift-rosa-hcp[] create the OpenID Connect (OIDC) configuration prior to creating your cluster. This configuration is registered to be used with OpenShift Cluster Manager. .Prerequisites -ifdef::rosa-hcp[] +ifdef::openshift-rosa-hcp[] * You have completed the AWS prerequisites for {hcp-title}. -endif::rosa-hcp[] -ifdef::rosa-hcp[] +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] * You have completed the AWS prerequisites for {product-title}. -endif::rosa-hcp[] +endif::openshift-rosa-hcp[] * You have installed and configured the latest {product-title} (ROSA) CLI, `rosa`, on your installation host. .Procedure diff --git a/modules/rosa-sts-ocm-role-creation.adoc b/modules/rosa-sts-ocm-role-creation.adoc index d3851897092a..7f3d1551a55d 100644 --- a/modules/rosa-sts-ocm-role-creation.adoc +++ b/modules/rosa-sts-ocm-role-creation.adoc @@ -2,6 +2,7 @@ //* rosa_architecture/rosa-sts-about-iam-resources.adoc // * support/rosa-troubleshooting-iam-resources.adoc // * rosa_planning/rosa-sts-ocm-role.adoc +// * rosa_planning/rosa-hcp-prepare-iam-resources.adoc :_mod-docs-content-type: PROCEDURE [id="rosa-sts-ocm-roles-and-permissions-iam-basic-role_{context}"] = Creating an ocm-role IAM role @@ -30,7 +31,8 @@ $ rosa create ocm-role $ rosa create ocm-role --admin ---- + -This command allows you create the role by specifying specific attributes. The following example output shows the "auto mode" selected, which lets the ROSA CLI (`rosa`) create your Operator roles and policies. See "Methods of account-wide role creation" in the Additional resources for more information. +This command allows you to create the role by specifying specific attributes. The following example output shows the "auto mode" selected, which lets the ROSA CLI (`rosa`) create your Operator roles and policies. +See "Methods of account-wide role creation" for more information. 
.Example output
[source,terminal]
diff --git a/modules/rosa-sts-oidc-provider-command.adoc b/modules/rosa-sts-oidc-provider-command.adoc
index e2a072d44743..799d45b4047d 100644
--- a/modules/rosa-sts-oidc-provider-command.adoc
+++ b/modules/rosa-sts-oidc-provider-command.adoc
@@ -2,7 +2,7 @@
 //
 // * rosa_architecture/rosa-sts-about-iam-resources.adoc
 // * rosa_architecture/rosa_policy_service_definition/rosa-oidc-overview.adoc
-
+// * rosa_planning/rosa-hcp-prepare-iam-resources.adoc
 :_mod-docs-content-type: PROCEDURE
 [id="rosa-sts-oidc-provider-for-operators-aws-cli_{context}"]
 = Creating an OIDC provider using the CLI
diff --git a/modules/rosa-sts-operator-roles.adoc b/modules/rosa-sts-operator-roles.adoc
index 25433e3f4dbf..0b7273018efa 100644
--- a/modules/rosa-sts-operator-roles.adoc
+++ b/modules/rosa-sts-operator-roles.adoc
@@ -1,6 +1,7 @@
 // Module included in the following assemblies:
 //
 // * rosa_architecture/rosa-sts-about-iam-resources.adoc
+// * rosa_planning/rosa-hcp-prepare-iam-resources.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="rosa-sts-operator-roles_{context}"]
diff --git a/modules/rosa-sts-setting-up-environment.adoc b/modules/rosa-sts-setting-up-environment.adoc
index 0d6b5fcf4b13..e80b909ce78e 100644
--- a/modules/rosa-sts-setting-up-environment.adoc
+++ b/modules/rosa-sts-setting-up-environment.adoc
@@ -4,7 +4,6 @@
 
 :_mod-docs-content-type: PROCEDURE
 [id="rosa-sts-setting-up-environment_{context}"]
-
 = Setting up the environment for STS
 
 Before you create a {product-title} (ROSA) cluster that uses the AWS Security Token Service (STS), complete the following steps to set up your environment.
diff --git a/modules/rosa-sts-user-role-creation.adoc b/modules/rosa-sts-user-role-creation.adoc
index 7963336b0fd1..3f0a839f52a8 100644
--- a/modules/rosa-sts-user-role-creation.adoc
+++ b/modules/rosa-sts-user-role-creation.adoc
@@ -2,6 +2,7 @@
 //
 // * support/rosa-troubleshooting-iam-resources.adoc
 // * rosa_planning/rosa-sts-ocm-role.adoc
+// * rosa_planning/rosa-hcp-prepare-iam-resources.adoc
 :_mod-docs-content-type: PROCEDURE
 [id="rosa-sts-user-role-iam-basic-role_{context}"]
 = Creating a user-role IAM role
@@ -21,7 +22,8 @@ You can create your `user-role` IAM roles by using the command-line interface (C
 $ rosa create user-role
 ----
 +
-This command allows you create the role by specifying specific attributes. The following example output shows the "auto mode" selected, which lets the ROSA CLI (`rosa`) to create your Operator roles and policies. See "Understanding the auto and manual deployment modes" in the Additional resources for more information.
+This command allows you to create the role by specifying specific attributes. The following example output shows the "auto mode" selected, which lets the ROSA CLI (`rosa`) create your Operator roles and policies.
+See "Understanding the auto and manual deployment modes" for more information.
 
 .Example output
 [source,terminal]
diff --git a/modules/sd-hcp-planning-cluster-maximums.adoc b/modules/sd-hcp-planning-cluster-maximums.adoc
index b8fd076b159e..f7483d71d41b 100644
--- a/modules/sd-hcp-planning-cluster-maximums.adoc
+++ b/modules/sd-hcp-planning-cluster-maximums.adoc
@@ -10,10 +10,6 @@
 Consider the following tested object maximums when you plan a {hcp-title-first} cluster.
 
 These guidelines are based on a cluster of 500 compute (also known as worker) nodes. For smaller clusters, the maximums are lower.
-[NOTE] -==== -Customers running {hcp-title} 4.14.x and 4.15.x clusters require a minimum z-stream version of 4.14.28 or 4.15.15 and greater to scale to 500 worker nodes. For earlier versions, the maximum is 90 worker nodes. -==== .Tested cluster maximums [options="header",cols="50,50"] diff --git a/modules/sre-cluster-access.adoc b/modules/sre-cluster-access.adoc index 7e354b9ba040..af33ee9683c0 100644 --- a/modules/sre-cluster-access.adoc +++ b/modules/sre-cluster-access.adoc @@ -82,7 +82,7 @@ When SREs are on a VPN through two-factor authentication, they and Red Hat Suppo All activities performed by SREs arrive from Red Hat IP addresses and are logged to CloudTrail to allow you to audit and review all activity. This role is only used in cases where access to AWS services is required to assist you. The majority of permissions are read-only. However, a select few permissions have more access, including the ability to reboot an instance or spin up a new instance. SRE access is limited to the policy permissions attached to the `ManagedOpenShift-Support-Role`. -For a full list of permissions, see sts_support_permission_policy.json in the link:https://docs.openshift.com/rosa/rosa_architecture/rosa-sts-about-iam-resources.html[About IAM resources for ROSA clusters that use STS] user guide. +For a full list of permissions, see `sts_support_permission_policy.json` in the link:https://docs.openshift.com/rosa/rosa_architecture/rosa-sts-about-iam-resources.html[About IAM resources] user guide. [id="rosa-sre-access-privatelink-vpc.adoc_{context}"] == SRE access through PrivateLink VPC endpoint service diff --git a/nodes/index.adoc b/nodes/index.adoc index c1d269280284..c9dd4335a78e 100644 --- a/nodes/index.adoc +++ b/nodes/index.adoc @@ -50,7 +50,7 @@ The read operations allow an administrator or a developer to get information abo * Get information about a node, such as memory and CPU usage, health, status, and age. * xref:../nodes/nodes/nodes-nodes-viewing.adoc#nodes-nodes-viewing-listing-pods_nodes-nodes-viewing[List pods running on a node]. -ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifndef::openshift-enterprise,openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [discrete] === Management operations @@ -64,7 +64,7 @@ through several tasks: * xref:../nodes/nodes/nodes-nodes-managing-max-pods.adoc#nodes-nodes-managing-max-pods-proc_nodes-nodes-managing-max-pods[Configure the number of pods that can run on a node] based on the number of processor cores on the node, a hard limit, or both. * Reboot a node gracefully using xref:../nodes/nodes/nodes-nodes-rebooting.adoc#nodes-nodes-rebooting-affinity_nodes-nodes-rebooting[pod anti-affinity]. * xref:../nodes/nodes/nodes-nodes-working.adoc#deleting-nodes[Delete a node from a cluster] by scaling down the cluster using a compute machine set. To delete a node from a bare-metal cluster, you must first drain all pods on the node and then manually delete the node. -endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +endif::openshift-enterprise,openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [discrete] === Enhancement operations @@ -72,7 +72,6 @@ endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] {product-title} allows you to do more than just access and manage nodes; as an administrator, you can perform the following tasks on nodes to make the cluster more efficient, application-friendly, and to provide a better environment for your developers. 
* Manage node-level tuning for high-performance applications that require some level of kernel tuning by xref:../nodes/nodes/nodes-node-tuning-operator.adoc#nodes-node-tuning-operator[using the Node Tuning Operator]. -* xref:../nodes/jobs/nodes-pods-daemonsets.adoc#nodes-pods-daemonsets[Run background tasks on nodes automatically with daemon sets]. You can create and use daemon sets to create shared storage, run a logging pod on every node, or deploy a monitoring agent on all nodes. ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * Enable TLS security profiles on the node to protect communication between the kubelet and the Kubernetes API server. endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] @@ -96,6 +95,7 @@ As an administrator, you can get information about pods in a project through the * xref:../nodes/pods/nodes-pods-viewing.adoc#nodes-pods-viewing-project_nodes-pods-viewing[List pods associated with a project], including information such as the number of replicas and restarts, current status, and age. * xref:../nodes/pods/nodes-pods-viewing.adoc#nodes-pods-viewing-usage_nodes-pods-viewing[View pod usage statistics] such as CPU, memory, and storage consumption. + [discrete] === Management operations @@ -145,7 +145,6 @@ As a developer, use a vertical pod autoscaler to ensure your pods stay up during |Administrator |Some applications need sensitive information, such as passwords and usernames. You can use the `Secret` object to provide such information to an application pod. - |=== endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] @@ -218,12 +217,12 @@ garbage collection:: The process of cleaning up cluster resources, such as terminated containers and images that are not referenced by any running pods. //cannot create the required namespace for these operators -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [discrete] [id="commonterms-node-hpa"] Horizontal Pod Autoscaler(HPA):: Implemented as a Kubernetes API resource and a controller. You can use the HPA to specify the minimum and maximum number of pods that you want to run. You can also specify the CPU or memory utilization that your pods should target. The HPA scales out and scales in pods when a given CPU or memory threshold is crossed. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [discrete] [id="commonterms-node-ingress"] @@ -269,4 +268,4 @@ Indicates that the pod is allowed (but not required) to be scheduled on nodes or [discrete] [id="commonterms-node-taint"] Taint:: -A core object that comprises a key,value, and effect. Taints and tolerations work together to ensure that pods are not scheduled on irrelevant nodes. +A core object that comprises a key, value, and effect. Taints and tolerations work together to ensure that pods are not scheduled on irrelevant nodes. \ No newline at end of file diff --git a/ocm/ocm-overview.adoc b/ocm/ocm-overview.adoc index 8f3dd247374f..39630927d5dd 100644 --- a/ocm/ocm-overview.adoc +++ b/ocm/ocm-overview.adoc @@ -69,7 +69,7 @@ include::modules/ocm-settings-tab.adoc[leveloffset=+2] == Additional resources * For the complete documentation for {cluster-manager}, see link:https://access.redhat.com/documentation/en-us/openshift_cluster_manager/2022/html-single/managing_clusters/index[{cluster-manager} documentation]. 
-ifdef::openshift-rosa,openshift-rosa-hcp,openshift-rosa-classic[] +ifdef::openshift-rosa,openshift-rosa-hcp[] * For steps to add cluster notification contacts, see xref:../rosa_cluster_admin/rosa-cluster-notifications.adoc#add-notification-contact_rosa-cluster-notifications[Adding cluster notification contacts] endif::[] ifdef::openshift-dedicated[] diff --git a/rosa_architecture/about-hcp.adoc b/rosa_architecture/about-hcp.adoc index cf18ed19b9ee..3e035a66f4de 100644 --- a/rosa_architecture/about-hcp.adoc +++ b/rosa_architecture/about-hcp.adoc @@ -89,7 +89,7 @@ ifdef::openshift-rosa-hcp[] link:https://docs.openshift.com/rosa/rosa_architecture/rosa_policy_service_definition/rosa-policy-process-security.html#rosa-policy-process-security[Understanding process and security] endif::openshift-rosa-hcp[] ifndef::openshift-rosa-hcp[] -xref:../rosa_architecture/rosa_policy_service_definition/rosa-policy-process-security.adoc#rosa-policy-process-security[Understanding process and security] +xref:../../rosa_architecture/rosa_policy_service_definition/rosa-policy-process-security.adoc#rosa-policy-process-security[Understanding process and security] endif::openshift-rosa-hcp[] | xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc#rosa-hcp-service-definition[{hcp-title} service definition] @@ -148,6 +148,15 @@ endif::openshift-rosa-hcp[] | link:https://learn.openshift.com/?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[OpenShift Interactive Learning Portal] | xref:../storage/index.adoc#storage-overview[Storage] +| +ifdef::openshift-rosa-hcp[] +link:https://docs.openshift.com/rosa/observability/monitoring/monitoring-overview.html#monitoring-overview_virt-monitoring-overview[Monitoring overview] +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] +xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overview_virt-monitoring-overview[Monitoring overview] +endif::openshift-rosa-hcp[] +| +xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-life-cycle.adoc#rosa-hcp-life-cycle[{hcp-title} life cycle] | ifdef::openshift-rosa-hcp[] link:https://docs.openshift.com/rosa/rosa_architecture/rosa_policy_service_definition/rosa-policy-responsibility-matrix.html#rosa-policy-responsibility-matrix[ROSA responsibility matrix] diff --git a/rosa_architecture/cloud-experts-rosa-hcp-sts-explained.adoc b/rosa_architecture/cloud-experts-rosa-hcp-sts-explained.adoc index 83d134e93ed9..f4ef23ebc129 100644 --- a/rosa_architecture/cloud-experts-rosa-hcp-sts-explained.adoc +++ b/rosa_architecture/cloud-experts-rosa-hcp-sts-explained.adoc @@ -37,13 +37,28 @@ Security features for AWS STS include: [id="components-specific-to-rosa-hcp-with-sts"] == Components of {hcp-title} -* *AWS infrastructure* - The infrastructure required for the cluster including the Amazon EC2 instances, Amazon EBS storage, and networking components. See link:https://docs.openshift/rosa/rosa_architecture/rosa_policy_service_definition/rosa-service-definition.html#rosa-sdpolicy-aws-compute-types_rosa-service-definition[AWS compute types] to see the supported instance types for compute nodes and link:https://docs.openshift/rosa/rosa_planning/rosa-sts-aws-prereqs.html#rosa-ec2-instances_rosa-sts-aws-prereqs[provisioned AWS infrastructure] for more information on cloud resource configuration. +* *AWS infrastructure* - The infrastructure required for the cluster including the Amazon EC2 instances, Amazon EBS storage, and networking components. 
See xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc#rosa-sdpolicy-instance-types_rosa-hcp-service-definition[AWS compute types] to see the supported instance types for compute nodes and xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-ec2-instances_rosa-sts-aws-prereqs[provisioned AWS infrastructure] for more information on cloud resource configuration.
 * *AWS STS* - A method for granting short-term, dynamic tokens to provide users the necessary permissions to temporarily interact with your AWS account resources.
 * *OpenID Connect (OIDC)* - A mechanism for cluster Operators to authenticate with AWS, assume the cluster roles through a trust policy, and obtain temporary credentials from AWS IAM STS to make the required API calls.
 * *Roles and policies* - The roles and policies used by {hcp-title} can be divided into account-wide roles and policies and Operator roles and policies.
 +
-The policies determine the allowed actions for each of the roles. See link:https://docs.openshift/rosa/rosa_architecture/rosa-sts-about-iam-resources.html#rosa-sts-about-iam-resources[About IAM resources for ROSA clusters that use STS] for more details about the individual roles and policies and link:https://docs.openshift/rosa/rosa_planning/rosa-sts-ocm-role.html#rosa-sts-ocm-role[ROSA IAM role resource] for more details about trust policies.
+The policies determine the allowed actions for each of the roles.
+ifdef::openshift-rosa[]
+See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] for more details about the individual roles and policies. See xref:../rosa_planning/rosa-sts-ocm-role.adoc#rosa-sts-ocm-role[ROSA IAM role resource] for more details about trust policies.
+endif::openshift-rosa[]
+ifdef::openshift-rosa-hcp[]
+See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-hcp-about-iam-resources[About IAM resources] for more details about the individual roles and policies. See xref:../rosa_planning/rosa-hcp-prepare-iam-roles-resources.adoc#rosa-hcp-prepare-iam-roles-resources[Required IAM roles and resources] for more details on preparing these resources in your cluster.
+endif::openshift-rosa-hcp[]
+
+--
+** The following account-wide roles are required:
+
+*** `<prefix>-HCP-ROSA-Worker-Role`
+*** `<prefix>-HCP-ROSA-Support-Role`
+*** `<prefix>-HCP-ROSA-Installer-Role`
+
+** The following account-wide AWS-managed policies are required:
+
 *** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAInstallerPolicy.html[ROSAInstallerPolicy]
 *** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAWorkerInstancePolicy.html[ROSAWorkerInstancePolicy]
 *** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSASRESupportPolicy.html[ROSASRESupportPolicy]
@@ -56,6 +71,7 @@ The policies determine the allowed actions for each of the roles. See link:https
 *** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAKubeControllerPolicy.html[ROSAKubeControllerPolicy]
 *** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAManageSubscription.html[ROSAManageSubscription]
 *** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSANodePoolManagementPolicy.html[ROSANodePoolManagementPolicy]
+--
+
 [NOTE]
 ====
 Certain policies are used by the cluster Operator roles, listed below.
The Operator roles are created in a second step because they are dependent on an existing cluster name and cannot be created at the same time as the account-wide roles.
 ====
+
 ** The Operator roles are:
-+
+
 *** <operator_role_prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials
 *** <operator_role_prefix>-openshift-cloud-network-config-controller-cloud-credentials
 *** <operator_role_prefix>-openshift-machine-api-aws-cloud-credentials
diff --git a/rosa_architecture/rosa-sts-about-iam-resources.adoc b/rosa_architecture/rosa-sts-about-iam-resources.adoc
index c36e6b7b7959..4e43c336d01a 100644
--- a/rosa_architecture/rosa-sts-about-iam-resources.adoc
+++ b/rosa_architecture/rosa-sts-about-iam-resources.adoc
@@ -1,41 +1,72 @@
 :_mod-docs-content-type: ASSEMBLY
+// This assembly is the target of a symbolic link.
+// The symbolic link at openshift-docs/rosa_planning/rosa-hcp-iam-resources.adoc
+// points to this file's real location, which is
+// openshift-docs/rosa_architecture/rosa-sts-about-iam-resources.adoc
+ifndef::openshift-rosa-hcp[]
 [id="rosa-sts-about-iam-resources"]
-= About IAM resources for ROSA clusters that use STS
+= Required IAM resources for STS clusters
 include::_attributes/attributes-openshift-dedicated.adoc[]
 :context: rosa-sts-about-iam-resources
+endif::openshift-rosa-hcp[]
+ifdef::openshift-rosa-hcp[]
+[id="rosa-hcp-about-iam-resources"]
+= Required IAM resources
+include::_attributes/attributes-openshift-dedicated.adoc[]
+:context: rosa-sts-about-iam-resources
+endif::openshift-rosa-hcp[]
 
 toc::[]
 
-To deploy a {product-title} (ROSA) cluster that uses the AWS Security Token Service (STS), you must create the following AWS Identity Access Management (IAM) resources:
-
-* Specific account-wide IAM roles and policies that provide the STS permissions required for ROSA support, installation, control plane, and compute functionality. This includes account-wide Operator policies.
+ifndef::openshift-rosa-hcp[]
+To deploy a {product-title} (ROSA) cluster that uses the AWS Security Token Service (STS),
+endif::openshift-rosa-hcp[]
+ifdef::openshift-rosa-hcp[]
+{hcp-title-first} uses the AWS Security Token Service (STS) to provide temporary, limited-permission credentials for your cluster. This means that before you deploy your cluster,
+endif::openshift-rosa-hcp[]
+you must create the following AWS Identity and Access Management (IAM) resources:
+
+* Specific account-wide IAM roles and policies that provide the STS permissions required for ROSA support, installation,
+ifndef::openshift-rosa-hcp[]
+control plane,
+endif::openshift-rosa-hcp[]
+and compute functionality. This includes account-wide Operator policies.
 * Cluster-specific Operator IAM roles that permit the ROSA cluster Operators to carry out core OpenShift functionality.
 * An OpenID Connect (OIDC) provider that the cluster Operators use to authenticate.
-* If you deploy ROSA by using {cluster-manager}, you must create the additional resources:
+* If you deploy and manage your cluster using {cluster-manager}, you must create the following additional resources:
 ** An {cluster-manager} IAM role to complete the installation on your cluster.
 ** A user role without any permissions to verify your AWS account identity.
 
-This document provides reference information about the IAM resources that you must deploy when you create a ROSA cluster that uses STS. It also includes the `aws` CLI commands that are generated when you use `manual` mode with the `rosa create` command.
+This document provides reference information about the IAM resources that you must deploy
+ifdef::openshift-rosa[]
+when you create a ROSA cluster that uses STS.
+endif::openshift-rosa[]
+ifdef::openshift-rosa-hcp[]
+when you create a {hcp-title} cluster.
+endif::openshift-rosa-hcp[] +It also includes the `aws` CLI commands that are generated when you use `manual` mode with the `rosa create` command. [role="_additional-resources"] .Additional resources - -* For steps to quickly create a ROSA cluster with STS, including the AWS IAM resources, see xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Creating a ROSA cluster with STS using the default options]. -* For steps to create a ROSA cluster with STS using customizations, including the AWS IAM resources, see xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[Creating a ROSA cluster with STS using customizations]. +ifndef::openshift-rosa-hcp[] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Creating a ROSA cluster with STS using the default options] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[Creating a ROSA cluster with STS using customizations] +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +* xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Creating a {hcp-title} cluster quickly] +endif::openshift-rosa-hcp[] [id="rosa-sts-ocm-roles-and-permissions_{context}"] == {cluster-manager} roles and permissions -If you create ROSA clusters by using {cluster-manager-url}, you must have the following AWS IAM roles linked to your AWS account to create and manage the clusters. For more information about linking your IAM roles to your AWS account, see xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-associating-account_rosa-sts-aws-prereqs[Associating your AWS account]. - -[TIP] -==== -If you only use the ROSA CLI (`rosa`), then you do not need to create these IAM roles. -==== +If you create ROSA clusters by using {cluster-manager-url}, you must have the following AWS IAM roles linked to your AWS account to create and manage the clusters. +ifndef::openshift-rosa-hcp[] +For more information about linking your IAM roles to your AWS account, see xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-associating-account_rosa-sts-aws-prereqs[Associating your AWS account]. +endif::openshift-rosa-hcp[] These AWS IAM roles are as follows: -* The ROSA user role is an AWS role used by Red{nbsp}Hat to verify the customer's AWS identity. This role has no additional permissions, and the role has a trust relationship with the Red{nbsp}Hat installer account. +* The ROSA user role (`user-role`) is an AWS role used by Red{nbsp}Hat to verify the customer's AWS identity. This role has no additional permissions, and the role has a trust relationship with the Red{nbsp}Hat installer account. * An `ocm-role` resource grants the required permissions for installation of ROSA clusters in {cluster-manager}. You can apply basic or administrative permissions to the `ocm-role` resource. If you create an administrative `ocm-role` resource, {cluster-manager} can create the needed AWS Operator roles and OpenID Connect (OIDC) provider. This IAM role also creates a trust relationship with the Red{nbsp}Hat installer account as well. 
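As a sketch of how these two roles are typically inspected and created with the ROSA CLI (`rosa`) — shown here for orientation only, with `auto` mode assumed:

[source,terminal]
----
$ rosa list ocm-role <1>
$ rosa list user-role
$ rosa create ocm-role --mode auto <2>
$ rosa create user-role --mode auto
----
<1> The `list` commands show any existing role and whether it is already linked.
<2> In `auto` mode, `rosa` creates and links the role for you; `manual` mode prints the equivalent `aws` commands instead.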
+ [NOTE] @@ -54,7 +85,10 @@ include::modules/rosa-sts-understanding-ocm-role.adoc[leveloffset=+2] [discrete] include::modules/rosa-sts-ocm-role-creation.adoc[leveloffset=+2] -AWS IAM roles link to your AWS account to create and manage the clusters. For more information about linking your IAM roles to your AWS account, see xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-associating-account_rosa-sts-aws-prereqs[Associating your AWS account]. +AWS IAM roles link to your AWS account to create and manage the clusters. +ifndef::openshift-rosa-hcp[] +For more information about linking your IAM roles to your AWS account, see xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-associating-account_rosa-sts-aws-prereqs[Associating your AWS account]. +endif::openshift-rosa-hcp[] [role="_additional-resources"] [id="additional-resources_about-iam-resources_{context}"] @@ -63,17 +97,17 @@ AWS IAM roles link to your AWS account to create and manage the clusters. For mo * link:https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_Types.html[Amazon Elastic Computer Cloud Data Types] * link:https://docs.aws.amazon.com/STS/latest/APIReference/API_Types.html[AWS Token Security Service Data Types] * xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies-creation-methods_rosa-sts-about-iam-resources[Methods of account-wide role creation] -// -// Keep this commented out until PR # 45306 is merged -// -//* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[Understanding the auto and manual deployment modes] include::modules/rosa-sts-account-wide-roles-and-policies.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources - -* For a definition of OpenShift major, minor, and patch versions, see xref:../rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.adoc#rosa-life-cycle-definitions_rosa-life-cycle[the {product-title} update life cycle]. +ifdef::openshift-rosa[] +* xref:../rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.adoc#rosa-life-cycle-definitions_rosa-life-cycle[{product-title} update life cycle] +endif::[] +ifdef::openshift-rosa-hcp[] +* xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-life-cycle.adoc#rosa-hcp-life-cycle[{hcp-title} update life cycle] +endif::openshift-rosa-hcp[] include::modules/rosa-sts-account-wide-role-and-policy-commands.adoc[leveloffset=+2] include::modules/rosa-sts-aws-requirements-attaching-boundary-policy.adoc[leveloffset=+1] @@ -81,16 +115,24 @@ include::modules/rosa-sts-aws-requirements-attaching-boundary-policy.adoc[levelo [role="_additional-resources"] .Additional resources -* For more information, see link:https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html[Permissions boundaries for IAM entities] (AWS documentation). -* For more information about creating the required account-wide STS roles and policies see xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-account-wide-sts-roles-and-policies_rosa-sts-creating-a-cluster-quickly[Creating the account-wide STS roles and policies]. 
+* link:https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html[Permissions boundaries for IAM entities] (AWS documentation) +ifdef::openshift-rosa[] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-account-wide-sts-roles-and-policies_rosa-sts-creating-a-cluster-quickly[Creating the account-wide STS roles and policies] +endif::openshift-rosa[] +ifdef::openshift-rosa-hcp[] +* xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-account-wide-sts-roles-and-policies_rosa-hcp-sts-creating-a-cluster-quickly[Creating account-wide roles and policies] +endif::openshift-rosa-hcp[] include::modules/rosa-sts-operator-roles.adoc[leveloffset=+1] include::modules/rosa-sts-operator-role-commands.adoc[leveloffset=+2] include::modules/rosa-sts-about-operator-role-prefixes.adoc[leveloffset=+2] +ifdef::openshift-rosa[] [role="_additional-resources"] .Additional resources - For steps to create the cluster-specific Operator IAM roles using a custom prefix, see xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-cli_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations using the CLI] or xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-ocm_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations by using {cluster-manager}]. +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-cli_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations using the CLI] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-ocm_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations by using {cluster-manager}] +endif::openshift-rosa[] [id="rosa-sts-oidc-provider-requirements-for-operators_{context}"] == Open ID Connect (OIDC) requirements for Operator authentication diff --git a/rosa_architecture/rosa-understanding.adoc b/rosa_architecture/rosa-understanding.adoc index 0d4c63888b31..663c87307350 100644 --- a/rosa_architecture/rosa-understanding.adoc +++ b/rosa_architecture/rosa-understanding.adoc @@ -60,6 +60,7 @@ To get started with deploying your cluster, ensure your AWS account has met the == Additional resources * xref:../ocm/ocm-overview.adoc#ocm-overview[OpenShift Cluster Manager] +//* xref ../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] * xref:../rosa_getting_started/rosa-getting-started.adoc#rosa-getting-started[Getting started with {product-title}] * link:https://aws.amazon.com/rosa/pricing/[AWS pricing page] diff --git a/rosa_architecture/rosa_policy_service_definition/rosa-hcp-instance-types.adoc b/rosa_architecture/rosa_policy_service_definition/rosa-hcp-instance-types.adoc index 79685a49b51e..d62b811f68b4 100644 --- a/rosa_architecture/rosa_policy_service_definition/rosa-hcp-instance-types.adoc +++ b/rosa_architecture/rosa_policy_service_definition/rosa-hcp-instance-types.adoc @@ -7,9 +7,10 @@ include::_attributes/attributes-openshift-dedicated.adoc[] toc::[] {hcp-title} offers the following worker node instance types and sizes: +//TODO OSDOCS-11789: Confirm this [NOTE] ==== -Currently, 
{hcp-title} supports a maximum of 250 worker nodes. +Currently, {hcp-title} supports a maximum of 500 worker nodes. ==== include::modules/rosa-sdpolicy-am-aws-compute-types.adoc[leveloffset=+1] diff --git a/rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc b/rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc index 2e74983e0097..241126dc28fb 100644 --- a/rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc +++ b/rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc @@ -26,7 +26,7 @@ include::modules/rosa-sdpolicy-instance-types.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources -* xref:../rosa_policy_service_definition/rosa-hcp-instance-types.adoc#rosa-hcp-instance-types[{hcp-title} instance types] +For a detailed listing of supported instance types, see xref:../rosa_policy_service_definition/rosa-hcp-instance-types.adoc#rosa-hcp-instance-types[{hcp-title} instance types]. include::modules/rosa-sdpolicy-am-regions-az.adoc[leveloffset=+2] @@ -66,8 +66,4 @@ ifdef::openshift-rosa-hcp[] * link:https://docs.openshift.com/rosa/rosa_architecture/rosa_policy_service_definition/rosa-policy-process-security.html[Understanding security for ROSA] endif::openshift-rosa-hcp[] -ifndef::openshift-rosa-hcp[] -* xref:../rosa_policy_service_definition/rosa-policy-process-security.adoc#rosa-policy-process-security[Understanding security for ROSA] -endif::openshift-rosa-hcp[] - -* xref:../rosa_policy_service_definition/rosa-hcp-life-cycle.adoc#rosa-hcp-life-cycle[ROSA life cycle] \ No newline at end of file +* See xref:../rosa_policy_service_definition/rosa-hcp-life-cycle.adoc#rosa-hcp-life-cycle[ROSA life cycle] diff --git a/rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc b/rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc index a5bbdb0f7bc5..f7f96c0e766a 100644 --- a/rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc +++ b/rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc @@ -19,6 +19,13 @@ include::modules/rosa-sdpolicy-am-cluster-self-service.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources +ifdef::openshift-rosa-hcp[] +* xref:../../rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc#rosa-sdpolicy-red-hat-operator_rosa-hcp-service-definition[Red{nbsp}Hat Operator Support] +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] +* xref:../../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-red-hat-operator_rosa-service-definition[Red{nbsp}Hat Operator Support] +endif::openshift-rosa-hcp[] + ifdef::openshift-rosa-hcp[] * link:https://docs.openshift.com/rosa/rosa_cluster_admin/rosa-configuring-pid-limits.html#rosa-configuring-pid-limits[Configuring PID limits] endif::openshift-rosa-hcp[] @@ -35,9 +42,7 @@ ifdef::openshift-rosa-hcp[] * xref:../rosa_policy_service_definition/rosa-hcp-instance-types.adoc#rosa-instance-types[{product-title} instance types]. endif::openshift-rosa-hcp[] ifndef::openshift-rosa-hcp[] -* xref:../rosa_policy_service_definition/rosa-instance-types.adoc#rosa-instance-types[{product-title} instance types] - -* xref:../../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[Limits and scalability] +xref:../rosa_policy_service_definition/rosa-instance-types.adoc#rosa-instance-types[{product-title} instance types]. 
endif::openshift-rosa-hcp[] include::modules/rosa-sdpolicy-am-regions-az.adoc[leveloffset=+2] diff --git a/rosa_architecture/rosa_policy_service_definition/rosa-sre-access.adoc b/rosa_architecture/rosa_policy_service_definition/rosa-sre-access.adoc index def40f6e3e24..9948eb0dbf18 100644 --- a/rosa_architecture/rosa_policy_service_definition/rosa-sre-access.adoc +++ b/rosa_architecture/rosa_policy_service_definition/rosa-sre-access.adoc @@ -29,8 +29,6 @@ include::modules/how-service-accounts-assume-aws-iam-roles-in-sre-owned-projects [role="_additional-resources"] .Additional resources -* link:https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/introduction_to_rosa/rosa-sts-about-iam-resources#rosa-sts-operator-roles_rosa-sts-about-iam-resources[Cluster-specific Operator IAM role reference]. +* For more information about the AWS IAM roles used by the cluster Operators, see xref:../../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-operator-roles_rosa-sts-about-iam-resources[Cluster-specific Operator IAM role reference]. -* See policies and permissions that the cluster Operators require, link:https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/introduction_to_rosa/rosa-sts-about-iam-resources#rosa-sts-account-wide-roles-and-policies-creation-methods_rosa-sts-about-iam-resources[Methods of account-wide role creation]. - -* link:https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/support/approved-access[Approved Access]. +* For more information about the policies and permissions that the cluster Operators require, see xref:../../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies-creation-methods_rosa-sts-about-iam-resources[Methods of account-wide role creation]. diff --git a/rosa_cluster_admin/rosa-cluster-notifications.adoc b/rosa_cluster_admin/rosa-cluster-notifications.adoc index fe2dbde3f948..a07bc3d3f187 100644 --- a/rosa_cluster_admin/rosa-cluster-notifications.adoc +++ b/rosa_cluster_admin/rosa-cluster-notifications.adoc @@ -12,6 +12,10 @@ ifdef::openshift-rosa,openshift-rosa-hcp[] [role="_additional-resources"] == Additional resources // TODO: Add this xref to ARO HCP. 
+// TODO OSDOCS-11789: Confirm plan for responsibility matrix for HCP
+ifdef::openshift-rosa-hcp[]
+* link:https://docs.openshift.com/rosa/rosa_architecture/rosa_policy_service_definition/rosa-policy-responsibility-matrix.html#notifications_rosa-policy-responsibility-matrix[Customer responsibilities: Review and action cluster notifications]
+endif::openshift-rosa-hcp[]
 ifndef::openshift-rosa-hcp[]
 * xref:../rosa_architecture/rosa_policy_service_definition/rosa-policy-responsibility-matrix.adoc#notifications_rosa-policy-responsibility-matrix[Customer responsibilities: Review and action cluster notifications]
 endif::openshift-rosa-hcp[]
diff --git a/rosa_hcp/rosa-hcp-deleting-cluster.adoc b/rosa_hcp/rosa-hcp-deleting-cluster.adoc
index 80ea903d2744..21a91e595cb7 100644
--- a/rosa_hcp/rosa-hcp-deleting-cluster.adoc
+++ b/rosa_hcp/rosa-hcp-deleting-cluster.adoc
@@ -11,23 +11,25 @@ If you want to delete a {hcp-title-first} cluster, you can use either the {clust
 include::modules/rosa-hcp-deleting-cluster.adoc[leveloffset=+1]
 
 .Troubleshooting
+ifdef::openshift-rosa[]
 * If the cluster cannot be deleted because of missing IAM roles, see xref:../support/troubleshooting/rosa-troubleshooting-deployments.adoc#rosa-troubleshooting-cluster-deletion_rosa-troubleshooting-cluster-deployments[Repairing a cluster that cannot be deleted].
-* If the cluster cannot be deleted for other reasons:
-** Ensure that there are no add-ons for your cluster pending in the link:https://console.redhat.com/openshift[Hybrid Cloud Console].
-** Ensure that all AWS resources and dependencies have been deleted in the Amazon Web Console.
+endif::openshift-rosa[]
+* Ensure that there are no add-ons for your cluster pending in the link:https://console.redhat.com/openshift[Hybrid Cloud Console].
+* Ensure that all AWS resources and dependencies have been deleted in the Amazon Web Console.
 
 include::modules/rosa-deleting-sts-iam-resources-account-wide.adoc[leveloffset=+1]
 
+ifdef::openshift-rosa[]
 [role="_additional-resources"]
 .Additional resources
-
 * xref:../support/troubleshooting/rosa-troubleshooting-deployments.adoc#rosa-troubleshooting-cluster-deletion_rosa-troubleshooting-cluster-deployments[Repairing a cluster that cannot be deleted]
+endif::openshift-rosa[]
 
 include::modules/rosa-deleting-account-wide-iam-roles-and-policies.adoc[leveloffset=+2]
 
 [role="_additional-resources"]
 .Additional resources
 
-* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources for ROSA clusters that use STS]
+* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources]
 
 include::modules/rosa-unlinking-and-deleting-ocm-and-user-iam-roles.adoc[leveloffset=+2] \ No newline at end of file
diff --git a/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc b/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc
index ff12e72e0091..3144153f4c37 100644
--- a/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc
+++ b/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc
@@ -57,6 +57,7 @@ endif::openshift-rosa-hcp[]
 
 include::modules/rosa-sts-overview-of-the-default-cluster-specifications.adoc[leveloffset=+1]
 
+//TODO OSDOCS-11789: Move these out of the deployment doc and into the prepare doc? Keep in both locations?
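For orientation only — a hedged sketch rather than the documented procedure, because the required flags depend on your network and IAM setup — an {hcp-title} cluster creation command generally has this shape, with every angle-bracketed value a placeholder:

[source,terminal]
----
$ rosa create cluster --cluster-name=<cluster_name> --sts --mode=auto \
  --hosted-cp --operator-roles-prefix=<prefix> \
  --oidc-config-id=<oidc_config_id> --subnet-ids=<subnet_ids>
----

The `--hosted-cp` flag selects the hosted control plane architecture; the OIDC configuration and the subnets must exist before you run the command.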
[id="rosa-hcp-prereqs"] == {hcp-title} Prerequisites diff --git a/rosa_install_access_delete_clusters/rosa-shared-vpc-config.adoc b/rosa_install_access_delete_clusters/rosa-shared-vpc-config.adoc index a101f0560f62..ac35a7fe7cf6 100644 --- a/rosa_install_access_delete_clusters/rosa-shared-vpc-config.adoc +++ b/rosa_install_access_delete_clusters/rosa-shared-vpc-config.adoc @@ -31,7 +31,14 @@ image::372_OpenShift_on_AWS_persona_worflows_0923_all.png[] .Prerequisites for the *Cluster Creator* * You installed the link:https://console.redhat.com/openshift/downloads#tool-rosa[ROSA CLI (`rosa`)] 1.2.26 or later. -* You created all of the required xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-account-wide-sts-roles-and-policies_rosa-sts-creating-a-cluster-quickly[ROSA account roles] for creating a cluster. +* You created all of the required +ifdef::openshift-rosa[] +xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-account-wide-sts-roles-and-policies_rosa-sts-creating-a-cluster-quickly[account-wide roles and policies] +endif::openshift-rosa[] +ifdef::openshift-rosa-hcp[] +* xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-account-wide-sts-roles-and-policies_rosa-hcp-sts-creating-a-cluster-quickly[account-wide roles and policies] +endif::openshift-rosa-hcp[] +for creating a cluster. * The *Cluster Creator's* AWS account is separate from the *VPC Owner's* AWS account. * Both AWS accounts belong to the same AWS organization. diff --git a/rosa_install_access_delete_clusters/rosa-sts-deleting-cluster.adoc b/rosa_install_access_delete_clusters/rosa-sts-deleting-cluster.adoc index 477554d4d429..54ca0e5ab06f 100644 --- a/rosa_install_access_delete_clusters/rosa-sts-deleting-cluster.adoc +++ b/rosa_install_access_delete_clusters/rosa-sts-deleting-cluster.adoc @@ -41,5 +41,5 @@ include::modules/rosa-unlinking-and-deleting-ocm-and-user-iam-roles.adoc[levelof == Additional resources * For information about the cluster delete protection feature, see xref:../cli_reference/rosa_cli/rosa-manage-objects-cli.adoc#rosa-edit-objects_rosa-managing-objects-cli[Edit objects]. -* For information about the AWS IAM resources for ROSA clusters that use STS, see xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources for ROSA clusters that use STS]. +* For information about the AWS IAM resources for ROSA clusters that use STS, see xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources]. * For information on cluster errors that are due to missing IAM roles, see xref:../support/troubleshooting/rosa-troubleshooting-deployments.adoc#rosa-troubleshooting-cluster-deletion_rosa-troubleshooting-cluster-deployments[Repairing a cluster that cannot be deleted]. 
diff --git a/rosa_planning/rosa-cloud-expert-prereq-checklist.adoc b/rosa_planning/rosa-cloud-expert-prereq-checklist.adoc
index 69cc09436b42..0d182f7d5be3 100644
--- a/rosa_planning/rosa-cloud-expert-prereq-checklist.adoc
+++ b/rosa_planning/rosa-cloud-expert-prereq-checklist.adoc
@@ -2,7 +2,12 @@ include::_attributes/attributes-openshift-dedicated.adoc[]
:context: rosa-cloud-expert-prereq-checklist
[id="rosa-cloud-expert-prereq-checklist"]
+ifndef::openshift-rosa-hcp[]
= Prerequisites checklist for deploying ROSA using STS
+endif::[]
+ifdef::openshift-rosa-hcp[]
+= Prerequisites checklist for deploying ROSA with HCP
+endif::openshift-rosa-hcp[]

toc::[]

@@ -18,147 +23,145 @@ toc::[]
// - Diana Sari
//---

-This is a checklist of prerequisites needed to create a {product-title} (ROSA) classic cluster with link:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html[STS].
+This is a high-level checklist of prerequisites needed to create a
+ifdef::openshift-rosa[]
+{rosa-classic-first} cluster with link:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html[STS].
+endif::openshift-rosa[]
+ifdef::openshift-rosa-hcp[]
+{hcp-title-first} cluster.
+endif::openshift-rosa-hcp[]

-[NOTE]
-====
-This is a high level checklist and your implementation can vary.
-====
+//TODO OSDOCS-11789: Consider adding the following to a subsection about the initiating/control machine, along with CLI sections?
+The machine that you run the installation process from must have access to the following:

-Before running the installation process, verify that you deploy this from a machine that has access to:
-
-* The API services for the cloud to which you provision.
-* Access to `api.openshift.com`, `oidc.op1.openshiftapps.com`, and `sso.redhat.com`.
-* The hosts on the network that you provision.
-* The internet to obtain installation media.
+* Amazon Web Services API and authentication service endpoints
+* Red Hat OpenShift API and authentication service endpoints (`api.openshift.com` and `sso.redhat.com`)
+* Internet connectivity to obtain installation artifacts

+//TODO OSDOCS-11789: This needs to be accessible from parts of the cluster, but not the deploying machine - omit entirely, or leave in place for Classic?
+ifdef::openshift-rosa[]
[IMPORTANT]
====
Starting with version 1.2.7 of the ROSA CLI, all OIDC provider endpoint URLs on new clusters use Amazon CloudFront and the link:http://oidc.op1.openshiftapps.com/[oidc.op1.openshiftapps.com] domain. This change improves access speed, reduces latency, and improves resiliency for new clusters created with the ROSA CLI 1.2.7 or later. There are no supported migration paths for existing OIDC provider configurations.
====
+endif::openshift-rosa[]

-== Accounts and CLIs Prerequisites
+//TODO OSDOCS-11789: Consider combining with other account and permission needs to avoid duplication of headers? This does need to happen to download the CLIs though.
+== Accounts and permissions

-Accounts and CLIs you must install to deploy the cluster.
+Ensure that you have the following accounts, credentials, and permissions.

=== AWS account

-* Gather the following details:
-** AWS IAM User
-** AWS Access Key ID
-** AWS Secret Access Key
-* Ensure that you have the right permissions as detailed link:https://docs.aws.amazon.com/ROSA/latest/userguide/security-iam-awsmanpol.html[AWS managed IAM policies for ROSA] and xref:../rosa_architecture/rosa-sts-about-iam-resources.html[About IAM resources for ROSA clusters that use STS].
-* See xref:../rosa_planning/rosa-sts-aws-prereqs.html#rosa-account_rosa-sts-aws-prereqs[Account] for more details.
-* See xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-associating-your-aws-account_rosa-sts-creating-a-cluster-quickly[Associating your AWS account with your Red Hat organization] for intructions on associating your account.
+* Create an AWS account if you do not already have one.
+* Gather the credentials required to log in to your AWS account.
+* Ensure that your AWS account has sufficient permissions to use the ROSA CLI: xref:../cli_reference/rosa_cli/rosa-cli-permission-examples.adoc#rosa-cli-permission-examples[Least privilege permissions for common ROSA CLI commands]
+//OSDOCS-11789: Moving these here because it is a permission / account level enablement
+* Enable ROSA for your AWS account on the link:https://console.aws.amazon.com/rosa/[AWS console].
+** If your account is the management account for your organization (used for AWS billing purposes), you must have `aws-marketplace:Subscribe` permissions available on your account. See _Service control policy (SCP) prerequisites_ for more information, or see the AWS documentation for troubleshooting: link:https://docs.aws.amazon.com/rosa/latest/userguide/security-iam-troubleshoot.html#error-aws-orgs-scp-denies-permissions[AWS Organizations service control policy denies required AWS Marketplace permissions].
+
+=== Red{nbsp}Hat account
+
+//TODO OSDOCS-11789: Do we need to mention RH Organization here also?
+* Create a Red Hat account for the link:https://console.redhat.com/[{hybrid-console}] if you do not already have one.
+* Gather the credentials required to log in to your Red Hat account.
+
+== CLI requirements
+
+You need to download and install several command-line interface (CLI) tools to be able to deploy a cluster.

=== AWS CLI (`aws`)

-* Install from link:https://aws.amazon.com/cli/[AWS Command Line Interface] if you have not already.
-* Configure the CLI:
-+
-. Enter `aws configure` in the terminal:
-+
-[source,terminal]
-----
-$ aws configure
-----
-+
-. Enter the AWS Access Key ID and press *enter*.
-. Enter the AWS Secret Access Key and press *enter*.
-. Enter the default region you want to deploy into.
-. Enter the output format you want, “table” or “json”.
-. Verify the output by running:
+. Install the link:https://aws.amazon.com/cli/[AWS Command Line Interface].
+. Log in to your AWS account using the AWS CLI: link:https://docs.aws.amazon.com/signin/latest/userguide/command-line-sign-in.html[Sign in through the AWS CLI].
+. Verify your account identity:
+
[source,terminal]
----
$ aws sts get-caller-identity
----
-+
-. Ensure that the service role for ELB already exists by running:
+. Check whether the service role for ELB (Elastic Load Balancing) exists:
+
[source,terminal]
----
$ aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing"
----
+
-.. If it does not exist, run:
+If the role does not exist, create it by running the following command:
+
[source,terminal]
----
$ aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
----

-=== Red{nbsp}Hat account
-
-* Create a link:https://console.redhat.com/[{hybrid-console}] account if you have not already.
-
=== ROSA CLI (`rosa`)

-. Enable ROSA from your AWS account on the link:https://console.aws.amazon.com/rosa/[AWS console] if you have not already.
-. 
Install the CLI from xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-installing-rosa.html[Installing the Red{nbsp}Hat OpenShift Service on AWS (ROSA) CLI, rosa] or from the OpenShift console link:https://console.redhat.com/openshift/downloads#tool-rosa[AWS console].
-. Enter `rosa login` in a terminal, and this will prompt you to go to the link:https://console.redhat.com/openshift/token/rosa[token page] through the console:
+. Install the ROSA CLI from the link:https://console.redhat.com/openshift/downloads#tool-rosa[web console].
+ifdef::openshift-rosa[]
+See xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-installing-rosa.adoc[Installing the Red{nbsp}Hat OpenShift Service on AWS (ROSA) CLI, rosa] for detailed instructions.
+endif::openshift-rosa[]
+. Log in to your Red Hat account by running `rosa login` and following the instructions in the command output:
+
[source,terminal]
----
$ rosa login
+To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa
+? Copy the token and paste it here:
----
+
-. Log in with your Red{nbsp}Hat account credentials.
-. Click the *Load token* button.
-. Copy the token and paste it back into the CLI prompt and press *enter*.
-+
-* Alternatively, you can copy the full `$ rosa login --token=abc...` command and paste that in the terminal:
+Alternatively, you can copy the full `$ rosa login --token=abc...` command and paste that in the terminal:
+
[source,terminal]
----
$ rosa login --token=
----
-+
-. Verify your credentials by running:
+. Confirm you are logged in using the correct account and credentials:
+
[source,terminal]
----
$ rosa whoami
----
-+
-. Ensure you have sufficient quota by running:
-+
-[source,terminal]
-----
-$ rosa verify quota
-----
-+
-* See xref:../rosa_planning/rosa-sts-aws-prereqs.html#rosa-aws-policy-provisioned_rosa-sts-aws-prereqs[Provisioned AWS Infrastructure] for more details on AWS services provisioned for ROSA cluster.
-* See xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[Required AWS service quotas] for more details on AWS services quota.
-
=== OpenShift CLI (`oc`)

-. Install from xref:../cli_reference/openshift_cli/getting-started-cli.adoc#cli-getting-started[Getting started with the OpenShift CLI] or from the OpenShift console link:https://console.redhat.com/openshift/downloads#tool-oc[Command-line interface (CLI) tools].
-. Verify that the OpenShift CLI has been installed correctly by running:
+The OpenShift CLI (`oc`) is not required to deploy a {product-title} cluster, but is a useful tool for interacting with your cluster after it is deployed.
+
+. Download and install `oc` from the {cluster-manager} link:https://console.redhat.com/openshift/downloads#tool-oc[Command-line interface (CLI) tools] page, or follow the instructions in xref:../cli_reference/openshift_cli/getting-started-cli.adoc#cli-getting-started[Getting started with the OpenShift CLI].
+. Verify that the OpenShift CLI has been installed correctly by running the following command:
+
[source,terminal]
----
$ rosa verify openshift-client
----

-Once you have the above prerequisites installed and enabled, proceed to the next steps.
-
-//This content is pulled from rosa-sts-associating-your-aws-account.adoc
-include::modules/rosa-sts-associating-your-aws-account.adoc[leveloffset=+2]
+//TODO OSDOCS-11789: Moved quota check to the point where it is actually useful - yes, this is checked during install, but it's also worth checking ahead of time so that any issues are known during preparation rather than deployment.
+== AWS infrastructure prerequisites
+* Optionally, ensure that your AWS account has sufficient quota available to deploy a cluster.
++
+[source,terminal]
+----
+$ rosa verify quota
+----
++
+This command only checks the total quota allocated to your account; it does not reflect how much of that quota has already been consumed. Running this command is optional because your quota is verified during cluster deployment. However, Red Hat recommends running this command to confirm your quota ahead of time so that deployment is not interrupted by issues with quota availability.
+ifdef::openshift-rosa[]
+* For more information about resources provisioned during ROSA cluster deployment, see xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-aws-policy-provisioned_rosa-sts-aws-prereqs[Provisioned AWS Infrastructure].
+* For more information about the required AWS service quotas, see xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[Required AWS service quotas].
+endif::openshift-rosa[]
+ifdef::openshift-rosa-hcp[]
+* For more information about resources provisioned during ROSA cluster deployment, see xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-aws-policy-provisioned_rosa-hcp-prereqs[Provisioned AWS Infrastructure].
+* For more information about the required AWS service quotas, see xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[Required AWS service quotas].
+endif::openshift-rosa-hcp[]

-== SCP Prerequisites
+== Service Control Policy (SCP) prerequisites

ROSA clusters are hosted in an AWS account within an AWS organizational unit. A link:https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html[service control policy (SCP)] is created and applied to the AWS organizational unit that manages what services the AWS sub-accounts are permitted to access.

-* Ensure that your organization's SCPs are not more restrictive than the roles and policies required by the cluster.
-* Ensure that your SCP is configured to allow the required `aws-marketplace:Subscribe` permission when you choose *Enable ROSA* from the console, and see link:https://docs.aws.amazon.com/ROSA/latest/userguide/troubleshoot-rosa-enablement.html#error-aws-orgs-scp-denies-permissions[AWS Organizations service control policy (SCP) is denying required AWS Marketplace permissions] for more details.
-* When you create a ROSA classic cluster, an associated AWS OpenID Connect (OIDC) identity provider is created.
-** This OIDC provider configuration relies on a public key that is located in the `us-east-1` AWS region.
-** Customers with AWS SCPs must allow the use of the `us-east-1` AWS region, even if these clusters are deployed in a different region.
+* Ensure that your organization's SCPs are not more restrictive than the roles and policies required by the cluster. For more information, see the xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-minimum-scp_rosa-sts-about-iam-resources[Minimum set of effective permissions for SCPs].
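++
+For example, if you have access to your organization's management account, you can review the SCPs that apply to the AWS account that will host the cluster by using the AWS CLI. The account ID used here is a placeholder for your own 12-digit AWS account ID:
++
+[source,terminal]
+----
+$ aws organizations list-policies-for-target --target-id 123456789012 --filter SERVICE_CONTROL_POLICY
+----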
+* When you create a ROSA cluster, an associated AWS OpenID Connect (OIDC) identity provider is created.

-== Networking Prerequisites
+== Networking prerequisites

Prerequisites needed from a networking standpoint.

@@ -166,31 +169,18 @@ include::modules/mos-network-prereqs-min-bandwidth.adoc[leveloffset=+2]

=== Firewall

-* Configure your firewall to allow access to the domains and ports listed in xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#osd-aws-privatelink-firewall-prerequisites_rosa-sts-aws-prereqs[AWS firewall prerequisites].
+//TODO OSDOCS-11789: Are these things that your cluster needs access to, or your deploying machine needs access to?
+* Configure your firewall to allow access to the domains and ports listed in
+ifdef::openshift-rosa[]
+xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#osd-aws-privatelink-firewall-prerequisites_rosa-sts-aws-prereqs[AWS firewall prerequisites].
+endif::openshift-rosa[]
+ifdef::openshift-rosa-hcp[]
+xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-hcp-firewall-prerequisites_rosa-hcp-prereqs[AWS firewall prerequisites].
+endif::openshift-rosa-hcp[]

-=== Additional custom security groups
-
-When you create a cluster using an existing non-managed VPC, you can add additional custom security groups during cluster creation. Complete these prerequisites before you create the cluster:
-
-* Create the custom security groups in AWS before you create the cluster.
-* Associate the custom security groups with the VPC that you are using to create the cluster. Do not associate the custom security groups with any other VPC.
-* You may need to request additional AWS quota for `Security groups per network interface`.
-
-For more details see the detailed requirements for xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-aws-prereqs.adoc#rosa-security-groups_prerequisites[Security groups].
-
-=== Custom DNS
-
-* If you want to use custom DNS, then the ROSA installer must be able to use VPC DNS with default DHCP options so it can resolve hosts locally.
-** To do so, run `aws ec2 describe-dhcp-options` and see if the VPC is using VPC Resolver:
-+
-[source,terminal]
-----
-$ aws ec2 describe-dhcp-options
-----
-+
-* Otherwise, the upstream DNS will need to forward the cluster scope to this VPC so the cluster can resolve internal IPs and services.
-
-== PrivateLink Prerequisites
+//Moving up prereqs that are actually required for deployment
+ifdef::openshift-rosa[]
+== VPC requirements for PrivateLink clusters

If you choose to deploy a PrivateLink cluster, then be sure to deploy the cluster in the pre-existing BYO VPC:

@@ -216,3 +206,49 @@ xref:../networking/configuring-cluster-wide-proxy.adoc#configuring-cluster-wide-
====
You can install a non-PrivateLink ROSA cluster in a pre-existing BYO VPC.
====
+endif::openshift-rosa[]
+ifdef::openshift-rosa-hcp[]
+=== Create a VPC before cluster deployment
+
+{hcp-title} clusters must be deployed into an existing Virtual Private Cloud (VPC).
+
+include::snippets/rosa-existing-vpc-requirements.adoc[leveloffset=+0]
+
+//TODO OSDOCS-11789: Does the following section need to be moved into this document only?
+// Is it reasonable to include it in preparation and in deployment process?
+// https://docs.openshift.com/rosa/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.html#rosa-hcp-prereqs
+
+//TODO OSDOCS-11789: We also link out into https://docs.openshift.com/container-platform/4.17/installing/installing_aws/ipi/installing-aws-vpc.html#installation-custom-aws-vpc_installing-aws-vpc for more information about what your VPC should be able to handle, but this needs review to omit OCP-only content if we're going to use it.
+endif::openshift-rosa-hcp[]
+
+=== Additional custom security groups
+
+During cluster creation, you can add additional custom security groups to a cluster that has an existing non-managed VPC. To do so, complete these prerequisites before you create the cluster:
+
+* Create the custom security groups in AWS before you create the cluster.
+* Associate the custom security groups with the VPC that you are using to create the cluster. Do not associate the custom security groups with any other VPC.
+* You may need to request additional AWS quota for `Security groups per network interface`.
+
+ifdef::openshift-rosa[]
+For more details, see the detailed requirements for xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-aws-prereqs.adoc#rosa-security-groups_prerequisites[Security groups].
+endif::openshift-rosa[]
+ifdef::openshift-rosa-hcp[]
+For more details, see the detailed requirements for xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-security-groups_rosa-hcp-prereqs[Security groups].
+endif::openshift-rosa-hcp[]
+
+=== Custom DNS and domains
+
+You can configure a custom domain name server and custom domain name for your cluster. To do so, complete the following prerequisites before you create the cluster:
+
+//TODO OSDOCS-11789: Needs verification from mmcneill
+* By default, ROSA clusters require you to set the `domain name servers` option to `AmazonProvidedDNS` to ensure successful cluster creation and operation.
+* To use a custom DNS server and domain name for your cluster, the ROSA installer must be able to use VPC DNS with default DHCP options so that it can resolve internal IPs and services. This means that you must create a custom DHCP option set to forward DNS lookups to your DNS server, and associate this option set with your VPC before you create the cluster.
+ifdef::openshift-rosa[]
+For more information, see xref:../cloud_experts_tutorials/cloud-experts-custom-dns-resolver.adoc#cloud-experts-custom-dns-resolver[Deploying ROSA with a custom DNS resolver].
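+
+As a minimal sketch of that preparation, the following AWS CLI commands create a DHCP option set that points at a custom DNS server and associate it with a VPC. The domain name, DNS server address, DHCP option set ID, and VPC ID are placeholders for your own values:
+
+[source,terminal]
+----
+$ aws ec2 create-dhcp-options --dhcp-configurations "Key=domain-name,Values=example.com" "Key=domain-name-servers,Values=10.0.0.10"
+$ aws ec2 associate-dhcp-options --dhcp-options-id dopt-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0
+----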
+endif::openshift-rosa[] +* Confirm that your VPC is using VPC Resolver by running the following command: ++ +[source,terminal] +---- +$ aws ec2 describe-dhcp-options +---- \ No newline at end of file diff --git a/rosa_planning/rosa-hcp-iam-resources.adoc b/rosa_planning/rosa-hcp-iam-resources.adoc new file mode 120000 index 000000000000..48658d4aa84e --- /dev/null +++ b/rosa_planning/rosa-hcp-iam-resources.adoc @@ -0,0 +1 @@ +../rosa_architecture/rosa-sts-about-iam-resources.adoc \ No newline at end of file diff --git a/rosa_planning/rosa-hcp-prepare-iam-roles-resources.adoc b/rosa_planning/rosa-hcp-prepare-iam-roles-resources.adoc new file mode 100644 index 000000000000..4bfab7c057d5 --- /dev/null +++ b/rosa_planning/rosa-hcp-prepare-iam-roles-resources.adoc @@ -0,0 +1,53 @@ +:_mod-docs-content-type: ASSEMBLY +[id="rosa-hcp-prepare-iam-roles-resources"] += Required IAM roles and resources +include::_attributes/attributes-openshift-dedicated.adoc[] +:context: prepare-role-resources + +toc::[] + +You must create several role resources on your AWS account in order to create and manage a {product-title} (ROSA) cluster. + +include::modules/rosa-prereq-roles-overview.adoc[leveloffset=+1] + +[id="rosa-prepare-am-resources-roles-account"] +== Roles required to create and manage clusters + +Several account-wide roles (`account-roles` in the ROSA CLI) are required to create or manage ROSA clusters. These roles must be created using the ROSA CLI (`rosa`), regardless of whether you typically use {cluster-manager} or the ROSA CLI to create and manage your clusters. These roles only need to be created once, and do not need to be created for every cluster you install. + +//account roles +include::modules/rosa-hcp-creating-account-wide-sts-roles-and-policies.adoc[leveloffset=+2] + +[id="rosa-prepare-am-resources-roles-operator"] +== Roles required to manage Operator features +//operator roles +//created per-cluster or per-OIDC provider if that is shared between clusters +Cluster-specific Operator roles (`operator-roles` in the ROSA CLI) obtain the temporary permissions required to perform cluster operations that are provided by Operators, such as managing back-end storage, ingress, and registry. These roles are required for every cluster, as several Operators are used to provide cluster features by default. + +[id="rosa-prepare-iam-resources-oidc"] +=== Required OIDC authentication resources + +{product-title} clusters use OIDC and the AWS Security Token Service (STS) to authenticate Operator access to AWS resources they require to perform their functions. Each production cluster requires its own OIDC configuration. + +include::modules/rosa-sts-byo-oidc.adoc[leveloffset=+3] + +include::modules/rosa-sts-operator-roles.adoc[leveloffset=+2] + +[id="rosa-prepare-iam-resources-roles-ocm"] +== Roles required to use {cluster-manager} + +The roles in this section are only required when you want to use {cluster-manager} to create and manage clusters. If you intend to create and manage clusters using only the ROSA CLI (`rosa`) and the OpenShift CLI (`oc`), these roles are not required. 
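+
+//Illustrative sketch only; the modules included below contain the full procedures.
+As a brief sketch of what the procedures that follow cover, you can create and link both roles with the ROSA CLI. The `--mode auto` option shown here is only one of the available creation modes:
+
+[source,terminal]
+----
+$ rosa create ocm-role --mode auto
+$ rosa create user-role --mode auto
+----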
+ +include::modules/rosa-sts-ocm-role-creation.adoc[leveloffset=+2] + +[role="_additional-resources"] +[id="additional-resources_ocm-role-creation_{context}"] +.Additional resources +* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies-creation-methods_rosa-sts-about-iam-resources[Methods of account-wide role creation] + +include::modules/rosa-sts-user-role-creation.adoc[leveloffset=+2] + +[role="_additional-resources"] +[id="additional-resources_user-role-creation_{context}"] +.Additional resources +* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies-creation-methods_rosa-sts-about-iam-resources[Methods of account-wide role creation] \ No newline at end of file diff --git a/rosa_planning/rosa-hcp-prereqs.adoc b/rosa_planning/rosa-hcp-prereqs.adoc deleted file mode 100644 index 2f865dd32728..000000000000 --- a/rosa_planning/rosa-hcp-prereqs.adoc +++ /dev/null @@ -1,86 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -include::_attributes/attributes-openshift-dedicated.adoc[] -:context: rosa-hcp-aws-prereqs -[id="rosa-hcp-prereqs"] -= AWS prerequisites for {hcp-title} - -toc::[] - -{hcp-title-first} provides a model that Red Hat hosts the control plane and uses your AWS account to deploy clusters. - -Ensure that the following AWS prerequisites are met before installing ROSA with STS. - -[IMPORTANT] -==== -When you create a ROSA cluster using AWS STS, an associated AWS OpenID Connect (OIDC) identity provider is created as well. This OIDC provider configuration relies on a public key that is located in the `us-east-1` AWS region. Customers with AWS SCPs must allow the use of the `us-east-1` AWS region, even if these clusters are deployed in a different region. -==== - -[id="rosa-sts-customer-requirements_{context}"] -== Customer requirements when using STS for deployment - -The following prerequisites must be complete before you deploy a {product-title} (ROSA) cluster that uses the AWS Security Token Service (STS). 
- -include::modules/rosa-sts-aws-requirements-account.adoc[leveloffset=+2] -[role="_additional-resources"] -[id="additional-resources_aws-account-requirements_{context}"] -.Additional resources -* xref:../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[Limits and scalability] -* xref:../support/troubleshooting/rosa-troubleshooting-deployments.adoc#rosa-troubleshooting-elb-service-role_rosa-troubleshooting-cluster-deployments[Creating the Elastic Load Balancing (ELB) service-linked role] - -include::modules/rosa-sts-aws-requirements-access-req.adoc[leveloffset=+2] - -[role="_additional-resources"] -[id="additional-resources_aws-access-requirements_{context}"] -.Additional resources -* See xref:../applications/deployments/rosa-config-custom-domains-applications.adoc#rosa-applications-config-custom-domains[Configuring custom domains for applications] - -include::modules/rosa-sts-aws-requirements-support-req.adoc[leveloffset=+2] -include::modules/rosa-sts-aws-requirements-security-req.adoc[leveloffset=+2] - -[role="_additional-resources"] -[id="additional-resources_aws-security-requirements_{context}"] -.Additional resources -* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#osd-aws-privatelink-firewall-prerequisites_rosa-sts-aws-prereqs[AWS firewall prerequisites] - -[role="_additional-resources"] -.Additional resources - -* xref:../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] - -include::modules/rosa-sts-aws-requirements-ocm.adoc[leveloffset=+2] -include::modules/rosa-sts-aws-requirements-association-concept.adoc[leveloffset=+3] -include::modules/rosa-sts-aws-requirements-creating-association.adoc[leveloffset=+3] - -[discrete] -[role="_additional-resources"] -[id="additional-resources_creating-association_{context}"] -== Additional resources -* See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies_rosa-sts-about-iam-resources[Account-wide IAM role and policy reference] for a list of IAM roles needed for cluster creation. - -include::modules/rosa-sts-aws-requirements-creating-multi-association.adoc[leveloffset=+3] - - -include::modules/rosa-requirements-deploying-in-opt-in-regions.adoc[leveloffset=+1] -include::modules/rosa-setting-the-aws-security-token-version.adoc[leveloffset=+2] - -[id="rosa-sts-policy-iam_{context}"] -== Red Hat managed IAM references for AWS - -With the STS deployment model, Red Hat is no longer responsible for creating and managing Amazon Web Services (AWS) IAM policies, IAM users, or IAM roles. For information on creating these roles and policies, see the following sections on IAM roles. - -* To use the `ocm` CLI, you must have an `ocm-role` and `user-role` resource. See xref:../rosa_planning/rosa-sts-ocm-role.adoc#rosa-sts-ocm-role[OpenShift Cluster Manager IAM role resources]. -* If you have a single cluster, see xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies_rosa-sts-about-iam-resources[Account-wide IAM role and policy reference]. -* For every cluster, you must have the necessary operator roles. See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-operator-roles_rosa-sts-about-iam-resources[Cluster-specific Operator IAM role reference]. 
- -include::modules/rosa-aws-provisioned.adoc[leveloffset=+1] -include::modules/rosa-hcp-firewall-prerequisites.adoc[leveloffset=+1] - -== Next steps -* xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[Review the required AWS service quotas] - -[role="_additional-resources"] -[id="additional-resources_aws-prerequisites_{context}"] -== Additional resources -* xref:../rosa_architecture/rosa_policy_service_definition/rosa-policy-process-security.adoc#rosa-policy-sre-access_rosa-policy-process-security[SRE access to all Red Hat OpenShift Service on AWS clusters] -* xref:../applications/deployments/rosa-config-custom-domains-applications.adoc#rosa-applications-config-custom-domains[Configuring custom domains for applications] -* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-instance-types_rosa-service-definition[Instance types] diff --git a/rosa_planning/rosa-planning-environment.adoc b/rosa_planning/rosa-planning-environment.adoc index 10416b9abea9..07153dda457e 100644 --- a/rosa_planning/rosa-planning-environment.adoc +++ b/rosa_planning/rosa-planning-environment.adoc @@ -2,10 +2,11 @@ include::_attributes/attributes-openshift-dedicated.adoc[] [id="rosa-planning-environment"] -= Planning your environment += Planning resource usage in your cluster :context: rosa-planning-environment toc::[] include::modules/rosa-planning-environment-cluster-max.adoc[leveloffset=+1] include::modules/rosa-planning-environment-application-reqs.adoc[leveloffset=+1] + \ No newline at end of file diff --git a/rosa_planning/rosa-sts-aws-prereqs.adoc b/rosa_planning/rosa-sts-aws-prereqs.adoc index 7481540a9fd5..f93e6bbb96f2 100644 --- a/rosa_planning/rosa-sts-aws-prereqs.adoc +++ b/rosa_planning/rosa-sts-aws-prereqs.adoc @@ -1,60 +1,85 @@ :_mod-docs-content-type: ASSEMBLY include::_attributes/attributes-openshift-dedicated.adoc[] +//title and ID conditions so this can be shared between Classic and HCP docs while it remains accurate for both +ifndef::openshift-rosa-hcp[] :context: rosa-sts-aws-prereqs [id="rosa-sts-aws-prereqs"] = Detailed requirements for deploying ROSA using STS +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +:context: rosa-hcp-prereqs +[id="rosa-hcp-prereqs"] += Detailed requirements for deploying {hcp-title} +endif::openshift-rosa-hcp[] toc::[] {product-title} (ROSA) provides a model that allows Red{nbsp}Hat to deploy clusters into a customer’s existing Amazon Web Service (AWS) account. -include::snippets/rosa-sts.adoc[] +ifndef::openshift-rosa-hcp[] +include::snippets/rosa-sts.adoc[leveloffset=+0] +endif::openshift-rosa-hcp[] -Ensure that the following AWS prerequisites are met before installing ROSA with STS. - -[IMPORTANT] -==== -When you create a ROSA cluster using AWS STS, an associated AWS OpenID Connect (OIDC) identity provider is created as well. This OIDC provider configuration relies on a public key that is located in the `us-east-1` AWS region. Customers with AWS SCPs must allow the use of the `us-east-1` AWS region, even if these clusters are deployed in a different region. -==== +Ensure that the following prerequisites are met before installing your cluster. +ifndef::openshift-rosa-hcp[] [id="rosa-sts-customer-requirements_{context}"] == Customer requirements when using STS for deployment The following prerequisites must be complete before you deploy a {product-title} (ROSA) cluster that uses the AWS Security Token Service (STS). 
+endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +[id="rosa-hcp-customer-requirements_{context}"] +== Customer requirements for all {hcp-title} clusters + +The following prerequisites must be complete before you deploy a {hcp-title} cluster. +endif::openshift-rosa-hcp[] include::modules/rosa-sts-aws-requirements-account.adoc[leveloffset=+2] + +//Adding conditions around these in case the Additional resources don't get ported to HCP or have different file names / locations; keeping all included for now +ifndef::openshift-rosa-hcp[] [role="_additional-resources"] [id="additional-resources_aws-account-requirements_{context}"] .Additional resources -* xref:../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[Limits and scalability] * xref:../support/troubleshooting/rosa-troubleshooting-deployments.adoc#rosa-troubleshooting-elb-service-role_rosa-troubleshooting-cluster-deployments[Creating the Elastic Load Balancing (ELB) service-linked role] +endif::openshift-rosa-hcp[] -include::modules/rosa-sts-aws-requirements-access-req.adoc[leveloffset=+2] - -[role="_additional-resources"] -[id="additional-resources_aws-access-requirements_{context}"] -.Additional resources -* See xref:../applications/deployments/rosa-config-custom-domains-applications.adoc#rosa-applications-config-custom-domains[Configuring custom domains for applications] - +//TODO OSDOCS-11789: Nothing in the following module is actually a requirement, it's purely informative/recommended and needs to be re-validated by SRE/Support include::modules/rosa-sts-aws-requirements-support-req.adoc[leveloffset=+2] + +//TODO OSDOCS-11789: Need to have this re-validated by SRE/Support include::modules/rosa-sts-aws-requirements-security-req.adoc[leveloffset=+2] +//Adding conditions around these in case the Additional resources don't get ported to HCP or have different file names / locations; keeping all included for now [role="_additional-resources"] [id="additional-resources_aws-security-requirements_{context}"] .Additional resources +ifndef::openshift-rosa-hcp[] * xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#osd-aws-privatelink-firewall-prerequisites_rosa-sts-aws-prereqs[AWS firewall prerequisites] +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-hcp-firewall-prerequisites_rosa-hcp-prereqs[AWS firewall prerequisites] +endif::openshift-rosa-hcp[] + +[id="rosa-ocm-requirements_{context}"] +== Requirements for using {cluster-manager} + +The following configuration details are required only if you use {cluster-manager-url} to manage your clusters. If you use the CLI tools exclusively, then you can disregard these requirements. -include::modules/rosa-sts-aws-requirements-ocm.adoc[leveloffset=+2] -include::modules/rosa-sts-aws-requirements-association-concept.adoc[leveloffset=+3] -include::modules/rosa-sts-aws-requirements-creating-association.adoc[leveloffset=+3] +//TODO OSDOCS-11789: when are ocm-role and user-role actually created? Pretty sure this happens as part of the cluster install process, so doesn't need to be done ahead of time?? 
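+
+As a quick check before you work through the following sections, you can confirm which AWS and Red{nbsp}Hat accounts your current ROSA CLI session is using by running the following command:
+
+[source,terminal]
+----
+$ rosa whoami
+----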
+include::modules/rosa-sts-aws-requirements-association-concept.adoc[leveloffset=+2] +include::modules/rosa-sts-aws-requirements-creating-association.adoc[leveloffset=+2] +ifdef::openshift-rosa,openshift-rosa-hcp[] [discrete] [role="_additional-resources"] [id="additional-resources_creating-association_{context}"] == Additional resources * See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies_rosa-sts-about-iam-resources[Account-wide IAM role and policy reference] for a list of IAM roles needed for cluster creation. +endif::openshift-rosa,openshift-rosa-hcp[] -include::modules/rosa-sts-aws-requirements-creating-multi-association.adoc[leveloffset=+3] +include::modules/rosa-sts-aws-requirements-creating-multi-association.adoc[leveloffset=+2] include::modules/rosa-requirements-deploying-in-opt-in-regions.adoc[leveloffset=+1] @@ -63,11 +88,20 @@ include::modules/rosa-setting-the-aws-security-token-version.adoc[leveloffset=+2 [id="rosa-sts-policy-iam_{context}"] == Red{nbsp}Hat managed IAM references for AWS -With the STS deployment model, Red{nbsp}Hat is no longer responsible for creating and managing Amazon Web Services (AWS) IAM policies, IAM users, or IAM roles. For information on creating these roles and policies, see the following sections on IAM roles. - -* To use the `ocm` CLI, you must have an `ocm-role` and `user-role` resource. See xref:../rosa_planning/rosa-sts-ocm-role.adoc#rosa-sts-ocm-role[OpenShift Cluster Manager IAM role resources]. +ifndef::openshift-rosa-hcp[] +When you use STS as your cluster credential method, +endif::openshift-rosa-hcp[] +Red{nbsp}Hat is not responsible for creating and managing Amazon Web Services (AWS) IAM policies, IAM users, or IAM roles. For information on creating these roles and policies, see the following sections on IAM roles. + +* To use the `ocm` CLI, you must have an `ocm-role` and `user-role` resource. +ifndef::openshift-rosa-hcp[] +See xref:../rosa_planning/rosa-sts-ocm-role.adoc#rosa-sts-ocm-role[OpenShift Cluster Manager IAM role resources]. +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +See xref:../rosa_planning/rosa-hcp-prepare-iam-roles-resources.adoc#rosa-prepare-iam-resources-roles-ocm[Required IAM roles and resources]. +endif::openshift-rosa-hcp[] * If you have a single cluster, see xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies_rosa-sts-about-iam-resources[Account-wide IAM role and policy reference]. -* For every cluster, you must have the necessary operator roles. See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-operator-roles_rosa-sts-about-iam-resources[Cluster-specific Operator IAM role reference]. +* For each cluster, you must have the necessary operator roles. See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-operator-roles_rosa-sts-about-iam-resources[Cluster-specific Operator IAM role reference]. include::modules/rosa-aws-provisioned.adoc[leveloffset=+1] @@ -77,25 +111,39 @@ include::modules/rosa-aws-provisioned.adoc[leveloffset=+1] include::modules/mos-network-prereqs-min-bandwidth.adoc[leveloffset=+2] // Keeping existing ID to prevent link breakage +ifdef::openshift-rosa[] [id="osd-aws-privatelink-firewall-prerequisites_rosa-sts-aws-prereqs"] === AWS firewall prerequisites If you are using a firewall to control egress traffic from your {product-title}, you must configure your firewall to grant access to the certain domain and port combinations below. 
{product-title} requires this access to provide a fully managed OpenShift service. include::modules/osd-aws-privatelink-firewall-prerequisites.adoc[leveloffset=+3] -include::modules/rosa-hcp-firewall-prerequisites.adoc[leveloffset=+3] +endif::openshift-rosa[] + +ifdef::openshift-rosa-hcp[] +include::modules/rosa-hcp-firewall-prerequisites.adoc[leveloffset=+2] +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa[] [role="_additional-resources"] .Additional resources - -* xref:../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] +* xref:../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] +endif::openshift-rosa[] == Next steps -* xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[Review the required AWS service quotas] +* xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-required-aws-service-quotas_rosa-sts-required-aws-service-quotas[Review the required AWS service quotas] [role="_additional-resources"] [id="additional-resources_aws-prerequisites_{context}"] == Additional resources +ifdef::openshift-rosa[] * xref:../rosa_architecture/rosa_policy_service_definition/rosa-policy-process-security.adoc#rosa-policy-sre-access_rosa-policy-process-security[SRE access to all Red{nbsp}Hat OpenShift Service on AWS clusters] * xref:../applications/deployments/rosa-config-custom-domains-applications.adoc#rosa-applications-config-custom-domains[Configuring custom domains for applications] * xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-instance-types_rosa-service-definition[Instance types] +endif::openshift-rosa[] +ifdef::openshift-rosa-hcp[] +* xref:../rosa_architecture/rosa_policy_service_definition/rosa-sre-access.adoc#rosa-sre-access[SRE and service account access] +//Omitted until Applications has been ported for HCP +//* xref ../applications/deployments/rosa-config-custom-domains-applications.adoc#rosa-applications-config-custom-domains[Configuring custom domains for applications] +* xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-instance-types.adoc#rosa-hcp-instance-types[Instance types] +endif::openshift-rosa-hcp[] \ No newline at end of file diff --git a/rosa_planning/rosa-sts-ocm-role.adoc b/rosa_planning/rosa-sts-ocm-role.adoc index 2f7326cfe3ce..46fb0e012107 100644 --- a/rosa_planning/rosa-sts-ocm-role.adoc +++ b/rosa_planning/rosa-sts-ocm-role.adoc @@ -6,49 +6,37 @@ include::_attributes/attributes-openshift-dedicated.adoc[] toc::[] -{product-title} (ROSA) web UI requires that you have specific permissions on your AWS account that create a trust relationship to provide the end-user experience at {cluster-manager-url} and for the `rosa` command line interface (CLI). +You must create several role resources on your AWS account in order to create and manage a {product-title} (ROSA) cluster. -This trust relationship is achieved through the creation and association of the `ocm-role` AWS IAM role. This role has a trust policy with the AWS installer that links your Red{nbsp}Hat account to your AWS account. In addition, you also need a `user-role` AWS IAM role for each web UI user, which serves to identify these users. This `user-role` AWS IAM role has no permissions. 
+include::modules/rosa-prereq-roles-overview.adoc[leveloffset=+1] -The AWS IAM roles required to use {cluster-manager} are: +.Additional resources +ifndef::openshift-rosa-hcp[] +* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies[Account-wide IAM role and policy reference] +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa[] +* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-operator-roles_rosa-sts-about-iam-resources[Cluster-specific Operator IAM role reference] +endif::openshift-rosa[] -* `ocm-role` -* `user-role` - -Whether you manage your clusters using the ROSA CLI (`rosa`) or {cluster-manager} web UI, you must create the account-wide roles, known as `account-roles` in the ROSA CLI, by using the ROSA CLI. These account roles are necessary for your first cluster, and these roles can be used across multiple clusters. These required account roles are: - -* `Worker-Role` -* `Support-Role` -* `Installer-Role` -* `ControlPlane-Role` - -[NOTE] -==== -Role creation does not request your AWS access or secret keys. AWS Security Token Service (STS) is used as the basis of this workflow. AWS STS uses temporary, limited-privilege credentials to provide authentication. -==== - -For more information about creating these roles, see xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies[Account-wide IAM role and policy reference]. - -Cluster-specific Operator roles, known as `operator-roles` in the ROSA CLI, obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage, ingress, and registry. These roles are required by the cluster that you create. These required Operator roles are: - -* `--openshift-cluster-csi-drivers-ebs-cloud-credentials` -* `--openshift-cloud-network-config-controller-credentials` -* `--openshift-machine-api-aws-cloud-credentials` -* `--openshift-cloud-credential-operator-cloud-credentials` -* `--openshift-image-registry-installer-cloud-credentials` -* `--openshift-ingress-operator-cloud-credentials` - -For more information on creating these roles, see xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-operator-roles_rosa-sts-about-iam-resources[Cluster-specific Operator IAM role reference]. 
+//Roles required to use {cluster-manager} include::modules/rosa-sts-about-ocm-role.adoc[leveloffset=+1] +ifdef::openshift-rosa[] [discrete] [id="additional-resources-about-ocm-role"] [role="_additional-resources"] == Additional resources -* See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-understanding-ocm-role[Understanding the OpenShift Cluster Manager role] +* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-understanding-ocm-role[Understanding the {cluster-manager} role] +endif::openshift-rosa[] include::modules/rosa-sts-ocm-role-creation.adoc[leveloffset=+2] + +[role="_additional-resources"] +[id="additional-resources_ocm-role-creation_{context}"] +.Additional resources +* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies-creation-methods_rosa-sts-about-iam-resources[Methods of account-wide role creation] + include::modules/rosa-sts-about-user-role.adoc[leveloffset=+1] include::modules/rosa-sts-user-role-creation.adoc[leveloffset=+2] @@ -57,14 +45,30 @@ include::modules/rosa-sts-user-role-creation.adoc[leveloffset=+2] If you unlink or delete your `user-role` IAM role prior to deleting your cluster, an error prevents you from deleting your cluster. You must create or relink this role to proceed with the deletion process. See xref:../support/troubleshooting/rosa-troubleshooting-deployments.adoc#rosa-troubleshooting-cluster-deletion_rosa-troubleshooting-cluster-deployments[Repairing a cluster that cannot be deleted] for more information. ==== +[role="_additional-resources"] +[id="additional-resources_user-role-creation_{context}"] +.Additional resources +* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies-creation-methods_rosa-sts-about-iam-resources[Methods of account-wide role creation] + include::modules/rosa-sts-aws-requirements-association-concept.adoc[leveloffset=+1] include::modules/rosa-sts-aws-requirements-creating-association.adoc[leveloffset=+2] include::modules/rosa-sts-aws-requirements-creating-multi-association.adoc[leveloffset=+2] + +ifndef::openshift-rosa-hcp[] include::modules/rosa-sts-aws-requirements-attaching-boundary-policy.adoc[leveloffset=+1] +endif::openshift-rosa-hcp[] [role="_additional-resources"] == Additional resources -* See link:https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html[Permissions boundaries for IAM entities] (AWS documentation). -* See xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-account-wide-sts-roles-and-policies_rosa-sts-creating-a-cluster-quickly[Creating the account-wide STS roles and policies]. -* See xref:../support/troubleshooting/rosa-troubleshooting-iam-resources.adoc#rosa-sts-ocm-roles-and-permissions-troubleshooting[Troubleshooting IAM roles]. -* See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies[Account-wide IAM role and policy reference] for a list of IAM roles needed for cluster creation. 
\ No newline at end of file +* link:https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html[Permissions boundaries for IAM entities (AWS documentation)] +ifdef::openshift-rosa[] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-account-wide-sts-roles-and-policies_rosa-sts-creating-a-cluster-quickly[Creating the account-wide STS roles and policies] +endif::openshift-rosa[] +ifdef::openshift-rosa-hcp[] +* xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-account-wide-sts-roles-and-policies_rosa-hcp-sts-creating-a-cluster-quickly[Creating account-wide roles and policies] +endif::openshift-rosa-hcp[] +* xref:../support/troubleshooting/rosa-troubleshooting-iam-resources.adoc#rosa-sts-ocm-roles-and-permissions-troubleshooting[Troubleshooting IAM roles] +ifdef::openshift-rosa[] +* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies[Account-wide IAM role and policy reference] +endif::openshift-rosa[] + diff --git a/rosa_planning/rosa-sts-required-aws-service-quotas.adoc b/rosa_planning/rosa-sts-required-aws-service-quotas.adoc index d397920bd51a..07764c2f9272 100644 --- a/rosa_planning/rosa-sts-required-aws-service-quotas.adoc +++ b/rosa_planning/rosa-sts-required-aws-service-quotas.adoc @@ -11,4 +11,9 @@ Review this list of the required Amazon Web Service (AWS) service quotas that ar include::modules/rosa-required-aws-service-quotas.adoc[leveloffset=+1] == Next steps -* xref:../rosa_planning/rosa-sts-setting-up-environment.adoc#rosa-sts-setting-up-environment[Set up the environment and install ROSA] +ifndef::openshift-rosa-hcp[] +* xref:../rosa_planning/rosa-sts-setting-up-environment.adoc#rosa-sts-setting-up-environment[Setting up the environment] +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +* xref:../rosa_planning/rosa-sts-setting-up-environment.adoc#rosa-hcp-setting-up-environment[Setting up the environment] +endif::openshift-rosa-hcp[] \ No newline at end of file diff --git a/rosa_planning/rosa-sts-setting-up-environment.adoc b/rosa_planning/rosa-sts-setting-up-environment.adoc index 2ae41b6ccd6a..f774af2fb7fe 100644 --- a/rosa_planning/rosa-sts-setting-up-environment.adoc +++ b/rosa_planning/rosa-sts-setting-up-environment.adoc @@ -1,25 +1,54 @@ :_mod-docs-content-type: ASSEMBLY +ifndef::openshift-rosa-hcp[] [id="rosa-sts-setting-up-environment"] = Setting up the environment for using STS include::_attributes/attributes-openshift-dedicated.adoc[] :context: rosa-sts-setting-up-environment +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +[id="rosa-hcp-setting-up-environment"] += Setting up the environment +include::_attributes/attributes-openshift-dedicated.adoc[] +:context: rosa-hcp-setting-up-environment +endif::openshift-rosa-hcp[] toc::[] After you meet the AWS prerequisites, set up your environment and install {product-title} (ROSA). 
-include::snippets/rosa-sts.adoc[] +//For ROSA clusters +ifndef::openshift-rosa-hcp[] + +include::snippets/rosa-sts.adoc[leveloffset=+0] include::modules/rosa-sts-setting-up-environment.adoc[leveloffset=+1] +endif::openshift-rosa-hcp[] + +//For HCP clusters +ifdef::openshift-rosa-hcp[] + +include::modules/rosa-getting-started-install-configure-cli-tools.adoc[leveloffset=+1] + +endif::openshift-rosa-hcp[] + [id="next-steps_rosa-sts-setting-up-environment"] == Next steps - +ifndef::openshift-rosa-hcp[] * xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Create a ROSA cluster with STS quickly] or xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[create a cluster using customizations]. +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +* xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Create a ROSA with HCP cluster] +endif::openshift-rosa-hcp[] [id="additional-resources"] [role="_additional-resources"] == Additional resources - +ifndef::openshift-rosa-hcp[] * xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS Prerequisites] * xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[Required AWS service quotas and increase requests] +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-hcp-prereqs[AWS Prerequisites] +// TODO OSDOCS-11789: AWS quotas for HCP +endif::openshift-rosa-hcp[] diff --git a/rosa_release_notes/rosa-release-notes.adoc b/rosa_release_notes/rosa-release-notes.adoc index 586853dda86a..ef84ee6fb09e 100644 --- a/rosa_release_notes/rosa-release-notes.adoc +++ b/rosa_release_notes/rosa-release-notes.adoc @@ -8,7 +8,7 @@ toc::[] {product-title} (ROSA) is a fully-managed, turnkey application platform that allows you to focus on delivering value to your customers by building and deploying applications. Red{nbsp}Hat and AWS site reliability engineering (SRE) experts manage the underlying platform so you do not have to worry about the complexity of infrastructure management. ROSA provides seamless integration with a wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to further accelerate the building and delivering of differentiating experiences to your customers. -{product-title} clusters are available on the link:https://console.redhat.com/openshift[Hybrid Cloud Console]. With the Red{nbsp}Hat OpenShift Cluster Manager application for ROSA, you can deploy {product-title} clusters to either on-premises or cloud environments. +{product-title} clusters are available on the link:https://console.redhat.com/openshift[Hybrid Cloud Console]. With the Red{nbsp}Hat {cluster-manager} application for ROSA, you can deploy {product-title} clusters to either on-premises or cloud environments. [id="rosa-new-changes-and-updates_{context}"] == New changes and updates @@ -158,7 +158,13 @@ For more information on region availabilities, see xref:../rosa_architecture/ros * **Configurable process identifier (PID) limits.** With the release of ROSA CLI (`rosa`) version 1.2.31, administrators can use the `rosa create kubeletconfig` and `rosa edit kubeletconfig` commands to set the maximum PIDs for an existing cluster. 
For more information, see link:https://access.redhat.com/articles/7033551[Changing the maximum number of process IDs per pod (podPidsLimit) for ROSA]. -* **Configure custom security groups.** With the release of ROSA CLI (`rosa`) version 1.2.31, administrators can use the `rosa create` command or the OpenShift Cluster Manager to create a new cluster or a new machine pool with up to 5 additional custom security groups. Configuring custom security groups gives administrators greater control over resource access in new clusters and machine pools. For more information, see xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-aws-prereqs.adoc#rosa-security-groups_prerequisites[Security groups]. +* **Configure custom security groups.** With the release of ROSA CLI (`rosa`) version 1.2.31, administrators can use the `rosa create` command or the OpenShift Cluster Manager to create a new cluster or a new machine pool with up to 5 additional custom security groups. Configuring custom security groups gives administrators greater control over resource access in new clusters and machine pools. For more information, see +ifdef::openshift-rosa[] +xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-aws-prereqs.adoc#rosa-security-groups_prerequisites[Security groups]. +endif::openshift-rosa[] +ifdef::openshift-rosa-hcp[] +xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-security-groups_rosa-hcp-prereqs[Security groups]. +endif::openshift-rosa-hcp[] * **Command update.** With the release of ROSA CLI (`rosa`) version 1.2.28, a new command, `rosa describe machinepool`, was added that allows you to check detailed information regarding a specific ROSA cluster machine pool. For more information, see xref:../cli_reference/rosa_cli/rosa-manage-objects-cli.adoc#rosa-describe-machinepool_rosa-managing-objects-cli[describe machinepool]. @@ -233,4 +239,4 @@ Some features available in previous releases have been deprecated or removed. De * **ROSA non-STS deployment mode.** ROSA non-STS deployment mode is no longer the preferred method for new clusters. Instead, users must deploy ROSA with the STS mode. This deprecation is in line with our new ROSA provisioning wizard UI experience at https://console.redhat.com/openshift/create/rosa/wizard. -* **Label removal on core namespaces.** ROSA is no longer labeling OpenShift core using the `name` label. Customers should migrate to referencing the `kubernetes.io/metadata.name` label if needed for Network Policies or other use cases. \ No newline at end of file +* **Label removal on core namespaces.** ROSA is no longer labeling OpenShift core using the `name` label. Customers should migrate to referencing the `kubernetes.io/metadata.name` label if needed for Network Policies or other use cases. diff --git a/security/audit-log-view.adoc b/security/audit-log-view.adoc index 4a29965d3ba4..db79c924e6db 100644 --- a/security/audit-log-view.adoc +++ b/security/audit-log-view.adoc @@ -22,7 +22,8 @@ include::modules/security-audit-log-filtering.adoc[leveloffset=+1] // Gathering audit logs include::modules/gathering-data-audit-logs.adoc[leveloffset=+1] - +//removed xrefs for hcp migration local test builds. Will update conditionals once hcp docs can be tested with live builds. 
+ifndef::openshift-rosa-hcp[]
[id="viewing-audit-logs-additional-resources"]
[role="_additional-resources"]
== Additional resources
@@ -33,4 +34,4 @@
ifndef::openshift-rosa,openshift-dedicated[]
* xref:../security/audit-log-policy-config.adoc#audit-log-policy-config[Configuring the audit log policy]
endif::[]
* xref:../observability/logging/log_collection_forwarding/log-forwarding.adoc#log-forwarding[About log forwarding]
-
+endif::openshift-rosa-hcp[]
diff --git a/snippets/rosa-existing-vpc-requirements.adoc b/snippets/rosa-existing-vpc-requirements.adoc
new file mode 100644
index 000000000000..bc37f78551af
--- /dev/null
+++ b/snippets/rosa-existing-vpc-requirements.adoc
@@ -0,0 +1,28 @@
+//Included in:
+// * modules/rosa-hcp-vpc-manual.adoc
+// * rosa_planning/rosa-cloud-expert-prereq-checklist.adoc
+
+:_mod-docs-content-type: SNIPPET
+
+Your VPC must meet the requirements shown in the following table.
+
+.Requirements for your VPC
+[options="header",cols="50,50"]
+|===
+| Requirement | Details
+
+| VPC name
+| You need to have the specific VPC name and ID when creating your cluster.
+
+| Availability zones
+| You need one availability zone for a single-zone cluster, and you need three availability zones for a multi-zone cluster.
+
+| CIDR range
+| Your VPC CIDR range should match your machine CIDR.
+
+| Public subnet
+| You must have one public subnet with a NAT gateway for public clusters. Private clusters do not need a public subnet.
+
+| DNS hostnames and resolution
+| You must ensure that DNS hostnames and DNS resolution are enabled.
+|===
\ No newline at end of file
diff --git a/snippets/rosa-hcp-rn.adoc b/snippets/rosa-hcp-rn.adoc
index 7bd29c9930e5..023c229696e9 100644
--- a/snippets/rosa-hcp-rn.adoc
+++ b/snippets/rosa-hcp-rn.adoc
@@ -3,4 +3,4 @@
// * rosa_release_notes/rosa-release-notes.adoc

:_mod-docs-content-type: SNIPPET
-* **Hosted control planes.** {hcp-title-first} clusters are now available as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature. This new architecture provides a lower-cost, more resilient ROSA architecture. For more information, see xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc[Creating {hcp-title} clusters using the default options].
\ No newline at end of file
+* **Hosted control planes.** {hcp-title-first} clusters are now available as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature. This new architecture provides a lower-cost, more resilient ROSA architecture. For more information, see xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Creating {hcp-title} clusters using the default options].
\ No newline at end of file
diff --git a/snippets/rosa-sts.adoc b/snippets/rosa-sts.adoc
index 22abfb5af77d..2ce7395abcc7 100644
--- a/snippets/rosa-sts.adoc
+++ b/snippets/rosa-sts.adoc
@@ -2,5 +2,5 @@
[TIP]
====
-AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS (ROSA) because it provides enhanced security.
+AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on {product-title} because it provides enhanced security.
==== diff --git a/support/troubleshooting/rosa-troubleshooting-iam-resources.adoc b/support/troubleshooting/rosa-troubleshooting-iam-resources.adoc index 8c8521c5a30a..ecd1d060fc4b 100644 --- a/support/troubleshooting/rosa-troubleshooting-iam-resources.adoc +++ b/support/troubleshooting/rosa-troubleshooting-iam-resources.adoc @@ -8,6 +8,18 @@ toc::[] include::modules/rosa-sts-ocm-and-user-role-troubleshooting.adoc[leveloffset=+1] include::modules/rosa-sts-ocm-role-creation.adoc[leveloffset=+2] + +[role="_additional-resources"] +[id="additional-resources_ocm-role-creation_{context}"] +.Additional resources +* xref:../../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies-creation-methods_rosa-sts-about-iam-resources[Methods of account-wide role creation] + include::modules/rosa-sts-user-role-creation.adoc[leveloffset=+2] + +[role="_additional-resources"] +[id="additional-resources_user-role-creation_{context}"] +.Additional resources +* xref:../../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies-creation-methods_rosa-sts-about-iam-resources[Methods of account-wide role creation] + include::modules/rosa-sts-aws-requirements-creating-association.adoc[leveloffset=+2] include::modules/rosa-sts-aws-requirements-creating-multi-association.adoc[leveloffset=+2] diff --git a/upgrading/rosa-hcp-upgrading.adoc b/upgrading/rosa-hcp-upgrading.adoc index 0d1a222b591e..603a5202ca7b 100644 --- a/upgrading/rosa-hcp-upgrading.adoc +++ b/upgrading/rosa-hcp-upgrading.adoc @@ -8,8 +8,13 @@ toc::[] include::modules/rosa-hcp-upgrade-options.adoc[leveloffset=+1] -// .Additional resources -// * ../cli_reference/rosa_cli/rosa-manage-objects-cli.adoc#rosa-edit-machinepool_rosa-managing-objects-cli[ROSA CLI reference: `rosa edit machinepool`] +.Additional resources +ifdef::openshift-rosa-hcp[] +* link:https://docs.openshift.com/rosa/cli_reference/rosa_cli/rosa-manage-objects-cli.html#rosa-edit-machinepool_rosa-managing-objects-cli[ROSA CLI reference: `rosa edit machinepool`] +endif::openshift-rosa-hcp[] +ifndef::openshift-rosa-hcp[] +* xref:../cli_reference/rosa_cli/rosa-manage-objects-cli.adoc#rosa-edit-machinepool_rosa-managing-objects-cli[ROSA CLI reference: `rosa edit machinepool`] +endif::openshift-rosa-hcp[] //This cannot be a module if we want to use the xrefs [id="rosa-lifecycle-policy_{context}"] @@ -37,36 +42,3 @@ include::modules/rosa-hcp-upgrading-cli-control-plane.adoc[leveloffset=+1] include::modules/rosa-hcp-upgrading-cli-machinepool.adoc[leveloffset=+1] -[id="rosa-hcp-upgrading-cli-cluster_{context}"] -== Upgrading the whole cluster with the ROSA CLI - -Upgrading the entire cluster involves upgrading both the hosted control plane and nodes in the machine pools. However, these components cannot be upgraded at the same time. They must be upgraded in sequence. This can be done in any order. However, to maintain compatibility between nodes in the cluster, nodes in machine pools cannot use a newer version than the hosted control plane. Therefore, if both the hosted control plane and the nodes in your machine pools require upgrade to the same OpenShift version, you must upgrade the hosted control plane first, followed by the machine pools. - -[discrete] -=== Prerequisites -* You have installed and configured the latest version of the ROSA CLI. -* No other upgrades are in progress or scheduled to take place at the same time as this upgrade. 
-
-
-ifdef::context[:prevcontext: {context}]
-:context: rosa-hcp-upgrading-whole-cluster
-
-include::modules/rosa-hcp-upgrading-cli-control-plane.adoc[leveloffset=+2]
-
-ifdef::prevcontext[:context: {prevcontext}]
-ifdef::context[:prevcontext: {context}]
-
-:context: rosa-hcp-upgrading-whole-cluster
-
-include::modules/rosa-hcp-upgrading-cli-machinepool.adoc[leveloffset=+2]
-
-ifdef::prevcontext[:context: {prevcontext}]
-ifndef::prevcontext[:!context:]
-//LB: Remove until here if we don't want the "whole cluster" upgrade section
-
-include::modules/rosa-hcp-upgrading-cli-tutorial.adoc[leveloffset=+1]
-
-include::modules/rosa-upgrading-manual-ocm.adoc[leveloffset=+1]
-
-include::modules/rosa-deleting-cluster-upgrade-ocm.adoc[leveloffset=+1]
-
diff --git a/welcome/cloud-experts-rosa-hcp-sts-explained.adoc b/welcome/cloud-experts-rosa-hcp-sts-explained.adoc
index 4f8bb5f4e6d7..ed97cc7e4c06 100644
--- a/welcome/cloud-experts-rosa-hcp-sts-explained.adoc
+++ b/welcome/cloud-experts-rosa-hcp-sts-explained.adoc
@@ -42,17 +42,23 @@ Security features for AWS STS include:
* *OpenID Connect (OIDC)* - A mechanism for cluster Operators to authenticate with AWS, assume the cluster roles through a trust policy, and obtain temporary credentials from AWS IAM STS to make the required API calls.
* *Roles and policies* - The roles and policies used by {hcp-title} can be divided into account-wide roles and policies and Operator roles and policies.
+
-The policies determine the allowed actions for each of the roles. See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources for ROSA clusters that use STS] for more details about the individual roles and policies and xref:../rosa_planning/rosa-sts-ocm-role.adoc#rosa-sts-ocm-role[ROSA IAM role resource] for more details about trust policies.
+The policies determine the allowed actions for each of the roles.
+ifdef::openshift-rosa[]
+See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] for more details about the individual roles and policies. See xref:../rosa_planning/rosa-sts-ocm-role.adoc#rosa-sts-ocm-role[ROSA IAM role resource] for more details about trust policies.
+endif::openshift-rosa[]
+ifdef::openshift-rosa-hcp[]
+See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] for more details about the individual roles and policies. See xref:../rosa_planning/rosa-hcp-prepare-iam-roles-resources.adoc#rosa-hcp-prepare-iam-roles-resources[Required IAM roles and resources] for more details on preparing these resources in your cluster.
+endif::openshift-rosa-hcp[]
+
--
** The account-wide roles are:
-+
-*** ManagedOpenShift-Installer-Role
-*** ManagedOpenShift-Worker-Role
-*** ManagedOpenShift-Support-Role
-+
+
+*** `<prefix>-HCP-ROSA-Worker-Role`
+*** `<prefix>-HCP-ROSA-Support-Role`
+*** `<prefix>-HCP-ROSA-Installer-Role`
+
** The account-wide AWS-managed policies are:
-+
+
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAInstallerPolicy.html[ROSAInstallerPolicy]
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAWorkerInstancePolicy.html[ROSAWorkerInstancePolicy]
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSASRESupportPolicy.html[ROSASRESupportPolicy]
@@ -73,7 +79,7 @@ Certain policies are used by the cluster Operator roles, listed below. The Opera
====
+
** The Operator roles are:
-+
+
*** <prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials
*** <prefix>-openshift-cloud-network-config-controller-cloud-credentials
*** <prefix>-openshift-machine-api-aws-cloud-credentials
@@ -96,8 +102,7 @@ Deploying a {hcp-title} cluster follows the following steps:

During the cluster creation process, the ROSA CLI creates the required JSON files for you and outputs the commands you need. If desired, the ROSA CLI can also run the commands for you.

-The ROSA CLI can automatically create the roles for you, or you can manually create them by using the `--mode manual` or `--mode auto` flags. For further details about deployment, see xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations].
-//Change the above xref when we have HCP specific docs
+The ROSA CLI can create the roles for you automatically when you specify the `--mode auto` flag, or you can create them manually by specifying the `--mode manual` flag. For further details about deployment, see xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-cluster-using-customizations_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations].

[id="hcp-sts-process"]
== {hcp-title} workflow
From a9f9fda4940aabbd53c7422bd1dd5ae69beff22e Mon Sep 17 00:00:00 2001
From: dfitzmau
Date: Fri, 24 Jan 2025 16:37:00 +0000
Subject: [PATCH 157/669] OCPBUGS-48847: Documented IPoIB support for nmstate

---
 ...stall-post-installation-configuration.adoc |  3 +
 ...manifest-file-customized-br-ex-bridge.adoc |  2 +-
 ...reating-infiniband-interface-on-nodes.adoc | 64 +++++++++++++++++++
 3 files changed, 68 insertions(+), 1 deletion(-)
 create mode 100644 modules/virt-creating-infiniband-interface-on-nodes.adoc

diff --git a/installing/installing_bare_metal/ipi/ipi-install-post-installation-configuration.adoc b/installing/installing_bare_metal/ipi/ipi-install-post-installation-configuration.adoc
index e7e78adba8c5..29dd146b2118 100644
--- a/installing/installing_bare_metal/ipi/ipi-install-post-installation-configuration.adoc
+++ b/installing/installing_bare_metal/ipi/ipi-install-post-installation-configuration.adoc
@@ -17,6 +17,9 @@ include::modules/nw-enabling-a-provisioning-network-after-installation.adoc[leve
// Creating a manifest object that includes a customized `br-ex` bridge
include::modules/creating-manifest-file-customized-br-ex-bridge.adoc[leveloffset=+1]

+// Creating an InfiniBand interface on nodes
+include::modules/virt-creating-infiniband-interface-on-nodes.adoc[leveloffset=+1]
+
// Services for a user-managed load balancer
include::modules/nw-osp-services-external-load-balancer.adoc[leveloffset=+1]

diff --git a/modules/creating-manifest-file-customized-br-ex-bridge.adoc b/modules/creating-manifest-file-customized-br-ex-bridge.adoc
index 352d807dbb30..6b8eae334994 100644
--- a/modules/creating-manifest-file-customized-br-ex-bridge.adoc
+++ b/modules/creating-manifest-file-customized-br-ex-bridge.adoc
@@ -17,7 +17,7 @@ endif::[]

:_mod-docs-content-type: PROCEDURE
[id="creating-manifest-file-customized-br-ex-bridge_{context}"]
-== Creating a manifest object that includes a customized `br-ex` bridge
+= Creating a manifest object that includes a customized `br-ex` bridge

ifndef::postinstall-bare-metal-ipi,postinstall-bare-metal-upi[]
As an alternative to using the `configure-ovs.sh` shell script to set a `br-ex` bridge on a bare-metal platform, you can create a `MachineConfig` object that includes an NMState configuration file. The NMState configuration file creates a customized `br-ex` bridge network configuration on each node in your cluster.
diff --git a/modules/virt-creating-infiniband-interface-on-nodes.adoc b/modules/virt-creating-infiniband-interface-on-nodes.adoc
new file mode 100644
index 000000000000..264fc5bf953c
--- /dev/null
+++ b/modules/virt-creating-infiniband-interface-on-nodes.adoc
@@ -0,0 +1,64 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_bare_metal/ipi/ipi-install-post-installation-configuration.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="virt-creating-infiniband-interface-on-nodes_{context}"]
+= Creating an IP over InfiniBand interface on nodes
+
+On the {product-title} web console, you can install a Red{nbsp}Hat certified third-party Operator, such as the NVIDIA Network Operator, that supports InfiniBand (IPoIB) mode. Typically, you would use the third-party Operator with other vendor infrastructure to manage resources in an {product-title} cluster. To create an IPoIB interface on nodes in your cluster, you must define an InfiniBand (IPoIB) interface in a `NodeNetworkConfigurationPolicy` (NNCP) manifest file.
+
+[IMPORTANT]
+====
+The {product-title} documentation describes only how to define the IPoIB interface configuration in a `NodeNetworkConfigurationPolicy` (NNCP) manifest file. You must refer to the NVIDIA and other third-party vendor documentation for the majority of the configuration steps. Red{nbsp}Hat support does not extend to anything external to the NNCP configuration.
+
+For more information about the NVIDIA Operator, see link:https://docs.nvidia.com/networking/display/kubernetes2410/getting+started+with+red+hat+openshift[Getting Started with Red{nbsp}Hat OpenShift] (NVIDIA Docs Hub).
+====
+
+.Prerequisites
+
+* You installed a Red{nbsp}Hat certified third-party Operator that supports an IPoIB interface.
+
+.Procedure
+
+. Create or edit a `NodeNetworkConfigurationPolicy` (NNCP) manifest file, and then specify an IPoIB interface in the file.
++
+[source,yaml]
+----
+apiVersion: nmstate.io/v1
+kind: NodeNetworkConfigurationPolicy
+metadata:
+  name: worker-0-ipoib
+spec:
+# ...
+  interfaces:
+  - description: ""
+    infiniband:
+      mode: datagram <1>
+      pkey: "0xffff" <2>
+    ipv4:
+      address:
+      - ip: 100.125.3.4
+        prefix-length: 16
+      dhcp: false
+      enabled: true
+    ipv6:
+      enabled: false
+    name: ibp27s0
+    state: up
+    type: infiniband <3>
+# ...
+----
+<1> `datagram` is the default mode for an IPoIB interface, and this mode optimizes performance and latency. `connected` mode is also supported, but consider using it only when you need to adjust the maximum transmission unit (MTU) value to improve node connectivity with surrounding network devices.
+<2> Supports a string or an integer value. The parameter defines the protection key, or _P-key_, for the interface for authentication and encrypted communications with a third-party vendor, such as NVIDIA. The values `None` and `0xffff` indicate the protection key for the base interface in an InfiniBand system.
+<3> Sets the type of the interface to `infiniband`.
+
+. Apply the NNCP configuration to your cluster by running the following command. The Kubernetes NMState Operator can then create an IPoIB interface on each node.
++
+[source,terminal]
+----
+$ oc apply -f <nncp_file_name> <1>
+----
+<1> Replace `<nncp_file_name>` with the name of your NNCP file.
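+
+.Verification
+
+As a minimal sketch, assuming the policy is named `worker-0-ipoib` as in the preceding example, you can confirm that the NNCP was applied by running the following command. The `nncp` short name refers to the `NodeNetworkConfigurationPolicy` resource:
+
+[source,terminal]
+----
+$ oc get nncp worker-0-ipoib
+----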
From 9af44230e997f61d1aea763df59365162ff39e05 Mon Sep 17 00:00:00 2001 From: sbeskin Date: Mon, 10 Feb 2025 14:24:59 +0200 Subject: [PATCH 158/669] CNV-55143 --- ...-configure-higher-vm-workload-density.adoc | 27 ++++++++++++++----- 1 file changed, 21 insertions(+), 6 deletions(-) diff --git a/modules/virt-using-wasp-agent-to-configure-higher-vm-workload-density.adoc b/modules/virt-using-wasp-agent-to-configure-higher-vm-workload-density.adoc index 160a7dde02f6..0deb83fc4252 100644 --- a/modules/virt-using-wasp-agent-to-configure-higher-vm-workload-density.adoc +++ b/modules/virt-using-wasp-agent-to-configure-higher-vm-workload-density.adoc @@ -13,6 +13,8 @@ The `wasp-agent` component facilitates memory overcommitment by assigning swap r Swap resources can be only assigned to virtual machine workloads (VM pods) of the `Burstable` Quality of Service (QoS) class. VM pods of the `Guaranteed` QoS class and pods of any QoS class that do not belong to VMs cannot swap resources. For descriptions of QoS classes, see link:https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/[Configure Quality of Service for Pods] (Kubernetes documentation). + +Using `spec.domain.resources.requests.memory` in the VM manifest disables the memory overcommit configuration. Use `spec.domain.memory.guest` instead. ==== .Prerequisites @@ -76,7 +78,8 @@ spec: [Unit] Description=Provision and enable swap ConditionFirstBoot=no - + ConditionPathExists=!/var/tmp/swapfile + [Service] Type=oneshot Environment=SWAP_SIZE_MB=5000 @@ -84,13 +87,25 @@ spec: sudo chmod 600 /var/tmp/swapfile && \ sudo mkswap /var/tmp/swapfile && \ sudo swapon /var/tmp/swapfile && \ - free -h && \ - sudo systemctl set-property --runtime system.slice MemorySwapMax=0 IODeviceLatencyTargetSec=\"/ 50ms\"" - + free -h" + [Install] RequiredBy=kubelet-dependencies.target enabled: true name: swap-provision.service + - contents: | + [Unit] + Description=Restrict swap for system slice + ConditionFirstBoot=no + + [Service] + Type=oneshot + ExecStart=/bin/sh -c "sudo systemctl set-property --runtime system.slice MemorySwapMax=0 IODeviceLatencyTargetSec=\"/ 50ms\"" + + [Install] + RequiredBy=kubelet-dependencies.target + enabled: true + name: cgroup-system-slice-config.service ---- + To have enough swap space for the worst-case scenario, make sure to have at least as much swap space provisioned as overcommitted RAM. 
Calculate the amount of swap space to be provisioned on a node by using the following formula: @@ -174,9 +189,9 @@ spec: - name: SWAP_UTILIZATION_THRESHOLD_FACTOR value: "0.8" - name: MAX_AVERAGE_SWAP_IN_PAGES_PER_SECOND - value: "1000" + value: "1000000000" - name: MAX_AVERAGE_SWAP_OUT_PAGES_PER_SECOND - value: "1000" + value: "1000000000" - name: AVERAGE_WINDOW_SIZE_SECONDS value: "30" - name: VERBOSITY From 35e5a87652ddcb528ba83d5e62e9e20e80373e78 Mon Sep 17 00:00:00 2001 From: Jaromir Hradilek Date: Mon, 27 Jan 2025 17:22:55 +0100 Subject: [PATCH 159/669] CNV-51078: Documented bulk VM operations --- modules/virt-controlling-multiple-vms.adoc | 16 ++++++++++++++++ .../managing_vms/virt-controlling-vm-states.adoc | 4 +++- 2 files changed, 19 insertions(+), 1 deletion(-) create mode 100644 modules/virt-controlling-multiple-vms.adoc diff --git a/modules/virt-controlling-multiple-vms.adoc b/modules/virt-controlling-multiple-vms.adoc new file mode 100644 index 000000000000..ce0dd64a0219 --- /dev/null +++ b/modules/virt-controlling-multiple-vms.adoc @@ -0,0 +1,16 @@ +// Module included in the following assemblies: +// +// * virt/managing_vms/virt-controlling-vm-states.adoc + +:_mod-docs-content-type: PROCEDURE +[id="virt-controlling-multiple-vms-web_{context}"] += Controlling the state of multiple virtual machines + +You can start, stop, restart, pause, and unpause multiple virtual machines from the web console. + +.Procedure + +. Navigate to *Virtualization* -> *VirtualMachines* in the web console. +. Optional: To limit the number of displayed virtual machines, select a relevant project from the *Projects* list. +. Select a checkbox next to the virtual machines you want to work with. To select all virtual machines, click the checkbox in the *VirtualMachines* table header. +. Click *Actions* and select the intended action from the menu. diff --git a/virt/managing_vms/virt-controlling-vm-states.adoc b/virt/managing_vms/virt-controlling-vm-states.adoc index efe8d51d9f82..029c2831a76f 100644 --- a/virt/managing_vms/virt-controlling-vm-states.adoc +++ b/virt/managing_vms/virt-controlling-vm-states.adoc @@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[] toc::[] -You can stop, start, restart, and unpause virtual machines from the web console. +You can stop, start, restart, pause, and unpause virtual machines from the web console. You can use xref:../../virt/getting_started/virt-using-the-cli-tools.adoc#virt-using-the-cli-tools[`virtctl`] to manage virtual machine states and perform other actions from the CLI. For example, you can use `virtctl` to force stop a VM or expose a port. 
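+
+As a minimal sketch, assuming a virtual machine named `example-vm`, a forced stop from the CLI might look like the following command; the machine name and the zero-second grace period are illustrative only:
+
+[source,terminal]
+----
+$ virtctl stop example-vm --force --grace-period=0
+----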
@@ -19,3 +19,5 @@ include::modules/virt-restarting-vm-web.adoc[leveloffset=+1] include::modules/virt-pausing-vm-web.adoc[leveloffset=+1] include::modules/virt-unpausing-vm-web.adoc[leveloffset=+1] + +include::modules/virt-controlling-multiple-vms.adoc[leveloffset=+1] From b8db43b481048e0f8dd8d650d77c6e15723c6d0d Mon Sep 17 00:00:00 2001 From: dfitzmau Date: Thu, 30 Jan 2025 12:28:01 +0000 Subject: [PATCH 160/669] OCPBUGS-41970: Updated MTU data type in MACVLAN table --- modules/nw-multus-macvlan-object.adoc | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/modules/nw-multus-macvlan-object.adoc b/modules/nw-multus-macvlan-object.adoc index d11e64ebd3d4..374b1eef60ec 100644 --- a/modules/nw-multus-macvlan-object.adoc +++ b/modules/nw-multus-macvlan-object.adoc @@ -5,9 +5,9 @@ :_mod-docs-content-type: REFERENCE [id="nw-multus-macvlan-object_{context}"] -= Configuration for a macvlan additional network += Configuration for a MACVLAN additional network -The following object describes the configuration parameters for the MACVLAN CNI plugin: +The following object describes the configuration parameters for the MAC Virtual LAN (MACVLAN) Container Network Interface (CNI) plugin: .MACVLAN CNI plugin JSON configuration object [cols=".^2,.^2,.^6",options="header"] @@ -39,7 +39,7 @@ The following object describes the configuration parameters for the MACVLAN CNI |Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used. |`mtu` -|`string` +|`integer` |Optional: The maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |`linkInContainer` From d36f6f32efab63a21309245afa7c9172c4079125 Mon Sep 17 00:00:00 2001 From: Katie Drake Date: Thu, 6 Feb 2025 13:14:46 -0500 Subject: [PATCH 161/669] HCIDOCS-417: Remove IDRAC 8 from Firmware requirements for installing with virtual media table --- ...-firmware-requirements-for-installing-with-virtual-media.adoc | 1 - 1 file changed, 1 deletion(-) diff --git a/modules/ipi-install-firmware-requirements-for-installing-with-virtual-media.adoc b/modules/ipi-install-firmware-requirements-for-installing-with-virtual-media.adoc index 8bd9b2647a2a..2cf81d765138 100644 --- a/modules/ipi-install-firmware-requirements-for-installing-with-virtual-media.adoc +++ b/modules/ipi-install-firmware-requirements-for-installing-with-virtual-media.adoc @@ -29,7 +29,6 @@ Red Hat does not test every combination of firmware, hardware, or other third-pa | 16th Generation | iDRAC 9 | v7.10.70.00 | 15th Generation | iDRAC 9 | v6.10.30.00 and v7.10.70.00 | 14th Generation | iDRAC 9 | v6.10.30.00 -| 13th Generation .2+| iDRAC 8 | v2.75.75.75 or later |==== From 1c438999efac3e0b71b3a2f3d8350d7d29a2002d Mon Sep 17 00:00:00 2001 From: Alex Dellapenta Date: Tue, 4 Feb 2025 13:11:13 -0700 Subject: [PATCH 162/669] Granting user access to OLMv1 extension resources --- _topic_maps/_topic_map.yml | 2 + extensions/ce/user-access-resources.adoc | 35 ++++ .../olmv1-default-cluster-roles-users.adoc | 14 ++ modules/olmv1-finding-ce-resources.adoc | 42 +++++ ...olmv1-granting-user-access-aggregated.adoc | 66 ++++++++ .../olmv1-granting-user-access-binding.adoc | 155 ++++++++++++++++++ 6 files changed, 314 insertions(+) create mode 100644 extensions/ce/user-access-resources.adoc create mode 100644 modules/olmv1-default-cluster-roles-users.adoc create mode 100644 modules/olmv1-finding-ce-resources.adoc create mode 100644 
modules/olmv1-granting-user-access-aggregated.adoc create mode 100644 modules/olmv1-granting-user-access-binding.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index d3a79c36354b..ebee8d276c49 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2100,6 +2100,8 @@ Topics: Topics: - Name: Managing extensions File: managing-ce + - Name: User access to extension resources + File: user-access-resources - Name: Update paths File: update-paths - Name: CRD upgrade safety diff --git a/extensions/ce/user-access-resources.adoc b/extensions/ce/user-access-resources.adoc new file mode 100644 index 000000000000..0ec88cda86ff --- /dev/null +++ b/extensions/ce/user-access-resources.adoc @@ -0,0 +1,35 @@ +:_mod-docs-content-type: ASSEMBLY +[id="user-access-resources"] += User access to extension resources +include::_attributes/common-attributes.adoc[] +:context: user-access-resources + +toc::[] + +After a cluster extension has been installed and is being managed by {olmv1-first}, the extension can often provide `CustomResourceDefinition` objects (CRDs) that expose new API resources on the cluster. Cluster administrators typically have full management access to these resources by default, whereas non-cluster administrator users, or _regular users_, might lack sufficient permissions. + +{olmv1} does not automatically configure or manage role-based access control (RBAC) for regular users to interact with the APIs provided by installed extensions. Cluster administrators must define the required RBAC policy to create, view, or edit these custom resources (CRs) for such users. + +[NOTE] +==== +The RBAC permissions described for user access to extension resources are different from the permissions that must be added to a service account to enable {olmv1}-based initial installation of a cluster extension itself. For more on RBAC requirements while installing an extension, see "Cluster extension permissions" in "Managing extensions". +==== + +[role="_additional-resources"] +.Additional resources +* xref:../../extensions/ce/managing-ce.adoc#managing-ce["Managing extensions" -> "Cluster extension permissions"] + +include::modules/olmv1-default-cluster-roles-users.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources +* link:https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles[User-facing roles] (Kubernetes documentation) + +include::modules/olmv1-finding-ce-resources.adoc[leveloffset=+1] +include::modules/olmv1-granting-user-access-binding.adoc[leveloffset=+1] +include::modules/olmv1-granting-user-access-aggregated.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources +* link:https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles[Aggregated ClusterRoles] (Kubernetes documentation) + diff --git a/modules/olmv1-default-cluster-roles-users.adoc b/modules/olmv1-default-cluster-roles-users.adoc new file mode 100644 index 000000000000..5f7229ec0791 --- /dev/null +++ b/modules/olmv1-default-cluster-roles-users.adoc @@ -0,0 +1,14 @@ +// Module included in the following assemblies: +// +// * extensions/ce/user-access-resources.adoc + +:_mod-docs-content-type: REFERENCE + +[id="olmv1-default-cluster-roles-users_{context}"] += Common default cluster roles for users + +An installed cluster extension might include default cluster roles to determine role-based access control (RBAC) for regular users to API resources provided by the extension. 
A common set of cluster roles can resemble the following policies:
+
+`view` cluster role:: Grants read-only access to all custom resource (CR) objects of specified API resources across the cluster. Intended for regular users who require visibility into the resources without any permissions to modify them. Ideal for monitoring purposes and limited-access viewing.
+`edit` cluster role:: Allows users to modify all CR objects within the cluster. Enables users to create, update, and delete resources, making it suitable for team members who must manage resources but should not control RBAC or manage permissions for others.
+`admin` cluster role:: Provides full permissions, including `create`, `update`, and `delete` verbs, over all custom resource objects for the specified API resources across the cluster.
\ No newline at end of file
diff --git a/modules/olmv1-finding-ce-resources.adoc b/modules/olmv1-finding-ce-resources.adoc
new file mode 100644
index 000000000000..fdcc46e4018e
--- /dev/null
+++ b/modules/olmv1-finding-ce-resources.adoc
@@ -0,0 +1,42 @@
+// Module included in the following assemblies:
+//
+// * extensions/ce/user-access-resources.adoc
+
+:_mod-docs-content-type: PROCEDURE
+
+[id="olmv1-finding-ce-resources_{context}"]
+= Finding API groups and resources exposed by a cluster extension
+
+To create appropriate RBAC policies for granting user access to cluster extension resources, you must know which API groups and resources are exposed by the installed extension. As an administrator, you can inspect custom resource definitions (CRDs) installed on the cluster by using {oc-first}.
+
+.Prerequisites
+
+* A cluster extension has been installed on your cluster.
+
+.Procedure
+
+* To list only the CRDs owned by a specific cluster extension, run the following command with a label selector that targets the extension by name:
++
+[source,terminal]
+----
+$ oc get crds -l 'olm.operatorframework.io/owner-kind=ClusterExtension,olm.operatorframework.io/owner-name=<cluster_extension_name>'
+----
+
+* Alternatively, you can search through all installed CRDs and individually inspect them by CRD name:
+
+.. List all available custom resource definitions (CRDs) currently installed on the cluster by running the following command:
++
+[source,terminal]
+----
+$ oc get crds
+----
++
+Find the CRD you are looking for in the output.
+
+.. Inspect the individual CRD further to find its API groups by running the following command:
++
+[source,terminal]
+----
+$ oc get crd <crd_name> -o yaml
+----
+
diff --git a/modules/olmv1-granting-user-access-aggregated.adoc b/modules/olmv1-granting-user-access-aggregated.adoc
new file mode 100644
index 000000000000..5e44c0fd9cca
--- /dev/null
+++ b/modules/olmv1-granting-user-access-aggregated.adoc
@@ -0,0 +1,66 @@
+// Module included in the following assemblies:
+//
+// * extensions/ce/user-access-resources.adoc
+
+:_mod-docs-content-type: PROCEDURE
+
+[id="olmv1-granting-user-access-aggregated_{context}"]
+= Granting user access to extension resources by using aggregated cluster roles
+
+As a cluster administrator, you can configure role-based access control (RBAC) policies to grant user access to extension resources by using aggregated cluster roles.
+
+To automatically extend existing default cluster roles, you can add _aggregation labels_ by adding one or more of the following labels to a `ClusterRole` object:
+
+.Aggregation labels in a `ClusterRole` object
+[source,yaml]
+----
+# ..
+metadata:
+  labels:
+    rbac.authorization.k8s.io/aggregate-to-admin: "true"
+    rbac.authorization.k8s.io/aggregate-to-edit: "true"
+    rbac.authorization.k8s.io/aggregate-to-view: "true"
+# ..
+----
+
+This allows users who already have the `view`, `edit`, or `admin` roles to interact with the custom resource specified by the `ClusterRole` object without requiring additional role bindings or cluster role bindings for specific users or groups.
+
+.Prerequisites
+
+* A cluster extension has been installed on your cluster.
+* You have a list of API groups and resource names, as described in "Finding API groups and resources exposed by a cluster extension".
+
+.Procedure
+
+. Create an object definition for a cluster role that specifies the API groups and resources provided by the cluster extension, and add an aggregation label to extend one or more existing default cluster roles:
++
+.Example `ClusterRole` object with an aggregation label
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: view-custom-resource-aggregated
+  labels:
+    rbac.authorization.k8s.io/aggregate-to-view: "true"
+rules:
+  - apiGroups:
+    - <api_group>
+    resources:
+    - <resource_name>
+    verbs:
+    - get
+    - list
+    - watch
+----
++
+You can create similar `ClusterRole` objects for `edit` and `admin` with appropriate verbs, such as `create`, `update`, and `delete`. By using aggregation labels, the permissions for the custom resources are added to the default roles.
+
+. Save your object definition to a YAML file.
+
+. Create the object by running the following command:
++
+[source,terminal]
+----
+$ oc create -f <filename>.yaml
+----
\ No newline at end of file
diff --git a/modules/olmv1-granting-user-access-binding.adoc b/modules/olmv1-granting-user-access-binding.adoc
new file mode 100644
index 000000000000..2fb7cb789f13
--- /dev/null
+++ b/modules/olmv1-granting-user-access-binding.adoc
@@ -0,0 +1,155 @@
+// Module included in the following assemblies:
+//
+// * extensions/ce/user-access-resources.adoc
+
+:_mod-docs-content-type: PROCEDURE
+
+[id="olmv1-granting-user-access-binding_{context}"]
+= Granting user access to extension resources by using custom role bindings
+
+As a cluster administrator, you can manually create and configure role-based access control (RBAC) policies to grant user access to extension resources by using custom role bindings.
+
+.Prerequisites
+
+* A cluster extension has been installed on your cluster.
+* You have a list of API groups and resource names, as described in "Finding API groups and resources exposed by a cluster extension".
+
+.Procedure
+
+. If the installed cluster extension does not provide default cluster roles, manually create one or more roles:
+
+.. Consider the use cases for the set of roles described in "Common default cluster roles for users".
++
+For example, create one or more of the following `ClusterRole` object definitions, replacing `<api_group>` and `<resource_name>` with the actual API group and resource names provided by the installed cluster extension:
++
+.Example `view-custom-resource.yaml` file
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: view-custom-resource
+rules:
+- apiGroups:
+  - <api_group>
+  resources:
+  - <resource_name>
+  verbs:
+  - get
+  - list
+  - watch
+----
++
+.Example `edit-custom-resource.yaml` file
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: edit-custom-resource
+rules:
+- apiGroups:
+  - <api_group>
+  resources:
+  - <resource_name>
+  verbs:
+  - get
+  - list
+  - watch
+  - create
+  - update
+  - patch
+  - delete
+----
++
+.Example `admin-custom-resource.yaml` file
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: admin-custom-resource
+rules:
+- apiGroups:
+  - <api_group>
+  resources:
+  - <resource_name>
+  verbs:
+  - '*' <1>
+----
+<1> Setting a wildcard (`*`) in `verbs` allows all actions on the specified resources.
+
+.. Create the cluster roles by running the following command for each YAML file that you created:
++
+[source,terminal]
+----
+$ oc create -f <filename>.yaml
+----
+
+. Associate a cluster role with specific users or groups to grant them the necessary permissions for the resource by binding the cluster role to individual user or group names:
+
+.. Create an object definition for either a _cluster role binding_ to grant access across all namespaces or a _role binding_ to grant access within a specific namespace:
++
+--
+*** The following example cluster role bindings grant read-only `view` access to the custom resource across all namespaces:
++
+.Example `ClusterRoleBinding` object for a user
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: view-custom-resource-binding
+subjects:
+- kind: User
+  name: <user_name>
+roleRef:
+  kind: ClusterRole
+  name: view-custom-resource
+  apiGroup: rbac.authorization.k8s.io
+----
++
+.Example `ClusterRoleBinding` object for a group
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: view-custom-resource-group-binding
+subjects:
+- kind: Group
+  name: <group_name>
+roleRef:
+  kind: ClusterRole
+  name: view-custom-resource
+  apiGroup: rbac.authorization.k8s.io
+----
+
+*** The following role binding restricts `edit` permissions to a specific namespace:
++
+.Example `RoleBinding` object for a user
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: edit-custom-resource-edit-binding
+  namespace: <namespace>
+subjects:
+- kind: User
+  name: <user_name>
+roleRef:
+  kind: ClusterRole
+  name: edit-custom-resource
+  apiGroup: rbac.authorization.k8s.io
+----
+--
+
+.. Save your object definition to a YAML file.
+
+..
Create the object by running the following command: ++ +[source,terminal] +---- +$ oc create -f .yaml +---- \ No newline at end of file From d0641e74d4f0f8c6f71713354864b7a5c21dcf7b Mon Sep 17 00:00:00 2001 From: Max Bridges Date: Wed, 27 Nov 2024 11:44:26 -0500 Subject: [PATCH 163/669] Add single-stack IPv6 for ShiftStack GH#85554 OSDOCS-12598 --- ...installing-openstack-installer-custom.adoc | 5 + ...installation-configuration-parameters.adoc | 15 +++ ...on-configuring-shiftstack-single-ipv6.adoc | 92 +++++++++++++++++++ modules/nw-ovn-kubernetes-features.adoc | 4 +- 4 files changed, 114 insertions(+), 2 deletions(-) create mode 100644 modules/installation-configuring-shiftstack-single-ipv6.adoc diff --git a/installing/installing_openstack/installing-openstack-installer-custom.adoc b/installing/installing_openstack/installing-openstack-installer-custom.adoc index de78454f63b7..e3f0948fa9da 100644 --- a/installing/installing_openstack/installing-openstack-installer-custom.adoc +++ b/installing/installing_openstack/installing-openstack-installer-custom.adoc @@ -55,6 +55,11 @@ include::modules/installation-osp-config-yaml.adoc[leveloffset=+2] //Dual-stack networking include::modules/install-osp-dualstack.adoc[leveloffset=+2] include::modules/install-osp-deploy-dualstack.adoc[leveloffset=+3] +include::modules/installation-configuring-shiftstack-single-ipv6.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources +* See xref:../../disconnected/mirroring/installing-mirroring-creating-registry.adoc#installing-mirroring-creating-registry[Creating a mirror registry with mirror registry for Red Hat OpenShift] include::modules/installation-osp-external-lb-config.adoc[leveloffset=+2] // include::modules/installation-osp-setting-worker-affinity.adoc[leveloffset=+1] diff --git a/modules/installation-configuration-parameters.adoc b/modules/installation-configuration-parameters.adoc index 4c89056cc088..b0c0b03e3baf 100644 --- a/modules/installation-configuration-parameters.adoc +++ b/modules/installation-configuration-parameters.adoc @@ -1264,6 +1264,21 @@ You can use this property to exceed the default persistent volume (PV) limit for You can also use this property to enable the QEMU guest agent by including the `hw_qemu_guest_agent` property with a value of `yes`. |A list of key-value string pairs. For example, `["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"]`. +|platform: + openstack: + controlPlanePort: + fixedIPs: +|Subnets for the machines to use. +|A list of subnet names or UUIDs to use in cluster installation. + + +|platform: + openstack: + controlPlanePort: + network: +|A network for the machines to use. +|The UUID or name of an {rh-openstack} network to use in cluster installation. + |platform: openstack: defaultMachinePlatform: diff --git a/modules/installation-configuring-shiftstack-single-ipv6.adoc b/modules/installation-configuring-shiftstack-single-ipv6.adoc new file mode 100644 index 000000000000..5131e4d6c0bc --- /dev/null +++ b/modules/installation-configuring-shiftstack-single-ipv6.adoc @@ -0,0 +1,92 @@ +// Module included in the following assemblies: +// +// * installing/installing_openstack/installing-openstack-installer-custom.adoc + +:_mod-docs-content-type: PROCEDURE +[id="installation-configuring-shiftstack-single-ipv6_{context}"] += Configuring a cluster with single-stack IPv6 networking + +You can create a single-stack IPv6 cluster on {rh-openstack-first} after you configure your {rh-openstack} deployment. 
+
+IMPORTANT: You cannot convert a dual-stack cluster into a single-stack IPv6 cluster.
+
+.Prerequisites
+
+* Your {rh-openstack} deployment has an existing network with a DHCPv6-stateful IPv6 subnet to use as the machine network.
+* DNS is configured for the existing IPv6 subnet.
+* The IPv6 subnet is added to a {rh-openstack} router, and the router is configured to send router advertisements (RAs).
+* You added any additional IPv6 subnets that are used in the cluster to an {rh-openstack} router to enable router advertisements.
++
+NOTE: Using an IPv6 SLAAC subnet is not supported because any `dns_nameservers` addresses are not enforced by {rh-openstack} Neutron.
+* You have a mirror registry with an IPv6 interface.
+* The {rh-openstack} network accepts a minimum MTU size of 1442 bytes.
+* You created API and ingress virtual IP addresses (VIPs) as {rh-openstack} ports on the machine network and included those addresses in the `install-config.yaml` file.
+
+.Procedure
+
+. Create the API VIP port on the network by running the following command:
++
+[source,bash]
+----
+$ openstack port create api --network <network>
+----
+
+. Create the Ingress VIP port on the network by running the following command:
++
+[source,bash]
+----
+$ openstack port create ingress --network <network>
+----
+
+. After the networking resources are pre-created, deploy a cluster by using an `install-config.yaml` file that reflects your IPv6 network configuration. As an example:
++
+[source,yaml]
+----
+apiVersion: v1
+baseDomain: mydomain.test
+compute:
+- name: worker
+  platform:
+    openstack:
+      type: m1.xlarge
+  replicas: 3
+controlPlane:
+  name: master
+  platform:
+    openstack:
+      type: m1.xlarge
+  replicas: 3
+metadata:
+  name: mycluster
+networking:
+  machineNetwork:
+  - cidr: "fd2e:6f44:5dd8:c956::/64" # <1>
+  clusterNetwork:
+  - cidr: fd01::/48
+    hostPrefix: 64
+  serviceNetwork:
+  - fd02::/112
+platform:
+  openstack:
+    ingressVIPs: ['fd2e:6f44:5dd8:c956::383'] # <2>
+    apiVIPs: ['fd2e:6f44:5dd8:c956::9a'] # <2>
+    controlPlanePort:
+      fixedIPs: # <3>
+      - subnet:
+          name: subnet-v6
+      network: # <3>
+        name: v6-network
+imageContentSources: #<4>
+- mirrors:
+  - <mirror_registry>
+  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
+- mirrors:
+  - <mirror_registry>
+  source: registry.ci.openshift.org/ocp/release
+additionalTrustBundle: |
+  <certificate_contents>
+----
+<1> The CIDR of the subnet specified in this field must match the CIDR of the subnet that is specified in the `controlPlanePort` section.
+<2> Use the address from the ports you generated in the previous steps as the values for the parameters `platform.openstack.ingressVIPs` and `platform.openstack.apiVIPs`.
+<3> Items under the `platform.openstack.controlPlanePort.fixedIPs` and `platform.openstack.controlPlanePort.network` keys can contain an ID, a name, or both.
+<4> The `imageContentSources` section contains the mirror details. For more information on configuring a local image registry, see "Creating a mirror registry with mirror registry for Red Hat OpenShift".
diff --git a/modules/nw-ovn-kubernetes-features.adoc b/modules/nw-ovn-kubernetes-features.adoc
index 3e94b47d546c..3cc0f4bd6c8e 100644
--- a/modules/nw-ovn-kubernetes-features.adoc
+++ b/modules/nw-ovn-kubernetes-features.adoc
@@ -15,8 +15,8 @@ The OVN-Kubernetes network plugin supports the following capabilities:
* Hybrid clusters that can run both Linux and Microsoft Windows workloads. This environment is known as _hybrid networking_.
* Offloading of network data processing from the host central processing unit (CPU) to compatible network cards and data processing units (DPUs). This is known as _hardware offloading_.
-* IPv4-primary dual-stack networking on bare-metal, {vmw-full}, {ibm-power-name}, {ibm-z-name}, and {rh-openstack} platforms.
-* IPv6 single-stack networking on a bare-metal platform.
+* IPv4-primary dual-stack networking on bare-metal, {vmw-full}, {ibm-power-name}, {ibm-z-name}, and {rh-openstack-first} platforms.
+* IPv6 single-stack networking on {rh-openstack} and bare-metal platforms.
* IPv6-primary dual-stack networking for a cluster running on a bare-metal, a {vmw-full}, or an {rh-openstack} platform.
* Egress firewall devices and egress IP addresses.
* Egress router devices that operate in redirect mode.
From 8f896089d697b9b1349ac32ecd0ea1a5026c7acd Mon Sep 17 00:00:00 2001
From: William Gabor
Date: Fri, 31 Jan 2025 13:50:33 -0500
Subject: OSDOCS 9718 removed step 2 and corresponding example output

---
 ...w-ingress-reencrypt-route-custom-cert.adoc | 20 +++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/modules/nw-ingress-reencrypt-route-custom-cert.adoc b/modules/nw-ingress-reencrypt-route-custom-cert.adoc
index a9b11ff079d3..0c065a84d43c 100644
--- a/modules/nw-ingress-reencrypt-route-custom-cert.adoc
+++ b/modules/nw-ingress-reencrypt-route-custom-cert.adoc
@@ -17,6 +17,26 @@ The `route.openshift.io/destination-ca-certificate-secret` annotation can be use

.Procedure

+. Create a secret for the destination CA certificate by entering the following command:
++
+[source,terminal]
+----
+$ oc create secret generic dest-ca-cert --from-file=tls.crt=<path_to_certificate>
+----
++
+For example:
++
+[source,terminal]
+----
+$ oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt
+----
++
+.Example output
+[source,terminal]
+----
+secret/dest-ca-cert created
+----
+
. Add the `route.openshift.io/destination-ca-certificate-secret` to the Ingress annotations:
+
[source,yaml]
From 424db56e26fbabca4a9c0840ad76d33a32340dc6 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E2=80=9CShauna=20Diaz=E2=80=9D?=
Date: Thu, 6 Feb 2025 13:47:30 -0500
Subject: OSDOCS-13318: adds missing steps to EUS repo config MicroShift

---
 ...croshift-embed-ostree-enable-eus-repos.adoc | 18 ++++++++++++++++--
 modules/microshift-updating-rpms-y.adoc        |  4 ++--
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/modules/microshift-embed-ostree-enable-eus-repos.adoc b/modules/microshift-embed-ostree-enable-eus-repos.adoc
index ad1078781581..627c8ac94d87 100644
--- a/modules/microshift-embed-ostree-enable-eus-repos.adoc
+++ b/modules/microshift-embed-ostree-enable-eus-repos.adoc
@@ -6,11 +6,11 @@
[id="microshift-enable-eus-repos_{context}"]
= Enabling extended support repositories for image building

-If you have an extended support (EUS) release of {microshift-short}, you must enable the {op-system-base-full} EUS repositories for image builder to use. If you do not have an EUS version, you can skip these steps.
+If you have an extended support (EUS) release of {microshift-short} or {op-system-base-full}, you must enable the {op-system-base} EUS repositories for image builder to use. If you do not have an EUS version, you can skip these steps.

.Prerequisites

-* You have an EUS version of {microshift-short} or are updating to one.
+* You have an EUS version of {microshift-short} or {op-system-base}, or you are updating to one.
* You have root-user access to your build host.
* You reviewed the link:https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/{ocp-version}/html/getting_ready_to_install_microshift/microshift-install-get-ready#get-ready-install-rhde-compatibility-table_microshift-install-get-ready[{op-system-bundle} release compatibility matrix].

include::snippets/microshift-unsupported-config-warn.adoc[leveloffset=+1]

.Procedure

+. Create the `/etc/osbuild-composer/repositories` directory by running the following command:
++
+[source,terminal]
+----
+$ sudo mkdir -p /etc/osbuild-composer/repositories
+----
+
+. Copy the `/usr/share/osbuild-composer/repositories/rhel-9.4.json` file into the `/etc/osbuild-composer/repositories` directory by running the following command:
++
+[source,terminal]
+----
+$ sudo cp /usr/share/osbuild-composer/repositories/rhel-9.4.json /etc/osbuild-composer/repositories/rhel-9.4.json
+----
+
. Update the `baseos` source by modifying the `/etc/osbuild-composer/repositories/rhel-9.4.json` file with the following values:
+
[source,terminal]
----
diff --git a/modules/microshift-updating-rpms-y.adoc b/modules/microshift-updating-rpms-y.adoc
index c3291740a15a..e27f0363adec 100644
--- a/modules/microshift-updating-rpms-y.adoc
+++ b/modules/microshift-updating-rpms-y.adoc
@@ -29,10 +29,10 @@ You cannot downgrade {microshift-short} with this process. Downgrades are not su
[source,terminal,subs="attributes+"]
----
$ sudo subscription-manager repos \
-  --enable rhocp-<4.18>-for-<9>-$(uname -m)-rpms \ # <1>
+  --enable rhocp-<version>-for-<9>-$(uname -m)-rpms \ # <1>
   --enable fast-datapath-for-<9>-$(uname -m)-rpms # <2>
----
-<1> Replace _<4.18>_ and _<9>_ with the compatible versions of your {microshift-short} and {op-system-base-full}.
+<1> Replace _<version>_ and _<9>_ with the compatible versions of your {microshift-short} and {op-system-base-full}.
<2> Replace _<9>_ with the compatible version of {op-system-base}.

.
For extended support (EUS) releases, also enable the EUS repositories by running the following command: From 2ddf25a647ac54c5cb031f36e982d471a2eb3565 Mon Sep 17 00:00:00 2001 From: Lisa Pettyjohn Date: Wed, 11 Dec 2024 08:49:00 -0500 Subject: [PATCH 166/669] OSDOCS-12842#Remove Share Resource CSI Driver --- _topic_maps/_topic_map.yml | 2 - ...ibutes-on-shared-resource-pod-volumes.adoc | 0 ...ations-for-shared-resource-csi-driver.adoc | 0 ...nsights-operator-and-openshift-builds.adoc | 0 ...-sharing-configmaps-across-namespaces.adoc | 0 ...age-sharing-secrets-across-namespaces.adoc | 0 ...ing-a-sharedconfigmap-object-in-a-pod.adoc | 0 ...ing-a-sharedsecrets-resource-in-a-pod.adoc | 0 ...e-shared-resource-csi-driver-operator.adoc | 0 ...tled-builds-with-sharedsecret-objects.adoc | 2 +- ...rage-csi-inline-overview-admin-plugin.adoc | 2 - ...ephemeral-storage-csi-inline-overview.adoc | 6 +-- modules/gathering-data-specific-features.adoc | 3 -- ...nodes-cluster-enabling-features-about.adoc | 43 +++++++++++++++++++ ...sistent-storage-csi-drivers-supported.adoc | 1 - .../ephemeral-storage-csi-inline.adoc | 1 - 16 files changed, 45 insertions(+), 15 deletions(-) rename {modules => _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules}/ephemeral-storage-additional-details-about-volumeattributes-on-shared-resource-pod-volumes.adoc (100%) rename {modules => _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules}/ephemeral-storage-additional-support-limitations-for-shared-resource-csi-driver.adoc (100%) rename {modules => _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules}/ephemeral-storage-integration-between-shared-resources-insights-operator-and-openshift-builds.adoc (100%) rename {modules => _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules}/ephemeral-storage-sharing-configmaps-across-namespaces.adoc (100%) rename {modules => _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules}/ephemeral-storage-sharing-secrets-across-namespaces.adoc (100%) rename {modules => _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules}/ephemeral-storage-using-a-sharedconfigmap-object-in-a-pod.adoc (100%) rename {modules => _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules}/ephemeral-storage-using-a-sharedsecrets-resource-in-a-pod.adoc (100%) rename {storage/container_storage_interface => _unused_topics/Storage_Shared_Resource_CSI_Driver}/ephemeral-storage-shared-resource-csi-driver-operator.adoc (100%) diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index ebee8d276c49..8e8dab6abe4f 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -1759,8 +1759,6 @@ Topics: File: persistent-storage-csi - Name: CSI inline ephemeral volumes File: ephemeral-storage-csi-inline - - Name: Shared Resource CSI Driver Operator - File: ephemeral-storage-shared-resource-csi-driver-operator - Name: CSI volume snapshots File: persistent-storage-csi-snapshots - Name: CSI volume cloning diff --git a/modules/ephemeral-storage-additional-details-about-volumeattributes-on-shared-resource-pod-volumes.adoc b/_unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-additional-details-about-volumeattributes-on-shared-resource-pod-volumes.adoc similarity index 100% rename from modules/ephemeral-storage-additional-details-about-volumeattributes-on-shared-resource-pod-volumes.adoc rename to _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-additional-details-about-volumeattributes-on-shared-resource-pod-volumes.adoc diff --git 
a/modules/ephemeral-storage-additional-support-limitations-for-shared-resource-csi-driver.adoc b/_unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-additional-support-limitations-for-shared-resource-csi-driver.adoc similarity index 100% rename from modules/ephemeral-storage-additional-support-limitations-for-shared-resource-csi-driver.adoc rename to _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-additional-support-limitations-for-shared-resource-csi-driver.adoc diff --git a/modules/ephemeral-storage-integration-between-shared-resources-insights-operator-and-openshift-builds.adoc b/_unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-integration-between-shared-resources-insights-operator-and-openshift-builds.adoc similarity index 100% rename from modules/ephemeral-storage-integration-between-shared-resources-insights-operator-and-openshift-builds.adoc rename to _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-integration-between-shared-resources-insights-operator-and-openshift-builds.adoc diff --git a/modules/ephemeral-storage-sharing-configmaps-across-namespaces.adoc b/_unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-sharing-configmaps-across-namespaces.adoc similarity index 100% rename from modules/ephemeral-storage-sharing-configmaps-across-namespaces.adoc rename to _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-sharing-configmaps-across-namespaces.adoc diff --git a/modules/ephemeral-storage-sharing-secrets-across-namespaces.adoc b/_unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-sharing-secrets-across-namespaces.adoc similarity index 100% rename from modules/ephemeral-storage-sharing-secrets-across-namespaces.adoc rename to _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-sharing-secrets-across-namespaces.adoc diff --git a/modules/ephemeral-storage-using-a-sharedconfigmap-object-in-a-pod.adoc b/_unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-using-a-sharedconfigmap-object-in-a-pod.adoc similarity index 100% rename from modules/ephemeral-storage-using-a-sharedconfigmap-object-in-a-pod.adoc rename to _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-using-a-sharedconfigmap-object-in-a-pod.adoc diff --git a/modules/ephemeral-storage-using-a-sharedsecrets-resource-in-a-pod.adoc b/_unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-using-a-sharedsecrets-resource-in-a-pod.adoc similarity index 100% rename from modules/ephemeral-storage-using-a-sharedsecrets-resource-in-a-pod.adoc rename to _unused_topics/Storage_Shared_Resource_CSI_Driver/Modules/ephemeral-storage-using-a-sharedsecrets-resource-in-a-pod.adoc diff --git a/storage/container_storage_interface/ephemeral-storage-shared-resource-csi-driver-operator.adoc b/_unused_topics/Storage_Shared_Resource_CSI_Driver/ephemeral-storage-shared-resource-csi-driver-operator.adoc similarity index 100% rename from storage/container_storage_interface/ephemeral-storage-shared-resource-csi-driver-operator.adoc rename to _unused_topics/Storage_Shared_Resource_CSI_Driver/ephemeral-storage-shared-resource-csi-driver-operator.adoc diff --git a/modules/builds-running-entitled-builds-with-sharedsecret-objects.adoc b/modules/builds-running-entitled-builds-with-sharedsecret-objects.adoc index c074881836ef..4e7cadfcf1ed 100644 --- 
--- a/modules/builds-running-entitled-builds-with-sharedsecret-objects.adoc
+++ b/modules/builds-running-entitled-builds-with-sharedsecret-objects.adoc
@@ -8,7 +8,7 @@ The `SharedSecret` object allows you to share and synchronize secrets across nam
 
 [IMPORTANT]
 ====
-The Shared Resource CSI Driver feature is now generally available in link:https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.1[{builds-v2title} 1.1]. This feature is now deprecated in {product-title}. To use this feature, ensure you are using {builds-v2title} 1.1 or a more recent version.
+The Shared Resource CSI Driver feature is now generally available in link:https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.1[{builds-v2title} 1.1]. This feature is now removed in {product-title} 4.18 and later. To use this feature, ensure that you are using {builds-v2title} 1.1 or later.
 ====
 
 .Prerequisites
diff --git a/modules/ephemeral-storage-csi-inline-overview-admin-plugin.adoc b/modules/ephemeral-storage-csi-inline-overview-admin-plugin.adoc
index 755ae1cee9d8..7c58bc37b5ad 100644
--- a/modules/ephemeral-storage-csi-inline-overview-admin-plugin.adoc
+++ b/modules/ephemeral-storage-csi-inline-overview-admin-plugin.adoc
@@ -110,8 +110,6 @@ If the referenced CSI driver for a CSI ephemeral volume does not have the `csi-e
 
 The CSI drivers that ship with {product-title} and support ephemeral volumes have a reasonable default set for the `csi-ephemeral-volume-profile` label:
 
-* Shared Resource CSI driver: restricted
-
 * Azure File CSI driver: privileged
 
 An admin can change the default value of the label if desired.
\ No newline at end of file
diff --git a/modules/ephemeral-storage-csi-inline-overview.adoc b/modules/ephemeral-storage-csi-inline-overview.adoc
index 33f80c4b73e9..05590386d307 100644
--- a/modules/ephemeral-storage-csi-inline-overview.adoc
+++ b/modules/ephemeral-storage-csi-inline-overview.adoc
@@ -14,16 +14,12 @@ This feature allows you to specify CSI volumes directly in the `Pod` specificati
 
 [IMPORTANT]
 ====
-The Shared Resource CSI Driver feature is now generally available in link:https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.1[{builds-v2title} 1.1]. This feature is now deprecated in {product-title}. To use this feature, ensure you are using {builds-v2title} 1.1 or a more recent version.
+The Shared Resource CSI Driver feature is now generally available in link:https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.1[{builds-v2title} 1.1]. This feature is now removed in {product-title} 4.18 and later. To use this feature, ensure that you are using {builds-v2title} 1.1 or later.
 ====
 
 By default, {product-title} supports CSI inline ephemeral volumes with these limitations:
 
 * Support is only available for CSI drivers. In-tree and FlexVolumes are not supported.
-* The Shared Resource CSI Driver supports using inline ephemeral volumes only to access `Secrets` or `ConfigMaps` across multiple namespaces as a Technology Preview feature in {product-title}.
 * Community or storage vendors provide other CSI drivers that support these volumes. Follow the installation instructions provided by the CSI driver provider. CSI drivers might not have implemented the inline volume functionality, including `Ephemeral` capacity. For details, see the CSI driver documentation.
-
-:FeatureName: Shared Resource CSI Driver
-include::snippets/technology-preview.adoc[leveloffset=+0]
diff --git a/modules/gathering-data-specific-features.adoc b/modules/gathering-data-specific-features.adoc
index e301c1b8734c..7690ce2aff9b 100644
--- a/modules/gathering-data-specific-features.adoc
+++ b/modules/gathering-data-specific-features.adoc
@@ -48,9 +48,6 @@ endif::openshift-rosa,openshift-dedicated[]
 |`quay.io/netobserv/must-gather`
 |Data collection for the Network Observability Operator.
 
-|`registry.redhat.io/openshift4/ose-csi-driver-shared-resource-mustgather-rhel8`
-|Data collection for OpenShift Shared Resource CSI Driver.
-
 ifndef::openshift-rosa,openshift-dedicated[]
 |`registry.redhat.io/openshift4/ose-local-storage-mustgather-rhel9:v`
 |Data collection for Local Storage Operator.
diff --git a/modules/nodes-cluster-enabling-features-about.adoc b/modules/nodes-cluster-enabling-features-about.adoc
index 903c24112c70..56dc13302ffa 100644
--- a/modules/nodes-cluster-enabling-features-about.adoc
+++ b/modules/nodes-cluster-enabling-features-about.adoc
@@ -20,6 +20,49 @@ Enabling the `TechPreviewNoUpgrade` feature set on your cluster cannot be undone
 
 The following Technology Preview features are enabled by this feature set:
 +
 --
+** External cloud providers. Enables support for external cloud providers for clusters on vSphere, AWS, Azure, and GCP. Support for OpenStack is GA. This is an internal feature that most users do not need to interact with. (`ExternalCloudProvider`)
+** Swap memory on nodes. Enables swap memory use for {product-title} workloads on a per-node basis. (`NodeSwap`)
+** OpenStack Machine API Provider. This gate has no effect and is planned to be removed from this feature set in a future release. (`MachineAPIProviderOpenStack`)
+** Insights Operator. Enables the `InsightsDataGather` CRD, which allows users to configure some Insights data gathering options. The feature set also enables the `DataGather` CRD, which allows users to run Insights data gathering on-demand. (`InsightsConfigAPI`)
+** Insights Operator. Enables a new data collection feature called 'Insights Runtime Extractor' which, when enabled, allows Red{nbsp}Hat to gather more runtime workload data about your {product-title} containers. (`InsightsRuntimeExtractor`)
+** Dynamic Resource Allocation API. Enables a new API for requesting and sharing resources between pods and containers. This is an internal feature that most users do not need to interact with. (`DynamicResourceAllocation`)
+** Pod security admission enforcement. Enables the restricted enforcement mode for pod security admission. Instead of only logging a warning, pods are rejected if they violate pod security standards. (`OpenShiftPodSecurityAdmission`)
+** StatefulSet pod availability upgrading limits. Enables users to define the maximum number of statefulset pods unavailable during updates, which reduces application downtime. (`MaxUnavailableStatefulSet`)
+** `OVNObservability` resource allows you to verify expected network behavior. Supports the following network APIs: `NetworkPolicy`, `AdminNetworkPolicy`, `BaselineAdminNetworkPolicy`, `UserDefinedNetwork` isolation, multicast ACLs, and egress firewalls. When enabled, you can view network events in the terminal.
+** `gcpLabelsTags`
+** `vSphereStaticIPs`
+** `routeExternalCertificate`
+** `automatedEtcdBackup`
+** `gcpClusterHostedDNS`
+** `vSphereControlPlaneMachineset`
+** `dnsNameResolver`
+** `machineConfigNodes`
+** `metricsServer`
+** `installAlternateInfrastructureAWS`
+** `mixedCPUsAllocation`
+** `managedBootImages`
+** `onClusterBuild`
+** `signatureStores`
+** `SigstoreImageVerification`
+** `DisableKubeletCloudCredentialProviders`
+** `BareMetalLoadBalancer`
+** `ClusterAPIInstallAWS`
+** `ClusterAPIInstallAzure`
+** `ClusterAPIInstallNutanix`
+** `ClusterAPIInstallOpenStack`
+** `ClusterAPIInstallVSphere`
+** `HardwareSpeed`
+** `KMSv1`
+** `NetworkDiagnosticsConfig`
+** `VSphereDriverConfiguration`
+** `ExternalOIDC`
+** `ChunkSizeMiB`
+** `ClusterAPIInstallGCP`
+** `ClusterAPIInstallPowerVS`
+** `EtcdBackendQuota`
+** `InsightsConfig`
+** `InsightsOnDemandDataGather`
+** `MetricsCollectionProfiles`
 ** `NewOLM`
 ** `AWSClusterHostedDNS`
 ** `AdditionalRoutingCapabilities`
diff --git a/modules/persistent-storage-csi-drivers-supported.adoc b/modules/persistent-storage-csi-drivers-supported.adoc
index 022a9a4f774d..5cc5c20ad96f 100644
--- a/modules/persistent-storage-csi-drivers-supported.adoc
+++ b/modules/persistent-storage-csi-drivers-supported.adoc
@@ -56,7 +56,6 @@ ifndef::openshift-dedicated,openshift-rosa[]
 |OpenStack Cinder | ✅ | ✅ | ✅|
 |OpenShift Data Foundation | ✅ | ✅ | ✅|
 |OpenStack Manila | ✅ | | ✅ |
-|Shared Resource | | | | ✅
 |CIFS/SMB | | ✅ | |
 |VMware vSphere | ✅^[1]^ | | ✅^[2]^|
 endif::openshift-dedicated,openshift-rosa[]
diff --git a/storage/container_storage_interface/ephemeral-storage-csi-inline.adoc b/storage/container_storage_interface/ephemeral-storage-csi-inline.adoc
index 1ab952de8df0..e7a9c2f76ffb 100644
--- a/storage/container_storage_interface/ephemeral-storage-csi-inline.adoc
+++ b/storage/container_storage_interface/ephemeral-storage-csi-inline.adoc
@@ -12,7 +12,6 @@ Container Storage Interface (CSI) inline ephemeral volumes allow you to define a
 
 This feature is only available with supported Container Storage Interface (CSI) drivers:
 
-* Shared Resource CSI driver
 * Azure File CSI driver
 * {secrets-store-driver}
 
From 897897b3e1bea74290a597e4185c81ad6778c61f Mon Sep 17 00:00:00 2001
From: Apurva Bhide
Date: Mon, 3 Feb 2025 18:33:24 +0530
Subject: [PATCH 167/669] OADP-4883: Added 1.4.3 release notes

---
 .../release-notes/oadp-1-4-release-notes.adoc |  2 +-
 modules/oadp-1-4-3-release-notes.adoc         | 26 +++++++++++++++++++
 2 files changed, 27 insertions(+), 1 deletion(-)
 create mode 100644 modules/oadp-1-4-3-release-notes.adoc

diff --git a/backup_and_restore/application_backup_and_restore/release-notes/oadp-1-4-release-notes.adoc b/backup_and_restore/application_backup_and_restore/release-notes/oadp-1-4-release-notes.adoc
index 3172471d771c..b6154cafc6c7 100644
--- a/backup_and_restore/application_backup_and_restore/release-notes/oadp-1-4-release-notes.adoc
+++ b/backup_and_restore/application_backup_and_restore/release-notes/oadp-1-4-release-notes.adoc
@@ -13,7 +13,7 @@ The release notes for {oadp-first} describe new features and enhancements, depre
 ====
 For additional information about {oadp-short}, see link:https://access.redhat.com/articles/5456281[{oadp-first} FAQs]
 ====
-
+include::modules/oadp-1-4-3-release-notes.adoc[leveloffset=+1]
 include::modules/oadp-1-4-2-release-notes.adoc[leveloffset=+1]
 
 [role="_additional-resources"]
diff --git a/modules/oadp-1-4-3-release-notes.adoc b/modules/oadp-1-4-3-release-notes.adoc
new file mode 100644
index 000000000000..cdf4677725b9
--- /dev/null
+++ b/modules/oadp-1-4-3-release-notes.adoc
@@ -0,0 +1,26 @@
+// Module included in the following assemblies:
+//
+// * backup_and_restore/oadp-1-4-release-notes.adoc
+
+:_mod-docs-content-type: REFERENCE
+
+[id="oadp-1-4-3-release-notes_{context}"]
+= {oadp-short} 1.4.3 release notes
+
+The {oadp-first} 1.4.3 release notes list the following new features.
+
+[id="new-features-1-4-3_{context}"]
+== New features
+
+.Notable changes in the `kubevirt` velero plugin in version 0.7.1
+
+With this release, the `kubevirt` velero plugin has been updated to version 0.7.1. Notable improvements include the following bug fixes and new features:
+
+* Virtual machine instances (VMIs) are no longer omitted from backup when the owner VM is excluded.
+* Object graphs now include all extra objects during backup and restore operations.
+* Optionally generated labels are now added to new firmware Universally Unique Identifiers (UUIDs) during restore operations.
+* Switching VM run strategies during restore operations is now possible.
+* Clearing a MAC address by label is now supported.
+* The restore-specific checks during the backup operation are now skipped.
+* The `VirtualMachineClusterInstancetype` and `VirtualMachineClusterPreference` custom resource definitions (CRDs) are now supported.
+//link:https://issues.redhat.com/browse/OADP-5551[OADP-5551]
\ No newline at end of file
From d68063053e9cb90b8af73a599b7d5210b6e96846 Mon Sep 17 00:00:00 2001
From: Ben Hardesty
Date: Thu, 24 Oct 2024 16:25:51 -0400
Subject: [PATCH 168/669] OSDOCS-11831: Add Support book to ROSA HCP

---
 _topic_maps/_topic_map_osd.yml                |  5 +-
 _topic_maps/_topic_map_rosa.yml               |  5 +-
 _topic_maps/_topic_map_rosa_hcp.yml           | 86 +++++++++++++++++++
 modules/accessing-running-pods.adoc           |  8 +-
 modules/cluster-resources.adoc                |  8 +-
 .../copying-files-pods-and-containers.adoc    |  8 +-
 .../inspecting-pod-and-container-logs.adoc    |  8 +-
 ...hy-prometheus-is-consuming-disk-space.adoc | 16 ++--
 ...-user-defined-metrics-are-unavailable.adoc |  8 +-
 ...fillingup-alert-firing-for-prometheus.adoc |  8 +-
 modules/olm-cs-status-cli.adoc                |  8 +-
 modules/olm-status-viewing-cli.adoc           |  8 +-
 .../querying-cluster-node-journal-logs.adoc   | 20 ++++-
 modules/querying-operator-pod-status.adoc     | 12 +--
 ...g-node-status-usage-and-configuration.adoc |  8 +-
 modules/reviewing-pod-status.adoc             |  8 +-
 ...aws-requirements-creating-association.adoc |  2 +
 ...sts-ocm-and-user-role-troubleshooting.adoc |  2 +-
 modules/rosa-sts-ocm-role-creation.adoc       |  4 +-
 modules/rosa-sts-user-role-creation.adoc      |  2 +-
 ...rosa-troubleshooting-cluster-deletion.adoc |  2 +-
 ...rosa-troubleshooting-elb-service-role.adoc |  6 +-
 ...sa-troubleshooting-general-deployment.adoc |  3 +-
 modules/rosa-troubleshooting-installing.adoc  |  2 +
 .../rosa-troubleshooting-networking-nlb.adoc  |  4 +-
 .../starting-debug-pods-with-root-access.adoc |  8 +-
 ...specifications-through-clusterversion.adoc |  8 +-
 ...support-collecting-host-network-trace.adoc | 14 +--
 modules/support-collecting-network-trace.adoc | 21 ++---
 ...-providing-diagnostic-data-to-red-hat.adoc | 17 ++--
 ...ss-request-from-an-email-notification.adoc |  4 +-
 ...ccess-request-from-the-hybrid-console.adoc |  4 +-
 ...mitting-a-case-enable-approved-access.adoc |  8 +-
 ...owing-data-collected-from-the-cluster.adoc |  8 +-
 ...lemetry-what-information-is-collected.adoc |  1 -
 support/approved-access.adoc                  |  4 +-
 support/gathering-cluster-data.adoc           | 27 +++---
 support/index.adoc                            | 32 +++----
 .../about-remote-health-monitoring.adoc       | 52 ++++++-----
 ...collected-by-remote-health-monitoring.adoc |  4 +-
 .../using-insights-operator.adoc              | 24 +++---
 .../investigating-monitoring-issues.adoc      | 10 ++-
 .../investigating-pod-issues.adoc             |  2 +-
 .../rosa-troubleshooting-iam-resources.adoc   |  7 ++
 .../troubleshooting-operator-issues.adoc      | 31 ++++---
 .../verifying-node-health.adoc                |  4 +-
 46 files changed, 339 insertions(+), 202 deletions(-)

diff --git a/_topic_maps/_topic_map_osd.yml b/_topic_maps/_topic_map_osd.yml
index 398fcf62b232..b6d8332ee5b8 100644
--- a/_topic_maps/_topic_map_osd.yml
+++ b/_topic_maps/_topic_map_osd.yml
@@ -210,9 +210,8 @@ Topics:
     File: troubleshooting-operator-issues
   - Name: Investigating pod issues
     File: investigating-pod-issues
-# Hiding from ROSA and OSD until it is decided who should port the Build book
-#  - Name: Troubleshooting the Source-to-Image process
-#    File: troubleshooting-s2i
+  - Name: Troubleshooting the Source-to-Image process
+    File: troubleshooting-s2i
   - Name: Troubleshooting storage issues
     File: troubleshooting-storage-issues
 # Not supported per WINC team
diff --git a/_topic_maps/_topic_map_rosa.yml b/_topic_maps/_topic_map_rosa.yml
index c2dbc485c194..4563aff95dea 100644
--- a/_topic_maps/_topic_map_rosa.yml
+++ b/_topic_maps/_topic_map_rosa.yml
@@ -402,9 +402,8 @@ Topics:
     File: troubleshooting-operator-issues
   - Name: Investigating pod issues
     File: investigating-pod-issues
-# Hiding from ROSA and OSD until it is decided who should port the Build book
-#  - Name: Troubleshooting the Source-to-Image process
-#    File: troubleshooting-s2i
+  - Name: Troubleshooting the Source-to-Image process
+    File: troubleshooting-s2i
   - Name: Troubleshooting storage issues
     File: troubleshooting-storage-issues
 # Not supported per WINC team
diff --git a/_topic_maps/_topic_map_rosa_hcp.yml b/_topic_maps/_topic_map_rosa_hcp.yml
index 62bcc2e717dc..8b7e62d22911 100644
--- a/_topic_maps/_topic_map_rosa_hcp.yml
+++ b/_topic_maps/_topic_map_rosa_hcp.yml
@@ -300,6 +300,92 @@ Topics:
   - Name: Least privilege permissions for ROSA CLI commands
     File: rosa-cli-permission-examples
 ---
+Name: Support
+Dir: support
+Distros: openshift-rosa-hcp
+Topics:
+- Name: Support overview
+  File: index
+- Name: Managing your cluster resources
+  File: managing-cluster-resources
+- Name: Approved Access
+  File: approved-access
+- Name: Getting support
+  File: getting-support
+- Name: Remote health monitoring with connected clusters
+  Dir: remote_health_monitoring
+  Topics:
+  - Name: About remote health monitoring
+    File: about-remote-health-monitoring
+  - Name: Showing data collected by remote health monitoring
+    File: showing-data-collected-by-remote-health-monitoring
+# cannot get resource "secrets" in API group "" in the namespace "openshift-config"
+#  - Name: Opting out of remote health reporting
+#    File: opting-out-of-remote-health-reporting
+# cannot get resource "secrets" in API group "" in the namespace "openshift-config"
+#  - Name: Enabling remote health reporting
+#    File: enabling-remote-health-reporting
+  - Name: Using Insights to identify issues with your cluster
+    File: using-insights-to-identify-issues-with-your-cluster
+  - Name: Using Insights Operator
+    File: using-insights-operator
+# Not supported per Michael McNeill
+#  - Name: Using remote health reporting in a restricted network
+#    File: remote-health-reporting-from-restricted-network
+# cannot list resource "secrets" in API group "" in the namespace "openshift-config"
+#  - Name: Importing simple content access entitlements with Insights Operator
+#    File: insights-operator-simple-access
+- Name: Gathering data about your cluster
+  File: gathering-cluster-data
+- Name: Summarizing cluster specifications
+  File: summarizing-cluster-specifications
+- Name: Troubleshooting
+  Dir: troubleshooting
+  Topics:
+# rosa has own troubleshooting installations
+#  - Name: Troubleshooting installations
+#    File: troubleshooting-installations
+  - Name: Troubleshooting ROSA installations
+    File: rosa-troubleshooting-installations
+  - Name: Troubleshooting networking
+    File: rosa-troubleshooting-networking
+  - Name: Verifying node health
+    File: verifying-node-health
+# cannot create resource "namespaces", cannot patch resource "nodes"
+#  - Name: Troubleshooting CRI-O container runtime issues
+#    File: troubleshooting-crio-issues
+# requires ostree, butane, and other plug-ins
+#  - Name: Troubleshooting operating system issues
+#    File: troubleshooting-operating-system-issues
+#    Distros: openshift-rosa
+# cannot patch resource "nodes", "nodes/proxy", "namespaces"
+#  - Name: Troubleshooting network issues
+#    File: troubleshooting-network-issues
+#    Distros: openshift-rosa
+  - Name: Troubleshooting Operator issues
+    File: troubleshooting-operator-issues
+  - Name: Investigating pod issues
+    File: investigating-pod-issues
+  - Name: Troubleshooting the Source-to-Image process
+    File: troubleshooting-s2i
+  - Name: Troubleshooting storage issues
+    File: troubleshooting-storage-issues
+# Not supported per WINC team
+#  - Name: Troubleshooting Windows container workload issues
+#    File: troubleshooting-windows-container-workload-issues
+  - Name: Investigating monitoring issues
+    File: investigating-monitoring-issues
+  - Name: Diagnosing OpenShift CLI (oc) issues
+    File: diagnosing-oc-issues
+  - Name: Troubleshooting expired offline access tokens
+    File: rosa-troubleshooting-expired-tokens
+  - Name: Troubleshooting IAM roles
+    File: rosa-troubleshooting-iam-resources
+  - Name: Troubleshooting cluster deployments
+    File: rosa-troubleshooting-deployments
+  - Name: Red Hat OpenShift Service on AWS managed resources
+    File: sd-managed-resources
+---
 Name: Cluster administration
 Dir: rosa_cluster_admin
 Distros: openshift-rosa-hcp
diff --git a/modules/accessing-running-pods.adoc b/modules/accessing-running-pods.adoc
index 656f9a0257df..2ead4efe3e34 100644
--- a/modules/accessing-running-pods.adoc
+++ b/modules/accessing-running-pods.adoc
@@ -10,12 +10,12 @@ You can review running pods dynamically by opening a shell inside a pod or by ga
 
 .Prerequisites
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `cluster-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
-ifdef::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * Your API service is still functional.
 * You have installed the OpenShift CLI (`oc`).
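+For example, a shell can be opened inside a running pod with a command such as the following, where `my-pod` and `my-namespace` are placeholder names:
+
+[source,terminal]
+----
+$ oc rsh -n my-namespace my-pod
+----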
diff --git a/modules/cluster-resources.adoc b/modules/cluster-resources.adoc
index 863254078ea4..0fb3d07f049c 100644
--- a/modules/cluster-resources.adoc
+++ b/modules/cluster-resources.adoc
@@ -6,12 +6,12 @@ You can interact with cluster resources by using the OpenShift CLI (`oc`) tool i
 
 .Prerequisites
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `cluster-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
-ifdef::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the web console or you have installed the `oc` CLI tool.
 
 .Procedure
diff --git a/modules/copying-files-pods-and-containers.adoc b/modules/copying-files-pods-and-containers.adoc
index d6f16c2c77c0..025e8994aad5 100644
--- a/modules/copying-files-pods-and-containers.adoc
+++ b/modules/copying-files-pods-and-containers.adoc
@@ -10,12 +10,12 @@ You can copy files to and from a pod to test configuration changes or gather dia
 
 .Prerequisites
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `cluster-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
-ifdef::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * Your API service is still functional.
 * You have installed the OpenShift CLI (`oc`).
diff --git a/modules/inspecting-pod-and-container-logs.adoc b/modules/inspecting-pod-and-container-logs.adoc
index 1ececd35f154..ff9197ce795a 100644
--- a/modules/inspecting-pod-and-container-logs.adoc
+++ b/modules/inspecting-pod-and-container-logs.adoc
@@ -10,12 +10,12 @@ You can inspect pod and container logs for warnings and error messages related t
 
 .Prerequisites
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `cluster-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
-ifdef::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * Your API service is still functional.
 * You have installed the OpenShift CLI (`oc`).
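+For example, the logs of one container in a pod can be followed with a command along these lines, where `my-pod` and `my-container` are placeholder names:
+
+[source,terminal]
+----
+$ oc logs -f my-pod -c my-container
+----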
diff --git a/modules/monitoring-determining-why-prometheus-is-consuming-disk-space.adoc b/modules/monitoring-determining-why-prometheus-is-consuming-disk-space.adoc
index 313fed881400..6b109386e24e 100644
--- a/modules/monitoring-determining-why-prometheus-is-consuming-disk-space.adoc
+++ b/modules/monitoring-determining-why-prometheus-is-consuming-disk-space.adoc
@@ -28,12 +28,12 @@ Using attributes that are bound to a limited set of possible values reduces the
 
 .Prerequisites
 
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa-hcp,openshift-rosa[]
 * You have access to the cluster as a user with the `cluster-admin` cluster role.
-endif::openshift-dedicated,openshift-rosa[]
-ifdef::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa-hcp,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa-hcp,openshift-rosa[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa-hcp,openshift-rosa[]
 * You have installed the OpenShift CLI (`oc`).
 
 .Procedure
@@ -64,12 +64,12 @@ topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))
 
 * *If the metrics relate to a core {product-title} project*, create a Red Hat support case on the link:https://access.redhat.com/[Red Hat Customer Portal].
 
 . Review the TSDB status using the Prometheus HTTP API by following these steps when logged in as a
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa-hcp,openshift-rosa[]
 cluster administrator:
-endif::openshift-dedicated,openshift-rosa[]
-ifdef::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa-hcp,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa-hcp,openshift-rosa[]
 `dedicated-admin`:
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa-hcp,openshift-rosa[]
 +
 .. Get the Prometheus API route URL by running the following command:
 +
diff --git a/modules/monitoring-investigating-why-user-defined-metrics-are-unavailable.adoc b/modules/monitoring-investigating-why-user-defined-metrics-are-unavailable.adoc
index 7820be33adf4..5b9b96e8a63f 100644
--- a/modules/monitoring-investigating-why-user-defined-metrics-are-unavailable.adoc
+++ b/modules/monitoring-investigating-why-user-defined-metrics-are-unavailable.adoc
@@ -11,12 +11,12 @@
 
 .Prerequisites
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `cluster-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
-ifdef::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have installed the OpenShift CLI (`oc`).
 * You have enabled and configured monitoring for user-defined projects.
 * You have created a `ServiceMonitor` resource.
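+As a reference, a minimal `ServiceMonitor` object for a user-defined project might look like the following sketch; the names, namespace, and port are illustrative only:
+
+[source,yaml]
+----
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  name: prometheus-example-monitor
+  namespace: ns1
+spec:
+  endpoints:
+  - interval: 30s
+    port: web
+  selector:
+    matchLabels:
+      app: prometheus-example-app
+----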
diff --git a/modules/monitoring-resolving-the-kubepersistentvolumefillingup-alert-firing-for-prometheus.adoc b/modules/monitoring-resolving-the-kubepersistentvolumefillingup-alert-firing-for-prometheus.adoc
index d7609ad9168a..34c41e7c61f4 100644
--- a/modules/monitoring-resolving-the-kubepersistentvolumefillingup-alert-firing-for-prometheus.adoc
+++ b/modules/monitoring-resolving-the-kubepersistentvolumefillingup-alert-firing-for-prometheus.adoc
@@ -23,12 +23,12 @@ To address this issue, you can remove Prometheus time-series database (TSDB) blo
 
 .Prerequisites
 
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa-hcp,openshift-rosa[]
 * You have access to the cluster as a user with the `cluster-admin` cluster role.
-endif::openshift-dedicated,openshift-rosa[]
-ifdef::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa-hcp,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa-hcp,openshift-rosa[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa-hcp,openshift-rosa[]
 * You have installed the OpenShift CLI (`oc`).
 
 .Procedure
diff --git a/modules/olm-cs-status-cli.adoc b/modules/olm-cs-status-cli.adoc
index 8b2bf0e730dd..3425ed28ac39 100644
--- a/modules/olm-cs-status-cli.adoc
+++ b/modules/olm-cs-status-cli.adoc
@@ -18,12 +18,12 @@ You can view the status of an Operator catalog source by using the CLI.
 
 .Prerequisites
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `cluster-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
-ifdef::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have installed the OpenShift CLI (`oc`).
 
 .Procedure
diff --git a/modules/olm-status-viewing-cli.adoc b/modules/olm-status-viewing-cli.adoc
index dff8103a3c8f..791bbaad3d78 100644
--- a/modules/olm-status-viewing-cli.adoc
+++ b/modules/olm-status-viewing-cli.adoc
@@ -11,12 +11,12 @@ You can view Operator subscription status by using the CLI.
 
 .Prerequisites
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `cluster-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
-ifdef::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have installed the OpenShift CLI (`oc`).
 
 .Procedure
diff --git a/modules/querying-cluster-node-journal-logs.adoc b/modules/querying-cluster-node-journal-logs.adoc
index 2528c2064a55..e09bcacada84 100644
--- a/modules/querying-cluster-node-journal-logs.adoc
+++ b/modules/querying-cluster-node-journal-logs.adoc
@@ -21,21 +21,32 @@ In {product-title} deployments, customers who are not using the Customer Cloud S
 +
 endif::openshift-dedicated[]
 * You have installed the OpenShift CLI (`oc`).
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * Your API service is still functional.
 * You have SSH access to your hosts.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 
 .Procedure
 
+ifndef::openshift-rosa-hcp[]
 . Query `kubelet` `journald` unit logs from {product-title} cluster nodes. The following example queries control plane nodes only:
+endif::openshift-rosa-hcp[]
+ifdef::openshift-rosa-hcp[]
+* Query `kubelet` `journald` unit logs from {product-title} cluster nodes. The following example queries worker nodes only:
+endif::openshift-rosa-hcp[]
 +
 [source,terminal]
 ----
+ifndef::openshift-rosa-hcp[]
 $ oc adm node-logs --role=master -u kubelet <1>
+endif::openshift-rosa-hcp[]
+ifdef::openshift-rosa-hcp[]
+$ oc adm node-logs --role=worker -u kubelet <1>
+endif::openshift-rosa-hcp[]
 ----
 <1> Replace `kubelet` as appropriate to query other unit logs.
 
+ifndef::openshift-rosa-hcp[]
 . Collect logs from specific subdirectories under `/var/log/` on cluster nodes.
 .. Retrieve a list of logs contained within a `/var/log/` subdirectory. The following example lists files in `/var/log/openshift-apiserver/` on all control plane nodes:
 +
@@ -50,8 +61,9 @@ $ oc adm node-logs --role=master --path=openshift-apiserver
 ----
 
 ----
 $ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
 ----
+endif::openshift-rosa-hcp[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 +
-ifndef::openshift-rosa,openshift-dedicated[]
 .. If the API is not functional, review the logs on each node using SSH instead. The following example tails `/var/log/openshift-apiserver/audit.log`:
 +
 [source,terminal]
 ----
 $ ssh core@.. sudo tail -f /var/log/open
@@ -63,4 +75,4 @@
 ====
 {product-title} {product-version} cluster nodes running {op-system-first} are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running `oc adm must gather` and other `oc` commands is sufficient instead. However, if the {product-title} API is not available, or the kubelet is not properly functioning on the target node, `oc` operations will be impacted. In such situations, it is possible to access nodes using `ssh core@..`.
 ====
-endif::openshift-rosa,openshift-dedicated[]
\ No newline at end of file
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
diff --git a/modules/querying-operator-pod-status.adoc b/modules/querying-operator-pod-status.adoc
index d0c0f8e73c4d..ce32a1aa7639 100644
--- a/modules/querying-operator-pod-status.adoc
+++ b/modules/querying-operator-pod-status.adoc
@@ -10,12 +10,12 @@ You can list Operator pods within a cluster and their status. You can also colle
 
 .Prerequisites
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `cluster-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
-ifdef::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * Your API service is still functional.
 * You have installed the OpenShift CLI (`oc`).
 
@@ -42,7 +42,7 @@ $ oc get pod -n
 $ oc describe pod -n
 ----
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 . If an Operator issue is node-specific, query Operator container status on that node.
 .. Start a debug pod for the node:
 +
@@ -78,4 +78,4 @@ $ oc debug node/my-node
 ----
 +
 .. Exit from the debug shell.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
diff --git a/modules/reviewing-node-status-usage-and-configuration.adoc b/modules/reviewing-node-status-usage-and-configuration.adoc
index 3f0a26f5d0d0..fb37331cecb3 100644
--- a/modules/reviewing-node-status-usage-and-configuration.adoc
+++ b/modules/reviewing-node-status-usage-and-configuration.adoc
@@ -10,12 +10,12 @@ Review cluster node health status, resource consumption statistics, and node log
 
 .Prerequisites
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `cluster-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
-ifdef::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have installed the OpenShift CLI (`oc`).
 
 .Procedure
diff --git a/modules/reviewing-pod-status.adoc b/modules/reviewing-pod-status.adoc
index c6185abb5fd5..adaf100a3357 100644
--- a/modules/reviewing-pod-status.adoc
+++ b/modules/reviewing-pod-status.adoc
@@ -10,12 +10,12 @@ You can query pod status and error states. You can also query a pod's associated
 
 .Prerequisites
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `cluster-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
-ifdef::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have installed the OpenShift CLI (`oc`).
 * `skopeo` is installed.
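+For example, a pod's status and recent events can be reviewed with commands such as the following, where `my-pod` and `my-namespace` are placeholder names:
+
+[source,terminal]
+----
+$ oc get pod my-pod -n my-namespace -o wide
+$ oc describe pod my-pod -n my-namespace
+----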
diff --git a/modules/rosa-sts-aws-requirements-creating-association.adoc b/modules/rosa-sts-aws-requirements-creating-association.adoc
index 4d05cd876897..42d99f3409d6 100644
--- a/modules/rosa-sts-aws-requirements-creating-association.adoc
+++ b/modules/rosa-sts-aws-requirements-creating-association.adoc
@@ -2,6 +2,8 @@
 //
 // * rosa_planning/rosa-sts-ocm-role.adoc
 // * rosa_planning/rosa-sts-aws-prereqs.adoc
+// * support/troubleshooting/rosa-troubleshooting-iam-resources.adoc
+
 :_mod-docs-content-type: PROCEDURE
 [id="rosa-associating-account_{context}"]
 = Associating your AWS account with IAM roles
diff --git a/modules/rosa-sts-ocm-and-user-role-troubleshooting.adoc b/modules/rosa-sts-ocm-and-user-role-troubleshooting.adoc
index a0abccddaf83..ea557028cc17 100644
--- a/modules/rosa-sts-ocm-and-user-role-troubleshooting.adoc
+++ b/modules/rosa-sts-ocm-and-user-role-troubleshooting.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * support/rosa-troubleshooting-iam-resources.adoc
+// * support/troubleshooting/rosa-troubleshooting-iam-resources.adoc
 
 :_mod-docs-content-type: PROCEDURE
 [id="rosa-sts-ocm-roles-and-permissions-troubleshooting_{context}"]
diff --git a/modules/rosa-sts-ocm-role-creation.adoc b/modules/rosa-sts-ocm-role-creation.adoc
index 7f3d1551a55d..b2048b24a0c1 100644
--- a/modules/rosa-sts-ocm-role-creation.adoc
+++ b/modules/rosa-sts-ocm-role-creation.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
-//* rosa_architecture/rosa-sts-about-iam-resources.adoc
-// * support/rosa-troubleshooting-iam-resources.adoc
+// * rosa_architecture/rosa-sts-about-iam-resources.adoc
+// * support/troubleshooting/rosa-troubleshooting-iam-resources.adoc
 // * rosa_planning/rosa-sts-ocm-role.adoc
 // * rosa_planning/rosa-hcp-prepare-iam-resources.adoc
 :_mod-docs-content-type: PROCEDURE
diff --git a/modules/rosa-sts-user-role-creation.adoc b/modules/rosa-sts-user-role-creation.adoc
index 3f0a839f52a8..5f6a0988e1dd 100644
--- a/modules/rosa-sts-user-role-creation.adoc
+++ b/modules/rosa-sts-user-role-creation.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * support/rosa-troubleshooting-iam-resources.adoc
+// * support/troubleshooting/rosa-troubleshooting-iam-resources.adoc
 // * rosa_planning/rosa-sts-ocm-role.adoc
 // * rosa_planning/rosa-hcp-prepare-iam-resources.adoc
 :_mod-docs-content-type: PROCEDURE
diff --git a/modules/rosa-troubleshooting-cluster-deletion.adoc b/modules/rosa-troubleshooting-cluster-deletion.adoc
index 4de908f74cb0..29247cf04c2e 100644
--- a/modules/rosa-troubleshooting-cluster-deletion.adoc
+++ b/modules/rosa-troubleshooting-cluster-deletion.adoc
@@ -38,4 +38,4 @@ $ rosa create user-role
 [source,terminal]
 ----
 I: Successfully linked role ARN with account
------
\ No newline at end of file
+----
diff --git a/modules/rosa-troubleshooting-elb-service-role.adoc b/modules/rosa-troubleshooting-elb-service-role.adoc
index 3dbb6c4083a8..64783c1be897 100644
--- a/modules/rosa-troubleshooting-elb-service-role.adoc
+++ b/modules/rosa-troubleshooting-elb-service-role.adoc
@@ -14,13 +14,13 @@ Error: Error creating network Load Balancer: AccessDenied: User: arn:aws:sts::xx
 
 .Procedure
 
-To resolve this issue, ensure that the role exists on your AWS account. If not, create this role with the following command:
-
+* To resolve this issue, ensure that the role exists on your AWS account. If not, create this role with the following command:
++
 [source,terminal]
 ----
 aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" || aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
 ----
-
++
 [NOTE]
 ====
 This command only needs to be executed once per account.
diff --git a/modules/rosa-troubleshooting-general-deployment.adoc b/modules/rosa-troubleshooting-general-deployment.adoc
index e7eb9db4afa3..02069708904b 100644
--- a/modules/rosa-troubleshooting-general-deployment.adoc
+++ b/modules/rosa-troubleshooting-general-deployment.adoc
@@ -8,8 +8,9 @@ If a cluster deployment fails, the cluster is put into an "error" state.
 
 .Procedure
 
-Run the following command to get more information:
+* Run the following command to get more information:
 
++
 [source,terminal]
 ----
 $ rosa describe cluster -c --debug
diff --git a/modules/rosa-troubleshooting-installing.adoc b/modules/rosa-troubleshooting-installing.adoc
index 72288e9a85d4..54fcf243f7dd 100644
--- a/modules/rosa-troubleshooting-installing.adoc
+++ b/modules/rosa-troubleshooting-installing.adoc
@@ -39,6 +39,7 @@ $ rosa logs uninstall --cluster=
 $ rosa logs uninstall --cluster= --watch
 ----
 
+ifndef::openshift-rosa-hcp[]
 [id="rosa-faq-verify-permissions-for-clusters-without-sts_{context}"]
 == Verify your AWS account permissions for clusters without STS
 
@@ -50,6 +51,7 @@ $ rosa verify permissions
 ----
 
 If you receive any errors, double check to ensure than an link:https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_type-auth.html#orgs_manage_policies_scp[SCP] is not applied to your AWS account. If you are required to use an SCP, see link:https://www.openshift.com/dedicated/ccs#scp[Red{nbsp}Hat Requirements for Customer Cloud Subscriptions] for details on the minimum required SCP.
+endif::openshift-rosa-hcp[]
 
 [id="rosa-faq-verify-aws-quota_{context}"]
 == Verify your AWS account and quota
diff --git a/modules/rosa-troubleshooting-networking-nlb.adoc b/modules/rosa-troubleshooting-networking-nlb.adoc
index a86f35328dac..003c9eac415e 100644
--- a/modules/rosa-troubleshooting-networking-nlb.adoc
+++ b/modules/rosa-troubleshooting-networking-nlb.adoc
@@ -5,6 +5,6 @@
 [id="rosa-troubleshooting-general-deployment-failure_{context}"]
 = Connectivity issues on clusters with private Network Load Balancers
 
-{product-title} and {hcp-title} clusters created with version {product-version} deploy AWS Network Load Balancers (NLB) by default for the `default` ingress controller. In the case of a private NLB, the NLB's client IP address preservation might cause connections to be dropped where the source and destination are the same host. See the AWS's documentation about how to link:https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-troubleshooting.html#loopback-timeout[Troubleshoot your Network Load Balancer]. This IP address preservation has the implication that any customer workloads cohabitating on the same node with the router pods, may not be able send traffic to the private NLB fronting the ingress controller router.
+{product-title} clusters created with version {product-version} deploy AWS Network Load Balancers (NLB) by default for the `default` ingress controller. In the case of a private NLB, the NLB's client IP address preservation might cause connections to be dropped where the source and destination are the same host. See the AWS documentation about how to link:https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-troubleshooting.html#loopback-timeout[Troubleshoot your Network Load Balancer]. This IP address preservation has the implication that any customer workloads cohabitating on the same node with the router pods might not be able to send traffic to the private NLB fronting the ingress controller router.
 
-To mitigate this impact, customer's should reschedule their workloads onto nodes separate from those where the router pods are scheduled. Alternatively, customers should rely on the internal pod and service networks for accessing other workloads co-located within the same cluster.
\ No newline at end of file
+To mitigate this impact, customers should reschedule their workloads onto nodes separate from those where the router pods are scheduled. Alternatively, customers should rely on the internal pod and service networks for accessing other workloads co-located within the same cluster.
diff --git a/modules/starting-debug-pods-with-root-access.adoc b/modules/starting-debug-pods-with-root-access.adoc
index c5a35ce6df37..64e82118b777 100644
--- a/modules/starting-debug-pods-with-root-access.adoc
+++ b/modules/starting-debug-pods-with-root-access.adoc
@@ -10,12 +10,12 @@ You can start a debug pod with root access, based on a problematic pod's deploym
 
 .Prerequisites
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `cluster-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
-ifdef::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * Your API service is still functional.
 * You have installed the OpenShift CLI (`oc`).
.Procedure diff --git a/modules/support-collecting-host-network-trace.adoc b/modules/support-collecting-host-network-trace.adoc index c2091326f082..fceaa747f7ff 100644 --- a/modules/support-collecting-host-network-trace.adoc +++ b/modules/support-collecting-host-network-trace.adoc @@ -49,14 +49,14 @@ ifndef::openshift-origin[] [source,terminal] ---- $ oc adm must-gather \ - --dest-dir /tmp/captures \ <.> - --source-dir '/tmp/tcpdump/' \ <.> - --image registry.redhat.io/openshift4/network-tools-rhel8:latest \ <.> - --node-selector 'node-role.kubernetes.io/worker' \ <.> - --host-network=true \ <.> - --timeout 30s \ <.> + --dest-dir /tmp/captures \// <.> + --source-dir '/tmp/tcpdump/' \// <.> + --image registry.redhat.io/openshift4/network-tools-rhel8:latest \// <.> + --node-selector 'node-role.kubernetes.io/worker' \// <.> + --host-network=true \// <.> + --timeout 30s \// <.> -- \ - tcpdump -i any \ <.> + tcpdump -i any \// <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300 ---- <.> The `--dest-dir` argument specifies that `oc adm must-gather` stores the packet captures in directories that are relative to `/tmp/captures` on the client machine. You can specify any writable directory. diff --git a/modules/support-collecting-network-trace.adoc b/modules/support-collecting-network-trace.adoc index c94c7ee97cfc..58ec2980cfe4 100644 --- a/modules/support-collecting-network-trace.adoc +++ b/modules/support-collecting-network-trace.adoc @@ -21,11 +21,11 @@ endif::openshift-dedicated[] + * You have installed the OpenShift CLI (`oc`). * You have an existing Red Hat Support case ID. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * You have a Red Hat standard or premium Subscription. * You have a Red Hat Customer Portal account. * You have SSH access to your hosts. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] .Procedure @@ -49,14 +49,14 @@ $ oc debug node/my-cluster-node ---- # chroot /host ---- +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] + -ifndef::openshift-rosa,openshift-dedicated[] [NOTE] ==== {product-title} {product-version} cluster nodes running {op-system-first} are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the {product-title} API is not available, or the kubelet is not properly functioning on the target node, `oc` operations will be impacted. In such situations, it is possible to access nodes using `ssh core@..` instead. ==== -+ -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] + . From within the `chroot` environment console, obtain the node's interface names: + [source,terminal] @@ -75,7 +75,7 @@ endif::openshift-rosa,openshift-dedicated[] ==== If an existing `toolbox` pod is already running, the `toolbox` command outputs `'toolbox-' already exists. Trying to start...`. To avoid `tcpdump` issues, remove the running toolbox container with `podman rm toolbox-` and spawn a new toolbox container. ==== -+ + . Initiate a `tcpdump` session on the cluster node and redirect output to a capture file. This example uses `ens5` as the interface name: + [source,terminal] @@ -119,6 +119,7 @@ $ tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_$(date +%d_%m_%Y-%H_ <1> The toolbox container mounts the host's root directory at `/host`. 
 <1> The toolbox container mounts the host's root directory at `/host`. Reference the absolute path from the toolbox container's root directory, including `/host/`, when specifying files to upload through the `redhat-support-tool` command.
 +
 * Upload the file to an existing Red Hat support case.
+
 .. Concatenate the `sosreport` archive by running the `oc debug node/` command and redirect the output to a file. This command assumes you have exited the previous `oc debug` session:
 +
 [source,terminal]
@@ -126,16 +127,16 @@ $ tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_$(date +%d_%m_%Y-%H_
 $ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap <1>
 ----
 <1> The debug container mounts the host's root directory at `/host`. Reference the absolute path from the debug container's root directory, including `/host`, when specifying target files for concatenation.
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 +
-ifndef::openshift-rosa,openshift-dedicated[]
 [NOTE]
 ====
 {product-title} {product-version} cluster nodes running {op-system-first} are immutable and rely on Operators to apply cluster changes. Transferring a `tcpdump` capture file from a cluster node by using `scp` is not recommended. However, if the {product-title} API is not available, or the kubelet is not properly functioning on the target node, `oc` operations will be impacted. In such situations, it is possible to copy a `tcpdump` capture file from a node by running `scp core@..: `.
 ====
-+
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+
 .. Navigate to an existing support case within link:https://access.redhat.com/support/cases/#/case/list[the *Customer Support* page] of the Red Hat Customer Portal.
-+
+
 .. Select *Attach files* and follow the prompts to upload the file.
 
 // TODO - Add details relating to https://github.com/openshift/must-gather/pull/156 within the procedure.
diff --git a/modules/support-providing-diagnostic-data-to-red-hat.adoc b/modules/support-providing-diagnostic-data-to-red-hat.adoc
index a9e9d07548da..9d37dfb886ff 100644
--- a/modules/support-providing-diagnostic-data-to-red-hat.adoc
+++ b/modules/support-providing-diagnostic-data-to-red-hat.adoc
@@ -20,32 +20,31 @@ In {product-title} deployments, customers who are not using the Customer Cloud S
 endif::openshift-dedicated[]
 +
 * You have installed the OpenShift CLI (`oc`).
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have SSH access to your hosts.
 * You have a Red Hat standard or premium Subscription.
 * You have a Red Hat Customer Portal account.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have an existing Red Hat Support case ID.
 
 .Procedure
 
 * Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer Portal.
 
-. Concatenate a diagnostic file contained on an {product-title} node by using the `oc debug node/` command and redirect the output to a file.
+.. Concatenate a diagnostic file contained on an {product-title} node by using the `oc debug node/` command and redirect the output to a file. The following example copies `/host/var/tmp/my-diagnostic-data.tar.gz` from a debug container to `/var/tmp/my-diagnostic-data.tar.gz`:
 +
 [source,terminal]
 ----
 $ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz <1>
 ----
 <1> The debug container mounts the host's root directory at `/host`. Reference the absolute path from the debug container's root directory, including `/host`, when specifying target files for concatenation.
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 +
-ifndef::openshift-rosa,openshift-dedicated[]
 [NOTE]
 ====
 {product-title} {product-version} cluster nodes running {op-system-first} are immutable and rely on Operators to apply cluster changes. Transferring files from a cluster node by using `scp` is not recommended. However, if the {product-title} API is not available, or the kubelet is not properly functioning on the target node, `oc` operations will be impacted. In such situations, it is possible to copy diagnostic files from a node by running `scp core@..: `.
 ====
-+
-endif::openshift-rosa,openshift-dedicated[]
-. Navigate to an existing support case within link:https://access.redhat.com/support/cases/#/case/list[the *Customer Support* page] of the Red Hat Customer Portal.
-+
-. Select *Attach files* and follow the prompts to upload the file.
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+
+.. Navigate to an existing support case within link:https://access.redhat.com/support/cases/#/case/list[the *Customer Support* page] of the Red Hat Customer Portal.
+.. Select *Attach files* and follow the prompts to upload the file.
diff --git a/modules/support-reviewing-an-access-request-from-an-email-notification.adoc b/modules/support-reviewing-an-access-request-from-an-email-notification.adoc
index 9285de31547f..6b02b2642f59 100644
--- a/modules/support-reviewing-an-access-request-from-an-email-notification.adoc
+++ b/modules/support-reviewing-an-access-request-from-an-email-notification.adoc
@@ -11,10 +11,10 @@
 
 Cluster owners will receive an email notification when Red{nbsp}Hat Site Reliability Engineering (SRE) request access to their cluster with a link to review the request in the {hybrid-console-second}.
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 .Prerequisites
 * You have access to the cluster as a user with the `cluster-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 
 .Procedure
diff --git a/modules/support-reviewing-an-access-request-from-the-hybrid-console.adoc b/modules/support-reviewing-an-access-request-from-the-hybrid-console.adoc
index 15b85176e704..d7b8323ccc4a 100644
--- a/modules/support-reviewing-an-access-request-from-the-hybrid-console.adoc
+++ b/modules/support-reviewing-an-access-request-from-the-hybrid-console.adoc
@@ -9,10 +9,10 @@
 
 Review access requests for your {product-rosa} clusters from the {hybrid-console-second}.
 
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 .Prerequisites
 * You have access to the cluster as a user with the `Cluster Owner` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 
 .Procedure
diff --git a/modules/support-submitting-a-case-enable-approved-access.adoc b/modules/support-submitting-a-case-enable-approved-access.adoc
index 84a1795d0952..1387faff9617 100644
--- a/modules/support-submitting-a-case-enable-approved-access.adoc
+++ b/modules/support-submitting-a-case-enable-approved-access.adoc
@@ -23,7 +23,12 @@
 
 . Enter the following information:
 
-.. In the *Product* field, select *{product-title}* or *{product-title} {hcp-capital}*.
+ifdef::openshift-rosa[]
+.. In the *Product* field, select *{product-title}*.
+endif::openshift-rosa[]
+ifdef::openshift-rosa-hcp[]
+.. In the *Product* field, select *{product-title} {hcp-capital}*.
+endif::openshift-rosa-hcp[]
 
 .. In the *Problem statement* field, enter *Enable ROSA Access Protection*.
 .. Click *See more options*.
@@ -41,4 +46,3 @@
 . Select *Severity* as *4(Low)* and click *Continue*.
 
 . Preview the case details and click *Submit*.
-
diff --git a/modules/telemetry-showing-data-collected-from-the-cluster.adoc b/modules/telemetry-showing-data-collected-from-the-cluster.adoc
index 749e5581f9c9..6eb8ebc4c1bd 100644
--- a/modules/telemetry-showing-data-collected-from-the-cluster.adoc
+++ b/modules/telemetry-showing-data-collected-from-the-cluster.adoc
@@ -18,12 +18,12 @@ ifndef::openshift-enterprise,openshift-webscale,openshift-origin[]
 OpenShift Container Platform
 endif::openshift-enterprise,openshift-webscale,openshift-origin[]
 CLI (`oc`).
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `cluster-admin` role or the `cluster-monitoring-view` role.
-endif::openshift-rosa,openshift-dedicated[]
-ifdef::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 * You have access to the cluster as a user with the `dedicated-admin` role.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
 
 .Procedure
diff --git a/modules/telemetry-what-information-is-collected.adoc b/modules/telemetry-what-information-is-collected.adoc
index 9bb5fd72d6c7..57ab5a237a4b 100644
--- a/modules/telemetry-what-information-is-collected.adoc
+++ b/modules/telemetry-what-information-is-collected.adoc
@@ -40,4 +40,3 @@ endif::openshift-dedicated[]
 * Usage details about Technology Previews and unsupported configurations
 
 Telemetry does not collect identifying information such as usernames or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the link:https://www.redhat.com/en/about/privacy-policy[Red Hat Privacy Statement] for more information about Red Hat's privacy practices.
-
diff --git a/support/approved-access.adoc b/support/approved-access.adoc
index d476f2f8cd5a..28437a09f0f6 100644
--- a/support/approved-access.adoc
+++ b/support/approved-access.adoc
@@ -17,7 +17,7 @@ Elevated access requests to clusters on {product-rosa} clusters and the correspo
 
The email notification contains a link allowing the cluster owner to quickly approve or deny the access request. You must respond in a timely manner; otherwise, there is a risk to your SLA for {product-rosa}.
-* If customers require additional users that are not the cluster owner to receive the email, they can link:https://docs.openshift.com/rosa/rosa_cluster_admin/rosa-cluster-notifications.html#add-notification-contact_rosa-cluster-notifications[add notification cluster contacts].
+* If customers require additional users who are not the cluster owner to receive the email, they can xref:../rosa_cluster_admin/rosa-cluster-notifications.adoc#add-notification-contact_rosa-cluster-notifications[add notification cluster contacts].
* Pending access requests are available in the {hybrid-console-second} on the clusters list or *Access Requests* tab on the cluster overview for the specific cluster.

[NOTE]
====
Denying an access request requires you to complete the *Justification* field.
====

include::modules/support-submitting-a-case-enable-approved-access.adoc[leveloffset=+1]
include::modules/support-reviewing-an-access-request-from-an-email-notification.adoc[leveloffset=+1]
include::modules/support-reviewing-an-access-request-from-the-hybrid-console.adoc[leveloffset=+1]
-
-
diff --git a/support/gathering-cluster-data.adoc b/support/gathering-cluster-data.adoc
index b099c20bad85..3d903bc7a1ce 100644
--- a/support/gathering-cluster-data.adoc
+++ b/support/gathering-cluster-data.adoc
@@ -9,7 +9,7 @@
endif::[]
toc::[]
-ifndef::openshift-origin,openshift-rosa,openshift-dedicated[]
+ifndef::openshift-origin[]
When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.
@@ -17,11 +17,11 @@
It is recommended to provide:
* xref:../support/gathering-cluster-data.adoc#support_gathering_data_gathering-cluster-data[Data gathered using the `oc adm must-gather` command]
* The xref:../support/gathering-cluster-data.adoc#support-get-cluster-id_gathering-cluster-data[unique cluster ID]
-endif::openshift-origin,openshift-rosa,openshift-dedicated[]
+endif::openshift-origin[]
-ifdef::openshift-origin,openshift-rosa,openshift-dedicated[]
+ifdef::openshift-origin[]
You can use the following tools to get debugging information about your {product-title} cluster.
-endif::openshift-origin,openshift-rosa,openshift-dedicated[]
+endif::openshift-origin[]
// About the must-gather tool
include::modules/about-must-gather.adoc[leveloffset=+1]
@@ -39,17 +39,22 @@
// Gathering data about specific features
include::modules/gathering-data-specific-features.adoc[leveloffset=+2]
-== Additional resources
+[role="_additional-resources"]
+.Additional resources
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
* xref:../nodes/cma/nodes-cma-autoscaling-custom.adoc#nodes-cma-autoscaling-custom-gather[Gathering debugging data] for the Custom Metrics Autoscaler.
* link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
ifdef::openshift-rosa[]
* xref:../rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.html[{product-title} update life cycle]
endif::openshift-rosa[]
+ifdef::openshift-rosa-hcp[]
+* xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-life-cycle.html[{product-title} update life cycle]
+endif::openshift-rosa-hcp[]
+
ifdef::openshift-dedicated[]
* xref:../osd_architecture/osd_policy/osd-life-cycle.html[{product-title} update life cycle]
endif::openshift-dedicated[]
@@ -65,18 +70,18 @@
ifndef::openshift-origin[]
include::modules/support-get-cluster-id.adoc[leveloffset=+1]
endif::openshift-origin[]
-ifndef::openshift-origin,openshift-rosa,openshift-dedicated[]
+ifndef::openshift-origin,openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
// About `sosreport`
include::modules/about-sosreport.adoc[leveloffset=+1]
// Generating a `sosreport` archive for an {product-title} cluster node
include::modules/support-generating-a-sosreport-archive.adoc[leveloffset=+1]
-endif::openshift-origin,openshift-rosa,openshift-dedicated[]
+endif::openshift-origin,openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
// Querying bootstrap node journal logs
include::modules/querying-bootstrap-node-journal-logs.adoc[leveloffset=+1]
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
// Querying cluster node journal logs
include::modules/querying-cluster-node-journal-logs.adoc[leveloffset=+1]
diff --git a/support/index.adoc b/support/index.adoc
index 8d8b54c92ad2..1b0a662cfd8c 100644
--- a/support/index.adoc
+++ b/support/index.adoc
@@ -12,20 +12,20 @@ Red Hat offers cluster administrators tools for gathering data for your cluster,
== Get support
xref:../support/getting-support.adoc#getting-support[Get support]: Visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources.
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
[id='support-overview-remote-health-monitoring']
== Remote health monitoring issues
xref:../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[Remote health monitoring issues]: {product-title} collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. Red Hat uses this data to understand and resolve issues in _connected clusters_. Similar to connected clusters, you can xref:../support/remote_health_monitoring/remote-health-reporting-from-restricted-network.adoc#remote-health-reporting-from-restricted-network[Use remote health monitoring in a restricted network].
{product-title} collects data and monitors health using the following:
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

// Removed sentence on restricted networks, not supported in ROSA/OSD
-ifdef::openshift-rosa,openshift-dedicated[]
+ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
[id='support-overview-remote-health-monitoring']
== Remote health monitoring issues
xref:../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[Remote health monitoring issues]: {product-title} collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. Red Hat uses this data to understand and resolve issues in _connected clusters_. {product-title} collects data and monitors health using the following:
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

* *Telemetry*: The Telemetry Client gathers and uploads the metrics values to Red Hat every four minutes and thirty seconds. Red Hat uses this data to:

@@ -42,8 +42,6 @@ You can xref:../support/remote_health_monitoring/showing-data-collected-by-remot
If you have enabled remote health reporting, xref:../support/remote_health_monitoring/using-insights-to-identify-issues-with-your-cluster.adoc#using-insights-to-identify-issues-with-your-cluster[Use Insights to identify issues]. You can optionally disable remote health reporting.

-// must-gather not supported for customers, per Dustin Row, cannot create resource "namespaces"
-ifndef::openshift-rosa,openshift-dedicated[]
[id='support-overview-gather-data-cluster']
== Gather data about your cluster
xref:../support/gathering-cluster-data.adoc#gathering-cluster-data[Gather data about your cluster]: Red Hat recommends gathering your debugging information when opening a support case. This helps Red Hat Support to perform a root cause analysis. A cluster administrator can use the following to gather data about your cluster:
@@ -51,32 +49,33 @@
* *The must-gather tool*: Use the `must-gather` tool to collect information about your cluster and to debug issues, as shown in the sketch after this list.
* *sosreport*: Use the `sosreport` tool to collect configuration details, system information, and diagnostic data for debugging purposes.
* *Cluster ID*: Obtain the unique identifier for your cluster when providing information to Red Hat Support.
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
* *Bootstrap node journal logs*: Gather `bootkube.service` `journald` unit logs and container logs from the bootstrap node to troubleshoot bootstrap-related issues.
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
* *Cluster node journal logs*: Gather `journald` unit logs and logs within `/var/log` on individual cluster nodes to troubleshoot node-related issues.
* *A network trace*: Provide a network packet trace from a specific {product-title} cluster node or a container to Red Hat Support to help troubleshoot network-related issues.
* *Diagnostic data*: Use the `redhat-support-tool` command to gather diagnostic data about your cluster.
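For illustration only, a typical data-gathering session with these tools might look like the following sketch; the destination directory is an assumption:

[source,terminal]
----
$ oc adm must-gather --dest-dir=/tmp/must-gather <1>
$ oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}' <2>
----
<1> Collects debugging data into `/tmp/must-gather` for attachment to a support case.
<2> Prints the unique cluster ID to include when contacting Red Hat Support.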
-endif::openshift-rosa,openshift-dedicated[]
[id='support-overview-troubleshooting-issues']
== Troubleshooting issues
A cluster administrator can monitor and troubleshoot the following {product-title} component issues:

-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
* xref:../support/troubleshooting/troubleshooting-installations.adoc#troubleshooting-installations[Installation issues]: {product-title} installation proceeds through various stages. You can perform the following:
** Monitor the installation stages.
** Determine at which stage installation issues occur.
** Investigate multiple installation issues.
** Gather logs from a failed installation.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

* xref:../support/troubleshooting/verifying-node-health.adoc#verifying-node-health[Node issues]: A cluster administrator can verify and troubleshoot node-related issues by reviewing the status, resource usage, and configuration of a node. You can query the following:
-** Kubelet’s status on a node.
+** Kubelet's status on a node.
** Cluster node journal logs.

-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
* xref:../support/troubleshooting/troubleshooting-crio-issues.adoc#troubleshooting-crio-issues[Crio issues]: A cluster administrator can verify CRI-O container runtime engine status on each cluster node. If you experience container runtime issues, perform the following:
** Gather CRI-O journald unit logs.
@@ -87,15 +86,15 @@
** Enable kdump.
** Test the kdump configuration.
** Analyze a core dump.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
* xref:../support/troubleshooting/troubleshooting-network-issues.adoc#troubleshooting-network-issues[Network issues]: To troubleshoot Open vSwitch issues, a cluster administrator can perform the following:
** Configure the Open vSwitch log level temporarily.
** Configure the Open vSwitch log level permanently.
** Display Open vSwitch logs.
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

* xref:../support/troubleshooting/troubleshooting-operator-issues.adoc#troubleshooting-operator-issues[Operator issues]: A cluster administrator can do the following to resolve Operator issues:
@@ -108,12 +107,10 @@
** Review pod and container logs.
** Start debug pods with root access.

-ifndef::openshift-rosa,openshift-dedicated[]
* xref:../support/troubleshooting/troubleshooting-s2i.adoc#troubleshooting-s2i[Source-to-image issues]: A cluster administrator can observe the S2I stages to determine where in the S2I process a failure occurred. Gather the following to resolve Source-to-Image (S2I) issues:
** Source-to-Image diagnostic data.
** Application diagnostic data to investigate application failure.
-endif::openshift-rosa,openshift-dedicated[]

* xref:../support/troubleshooting/troubleshooting-storage-issues.adoc#troubleshooting-storage-issues[Storage issues]: A multi-attach storage error occurs when a volume cannot be mounted on a new node because the failed node cannot unmount the attached volume.
A cluster administrator can do the following to resolve multi-attach storage issues: @@ -125,11 +122,14 @@ endif::openshift-rosa,openshift-dedicated[] ** Investigate why user-defined metrics are unavailable. ** Determine why Prometheus is consuming a lot of disk space. +// TODO: Include this in ROSA HCP when the Logging book is migrated. +ifndef::openshift-rosa-hcp[] * xref:../observability/logging/cluster-logging.adoc#cluster-logging[Logging issues]: A cluster administrator can follow the procedures in the "Support" and "Troubleshooting logging" sections to resolve logging issues: ** xref:../observability/logging/troubleshooting/cluster-logging-cluster-status.adoc#cluster-logging-clo-status_cluster-logging-cluster-status[Viewing the status of the {clo}] ** xref:../observability/logging/troubleshooting/cluster-logging-cluster-status.adoc#cluster-logging-clo-status-comp_cluster-logging-cluster-status[Viewing the status of {logging} components] ** xref:../observability/logging/troubleshooting/troubleshooting-logging-alerts.adoc#troubleshooting-logging-alerts[Troubleshooting logging alerts] ** xref:../observability/logging/cluster-logging-support.adoc#cluster-logging-must-gather-collecting_cluster-logging-support[Collecting information about your logging environment by using the `oc adm must-gather` command] +endif::openshift-rosa-hcp[] * xref:../support/troubleshooting/diagnosing-oc-issues.adoc#diagnosing-oc-issues[{oc-first} issues]: Investigate {oc-first} issues by increasing the log level. diff --git a/support/remote_health_monitoring/about-remote-health-monitoring.adoc b/support/remote_health_monitoring/about-remote-health-monitoring.adoc index c464e50b9d37..02c695ba95a4 100644 --- a/support/remote_health_monitoring/about-remote-health-monitoring.adoc +++ b/support/remote_health_monitoring/about-remote-health-monitoring.adoc @@ -33,9 +33,9 @@ Telemetry and the Insights Operator enable the following benefits for end-users: * *Predictive analytics*. The insights displayed for your cluster on {cluster-manager-url} are enabled by the information collected from connected clusters. Red Hat is investing in applying deep learning, machine learning, and artificial intelligence automation to help identify issues that {product-title} clusters are exposed to. -ifdef::openshift-rosa,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] On {product-title}, remote health reporting is always enabled. You cannot opt out of it. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] ifdef::openshift-origin[] {product-title} may be installed without a pull secret received at console.redhat.com. In this case default imagestreams will not be imported and telemetry data will not be sent. @@ -43,19 +43,28 @@ endif::[] include::modules/telemetry-about-telemetry.adoc[leveloffset=+1] -ifndef::openshift-rosa,openshift-dedicated[] - [role="_additional-resources"] .Additional resources +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * See the xref:../../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[{product-title} update documentation] for more information about updating or upgrading a cluster. 
-endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifdef::openshift-rosa[] +* See the xref:../../upgrading/rosa-upgrading-sts.adoc#rosa-upgrading-sts[{product-title} upgrade documentation] for more information about upgrading a cluster. +endif::openshift-rosa[] +ifdef::openshift-rosa-hcp[] +* See the xref:../../upgrading/rosa-hcp-upgrading.adoc#rosa-hcp-upgrading[{product-title} upgrade documentation] for more information about upgrading a cluster. +endif::openshift-rosa-hcp[] +ifdef::openshift-dedicated[] +* See the xref:../../upgrading/osd-upgrades.adoc#osd-upgrades[{product-title} upgrade documentation] for more information about upgrading a cluster. +endif::openshift-dedicated[] include::modules/telemetry-what-information-is-collected.adoc[leveloffset=+2] + // Module is not in OCP -ifdef::openshift-rosa,openshift-dedicated[] +ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/telemetry-user-telemetry.adoc[leveloffset=+2] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [role="_additional-resources"] .Additional resources @@ -64,46 +73,49 @@ endif::openshift-rosa,openshift-dedicated[] * See the link:https://github.com/openshift/cluster-monitoring-operator/blob/master/manifests/0000_50_cluster-monitoring-operator_04-config.yaml[upstream cluster-monitoring-operator source code] for a list of the attributes that Telemetry gathers from Prometheus. -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * Telemetry is installed and enabled by default. If you need to opt out of remote health reporting, see xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting[Opting out of remote health reporting]. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/insights-operator-about.adoc[leveloffset=+1] -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [role="_additional-resources"] .Additional resources * The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting[Opting out of remote health reporting]. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/insights-operator-what-information-is-collected.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * See xref:../../support/remote_health_monitoring/showing-data-collected-by-remote-health-monitoring.adoc#insights-operator-showing-data-collected-from-the-cluster_showing-data-collected-by-remote-health-monitoring[Showing data collected by the Insights Operator] for details about how to review the data that is collected by the Insights Operator. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * link:https://access.redhat.com/solutions/7066188[What data is being collected by the Insights Operator in OpenShift?] 
-ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * xref:../../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling[Enabling features using feature gates] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * The Insights Operator source code is available for review and contribution. See the link:https://github.com/openshift/insights-operator/blob/master/docs/gathered-data.md[Insights Operator upstream project] for a list of the items collected by the Insights Operator. include::modules/understanding-telemetry-and-insights-operator-data-flow.adoc[leveloffset=+1] +// TODO: Add the first xref to ROSA HCP when the monitoring book is migrated. +ifndef::openshift-rosa-hcp[] [role="_additional-resources"] .Additional resources * See xref:../../observability/monitoring/monitoring-overview.adoc#monitoring-overview_monitoring-overview[Monitoring overview] for more information about the {product-title} monitoring stack. +endif::openshift-rosa-hcp[] -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * See xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[Configuring your firewall] for details about configuring a firewall and enabling endpoints for Telemetry and Insights -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [id="additional-details-about-how-remote-health-monitoring-data-is-used"] == Additional details about how remote health monitoring data is used @@ -118,7 +130,7 @@ Red Hat employs technical and organizational measures designed to protect the te .Sharing -Red Hat may share the data collected through Telemetry and the Insights Operator internally within Red Hat to improve your user experience. Red Hat may share telemetry and configuration data with its business partners in an aggregated form that does not identify customers to help the partners better understand their markets and their customers’ use of Red Hat offerings or to ensure the successful integration of products jointly supported by those partners. +Red Hat may share the data collected through Telemetry and the Insights Operator internally within Red Hat to improve your user experience. Red Hat may share telemetry and configuration data with its business partners in an aggregated form that does not identify customers to help the partners better understand their markets and their customers' use of Red Hat offerings or to ensure the successful integration of products jointly supported by those partners. .Third parties @@ -126,6 +138,6 @@ Red Hat may engage certain third parties to assist in the collection, analysis, .User control / enabling and disabling telemetry and configuration data collection -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] You may disable {product-title} Telemetry and the Insights Operator by following the instructions in xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting[Opting out of remote health reporting]. 
-endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] diff --git a/support/remote_health_monitoring/showing-data-collected-by-remote-health-monitoring.adoc b/support/remote_health_monitoring/showing-data-collected-by-remote-health-monitoring.adoc index 2844c800ae1a..af97adfc1e32 100644 --- a/support/remote_health_monitoring/showing-data-collected-by-remote-health-monitoring.adoc +++ b/support/remote_health_monitoring/showing-data-collected-by-remote-health-monitoring.adoc @@ -14,6 +14,6 @@ As an administrator, you can review the metrics collected by Telemetry and the I include::modules/telemetry-showing-data-collected-from-the-cluster.adoc[leveloffset=+1] // cannot create resource "pods/exec" in API group "" in the namespace "openshift-insights" -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/insights-operator-showing-data-collected-from-the-cluster.adoc[leveloffset=+1] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] diff --git a/support/remote_health_monitoring/using-insights-operator.adoc b/support/remote_health_monitoring/using-insights-operator.adoc index 3dd0b0253898..15208afd8ac5 100644 --- a/support/remote_health_monitoring/using-insights-operator.adoc +++ b/support/remote_health_monitoring/using-insights-operator.adoc @@ -14,32 +14,32 @@ The Insights Operator periodically gathers configuration and component failure s [role="_additional-resources"] .Additional resources -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting[Opting out of remote health reporting]. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * For more information on using Insights Advisor to identify issues with your cluster, see xref:../../support/remote_health_monitoring/using-insights-to-identify-issues-with-your-cluster.adoc#using-insights-to-identify-issues-with-your-cluster[Using Insights to identify issues with your cluster]. 
-ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/insights-operator-configuring.adoc[leveloffset=+1] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/understanding-insights-operator-alerts.adoc[leveloffset=+1] -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/disabling-insights-operator-alerts.adoc[leveloffset=+2] include::modules/enabling-insights-operator-alerts.adoc[leveloffset=+2] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] // cannot create resource "pods/exec" in API group "" in the namespace "openshift-insights" -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/insights-operator-downloading-archive.adoc[leveloffset=+1] // cannot download archive using previous module -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] // InsightsDataGather is a Tech Preview feature. When the feature goes GA, verify if it can be added to ROSA/OSD. // tech preview feature -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [id="running-insights-operator-gather_using-insights-operator"] == Running an Insights Operator gather operation @@ -61,11 +61,11 @@ include::modules/running-insights-operator-gather-cli.adoc[leveloffset=+2] .Additional resources * link:https://github.com/openshift/insights-operator/blob/master/docs/gathered-data.md[Insights Operator Gathered Data GitHub repository] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] // cannot list resource "secrets" in API group "" in the namespace "openshift-config" -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/disabling-insights-operator-gather.adoc[leveloffset=+2] include::modules/enabling-insights-operator-gather.adoc[leveloffset=+2] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/obfuscating-dvo-data.adoc[leveloffset=+1] diff --git a/support/troubleshooting/investigating-monitoring-issues.adoc b/support/troubleshooting/investigating-monitoring-issues.adoc index b3496d34621f..c3d2ca739341 100644 --- a/support/troubleshooting/investigating-monitoring-issues.adoc +++ b/support/troubleshooting/investigating-monitoring-issues.adoc @@ -18,20 +18,28 @@ Use these procedures if the following issues occur: // Investigating why user-defined metrics are unavailable include::modules/monitoring-investigating-why-user-defined-metrics-are-unavailable.adoc[leveloffset=+1] +// TODO: Add the additional resources for ROSA HCP when the Observability book is added. 
+ifndef::openshift-rosa-hcp[]
[role="_additional-resources"]
.Additional resources

* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#creating-user-defined-workload-monitoring-configmap_configuring-the-monitoring-stack[Creating a user-defined workload monitoring config map]
* See xref:../../observability/monitoring/managing-metrics.adoc#specifying-how-a-service-is-monitored_managing-metrics[Specifying how a service is monitored] for details on how to create a service monitor or pod monitor
* See xref:../../observability/monitoring/managing-metrics.adoc#getting-detailed-information-about-a-target_managing-metrics[Getting detailed information about a metrics target]
+endif::openshift-rosa-hcp[]

// Determining why Prometheus is consuming a lot of disk space
include::modules/monitoring-determining-why-prometheus-is-consuming-disk-space.adoc[leveloffset=+1]

+// TODO: Add the additional resources for ROSA HCP when the Observability book is added.
+ifndef::openshift-rosa-hcp[]
[role="_additional-resources"]
.Additional resources

* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#setting-scrape-and-evaluation-intervals-limits-for-user-defined-projects_configuring-the-monitoring-stack[Setting scrape and evaluation intervals and enforced limits for user-defined projects]
+endif::openshift-rosa-hcp[]

// Resolving the KubePersistentVolumeFillingUp alert firing for Prometheus
-include::modules/monitoring-resolving-the-kubepersistentvolumefillingup-alert-firing-for-prometheus.adoc[leveloffset=+1]
\ No newline at end of file
+ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
+include::modules/monitoring-resolving-the-kubepersistentvolumefillingup-alert-firing-for-prometheus.adoc[leveloffset=+1]
+endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
diff --git a/support/troubleshooting/investigating-pod-issues.adoc b/support/troubleshooting/investigating-pod-issues.adoc
index 67860e98e0af..f768a51a788e 100644
--- a/support/troubleshooting/investigating-pod-issues.adoc
+++ b/support/troubleshooting/investigating-pod-issues.adoc
@@ -8,7 +8,7 @@
toc::[]

{product-title} leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host. A pod is the smallest compute unit that can be defined, deployed, and managed on {product-title} {product-version}.

-After a pod is defined, it is assigned to run on a node until its containers exit, or until it is removed. Depending on policy and exit code, Pods are either removed after exiting or retained so that their logs can be accessed.
+After a pod is defined, it is assigned to run on a node until its containers exit, or until it is removed. Depending on policy and exit code, pods are either removed after exiting or retained so that their logs can be accessed.

The first thing to check when pod issues arise is the pod's status. If an explicit pod failure has occurred, observe the pod's error state to identify specific image, container, or pod network issues. Focus diagnostic data collection according to the error state. Review pod event messages, as well as pod and container log information. Diagnose issues dynamically by accessing running pods on the command line, or start a debug pod with root access based on a problematic pod's deployment configuration.
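For example, a first-pass triage of a failing pod might look like the following sketch; the pod, container, and namespace names are placeholders:

[source,terminal]
----
$ oc describe pod/my-pod -n my-namespace <1>
$ oc logs pod/my-pod -c my-container -n my-namespace --previous <2>
$ oc debug pod/my-pod -n my-namespace <3>
----
<1> Review the pod's status, conditions, and recent events.
<2> Retrieve logs from the previous container instance after a crash.
<3> Start an interactive debug copy of the pod.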
diff --git a/support/troubleshooting/rosa-troubleshooting-iam-resources.adoc b/support/troubleshooting/rosa-troubleshooting-iam-resources.adoc index ecd1d060fc4b..1e8628911388 100644 --- a/support/troubleshooting/rosa-troubleshooting-iam-resources.adoc +++ b/support/troubleshooting/rosa-troubleshooting-iam-resources.adoc @@ -23,3 +23,10 @@ include::modules/rosa-sts-user-role-creation.adoc[leveloffset=+2] include::modules/rosa-sts-aws-requirements-creating-association.adoc[leveloffset=+2] include::modules/rosa-sts-aws-requirements-creating-multi-association.adoc[leveloffset=+2] + +// TODO: Add the additional resource to ROSA HCP when the Architecture book is added. +ifndef::openshift-rosa-hcp[] +[role="_additional-resources"] +== Additional resources +* See xref:../../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies[Account-wide IAM role and policy reference] for a list of IAM roles needed for cluster creation. +endif::openshift-rosa-hcp[] diff --git a/support/troubleshooting/troubleshooting-operator-issues.adoc b/support/troubleshooting/troubleshooting-operator-issues.adoc index d96ec389f167..0e969a9127f7 100644 --- a/support/troubleshooting/troubleshooting-operator-issues.adoc +++ b/support/troubleshooting/troubleshooting-operator-issues.adoc @@ -18,10 +18,14 @@ If you experience Operator issues, verify Operator subscription status. Check Op // Operator subscription condition types include::modules/olm-status-conditions.adoc[leveloffset=+1] + +// TODO: Add this xref when the Operators book is added to ROSA HCP. +ifndef::openshift-rosa-hcp[] [role="_additional-resources"] .Additional resources * xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-cs-health_olm-understanding-olm[Catalog health requirements] +endif::openshift-rosa-hcp[] // Viewing Operator subscription status by using the CLI include::modules/olm-status-viewing-cli.adoc[leveloffset=+1] @@ -32,46 +36,45 @@ include::modules/olm-cs-status-cli.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -ifndef::openshift-rosa,openshift-dedicated[] +// TODO: Add this xref when the Operators book is added to ROSA HCP. 
+ifndef::openshift-rosa-hcp[] * xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-catalogsource_olm-understanding-olm[Operator Lifecycle Manager concepts and resources -> Catalog source] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa-hcp[] * gRPC documentation: link:https://grpc.github.io/grpc/core/md_doc_connectivity-semantics-and-api.html[States of Connectivity] -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-accessing-images-private-registries_olm-managing-custom-catalogs[Accessing images for Operators from private registries] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] // Querying Operator Pod status include::modules/querying-operator-pod-status.adoc[leveloffset=+1] // Gathering Operator logs +ifndef::openshift-rosa-hcp[] include::modules/gathering-operator-logs.adoc[leveloffset=+1] +endif::openshift-rosa-hcp[] // cannot patch resource "machineconfigpools" -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] // Disabling Machine Config Operator from autorebooting include::modules/troubleshooting-disabling-autoreboot-mco.adoc[leveloffset=+1] include::modules/troubleshooting-disabling-autoreboot-mco-console.adoc[leveloffset=+2] include::modules/troubleshooting-disabling-autoreboot-mco-cli.adoc[leveloffset=+2] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] // Refreshing failing subscriptions // cannot delete resource "clusterserviceversions", "jobs" in API group "operators.coreos.com" in the namespace "openshift-apiserver" -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/olm-refresh-subs.adoc[leveloffset=+1] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] // Reinstalling Operators after failed uninstallation // cannot delete resource "customresourcedefinitions" -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] include::modules/olm-reinstall.adoc[leveloffset=+1] -endif::openshift-rosa,openshift-dedicated[] -ifndef::openshift-rosa,openshift-dedicated[] [role="_additional-resources"] .Additional resources * xref:../../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster] * xref:../../operators/admin/olm-adding-operators-to-cluster.adoc#olm-adding-operators-to-a-cluster[Adding Operators to a cluster] -endif::openshift-rosa,openshift-dedicated[] - - +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] diff --git a/support/troubleshooting/verifying-node-health.adoc b/support/troubleshooting/verifying-node-health.adoc index ea0503438e17..83b1b818402a 100644 --- a/support/troubleshooting/verifying-node-health.adoc +++ b/support/troubleshooting/verifying-node-health.adoc @@ -10,12 +10,12 @@ toc::[] include::modules/reviewing-node-status-usage-and-configuration.adoc[leveloffset=+1] // cannot create resource "namespaces" -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] // Querying the kubelet's status on a node include::modules/querying-kubelet-status-on-a-node.adoc[leveloffset=+1] // cannot get resource 
"nodes/proxy" // Querying node journal logs include::modules/querying-cluster-node-journal-logs.adoc[leveloffset=+1] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] From 267fee822bd4e98eacb28098cefd8be323992838 Mon Sep 17 00:00:00 2001 From: Shane Lovern Date: Mon, 10 Feb 2025 09:30:13 +0000 Subject: [PATCH 169/669] TELCODOCS-1975 - fixing typo --- ...-configuring-the-hub-cluster-for-backup-and-restore.adoc | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/modules/ztp-configuring-the-hub-cluster-for-backup-and-restore.adoc b/modules/ztp-configuring-the-hub-cluster-for-backup-and-restore.adoc index 8a89a5da9636..131109fdd42d 100644 --- a/modules/ztp-configuring-the-hub-cluster-for-backup-and-restore.adoc +++ b/modules/ztp-configuring-the-hub-cluster-for-backup-and-restore.adoc @@ -6,7 +6,7 @@ [id="ztp-configuring-the-hub-cluster-for-backup-and-restore_{context}"] = Configuring the hub cluster for backup and restore -You can use {ztp} to configure a set of policies to backup `BareMetalHost` resources. +You can use {ztp} to configure a set of policies to back up `BareMetalHost` resources. This allows you to recover data from a failed hub cluster and deploy a replacement cluster using {rh-rhacm-first}. .Prerequisites @@ -118,7 +118,7 @@ NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE baremetal-ns baremetal-name false 50s ---- -. Verify that the policy has applied the label `cluster.open-cluster-management.io/backup=cluster-activation` to all these resources, by runing the following command: +. Verify that the policy has applied the label `cluster.open-cluster-management.io/backup=cluster-activation` to all these resources, by running the following command: + [source,terminal] ---- @@ -158,7 +158,7 @@ When you restore `BareMetalHosts` resources as part of restoring the cluster act The following {rh-rhacm} `Restore` resource example restores activation resources, including `BareMetalHosts`, and also restores the status for the `BareMetalHosts` resources: [source,yaml] ---- - apiVersion: cluster.open-cluster-management.io/v1beta1 +apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Restore metadata: name: restore-acm-bmh From 25789cb140ed5333596228bd452cf870b7da1d42 Mon Sep 17 00:00:00 2001 From: Olivia Brown Date: Wed, 11 Dec 2024 14:14:38 -0500 Subject: [PATCH 170/669] OCPBUGS-44063: Removing RHEL 9.2 update requirement 4.15+ --- .../installation-minimum-resource-requirements.adoc | 2 +- modules/installation-vsphere-infrastructure.adoc | 2 +- .../updating-cluster-prepare.adoc | 10 ---------- 3 files changed, 2 insertions(+), 12 deletions(-) diff --git a/modules/installation-minimum-resource-requirements.adoc b/modules/installation-minimum-resource-requirements.adoc index 03eade152e09..04ddba3248cf 100644 --- a/modules/installation-minimum-resource-requirements.adoc +++ b/modules/installation-minimum-resource-requirements.adoc @@ -242,7 +242,7 @@ For {product-title} version 4.18, RHCOS is based on RHEL version 9.4, which upda * IBM Power architecture requires Power 9 ISA * s390x architecture requires z14 ISA -For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/9.0_release_notes/index#architectures[RHEL Architectures]. +For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/9.2_release_notes/index#architectures[Architectures] ({op-system-base} documentation). 
==== ifdef::azure[] diff --git a/modules/installation-vsphere-infrastructure.adoc b/modules/installation-vsphere-infrastructure.adoc index 2c64fa747d75..e8c77d5417f8 100644 --- a/modules/installation-vsphere-infrastructure.adoc +++ b/modules/installation-vsphere-infrastructure.adoc @@ -44,7 +44,7 @@ You must ensure that the time on your ESXi hosts is synchronized before you inst |CPU micro-architecture |x86-64-v2 or higher -|OpenShift 4.13 and later are based on RHEL 9.2 host operating system which raised the microarchitecture requirements to x86-64-v2. See the link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/9.0_release_notes/index#architectures[RHEL Microarchitecture requirements documentation]. You can verify compatibility by following the procedures outlined in link:https://access.redhat.com/solutions/7052996[this KCS article]. +|{product-title} version 4.13 and later are based on the {op-system-base} 9.2 host operating system, which raised the microarchitecture requirements to x86-64-v2. See link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/9.2_release_notes/index#architectures[Architectures] in the {op-system-base} documentation. |=== [IMPORTANT] diff --git a/updating/preparing_for_updates/updating-cluster-prepare.adoc b/updating/preparing_for_updates/updating-cluster-prepare.adoc index 5b45c662ca43..628f141f42dc 100644 --- a/updating/preparing_for_updates/updating-cluster-prepare.adoc +++ b/updating/preparing_for_updates/updating-cluster-prepare.adoc @@ -14,16 +14,6 @@ To do: Remove this comment once 4.13 docs are EOL. Learn more about administrative tasks that cluster admins must perform to successfully initialize an update, as well as optional guidelines for ensuring a successful update. -[id="rhel-micro-architecture-update-requirements"] -== {op-system-base} 9.2 micro-architecture requirement change - -{product-title} is now based on the {op-system-base} 9.2 host operating system. The micro-architecture requirements are now increased to x86_64-v2, Power9, and Z14. See the link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/9.0_release_notes/index#architectures[RHEL micro-architecture requirements documentation]. You can verify compatibility before updating by following the procedures outlined in this link:https://access.redhat.com/solutions/7052996[KCS article]. - -[IMPORTANT] -==== -Without the correct micro-architecture requirements, the update process will fail. Make sure you purchase the appropriate subscription for each architecture. 
For more information, see link:https://access.redhat.com/products/red-hat-enterprise-linux#addl-arch[Get Started with Red Hat Enterprise Linux - additional architectures] -==== - [id="kube-api-removals_{context}"] == Kubernetes API removals From 8e4040434f248f471389746b92b6c11414e1d5e0 Mon Sep 17 00:00:00 2001 From: Liang Xia Date: Tue, 11 Feb 2025 15:39:16 +0800 Subject: [PATCH 171/669] OCPBUGS#50553: Fix typo --- modules/containers-signature-verify-unsigned.adoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/containers-signature-verify-unsigned.adoc b/modules/containers-signature-verify-unsigned.adoc index 45745d7a0d38..2743fd9b76dd 100644 --- a/modules/containers-signature-verify-unsigned.adoc +++ b/modules/containers-signature-verify-unsigned.adoc @@ -12,7 +12,7 @@ For example, the image references lacking a verifiable signature are contained i .Example release info output [source,terminal] ---- -$ oc adm release info quay.io/openshift-release-dev/ ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec <1> +$ oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec <1> quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb <2> ---- @@ -23,4 +23,4 @@ quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d0 == Automated verification during updates Verification of signatures is automatic. The OpenShift Cluster Version Operator (CVO) verifies signatures on the release images during an {product-title} update. This is an internal process. An {product-title} installation or update fails if the automated verification fails. -Verification of signatures can also be done manually using the `skopeo` command-line utility. \ No newline at end of file +Verification of signatures can also be done manually using the `skopeo` command-line utility. From efa93fd4acd47358345e7535867e721e6cf4896d Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Tue, 21 Jan 2025 10:20:54 +0000 Subject: [PATCH 172/669] TELCODOCS-2149 OCPBUGS-28647 tuned deferred updates --- modules/defer-application-tuning-proc.adoc | 221 ++++++++++++++++++ modules/defer-applicaton-tuning-example.adoc | 58 +++++ .../using-node-tuning-operator.adoc | 4 + 3 files changed, 283 insertions(+) create mode 100644 modules/defer-application-tuning-proc.adoc create mode 100644 modules/defer-applicaton-tuning-example.adoc diff --git a/modules/defer-application-tuning-proc.adoc b/modules/defer-application-tuning-proc.adoc new file mode 100644 index 000000000000..453710ffd4f7 --- /dev/null +++ b/modules/defer-application-tuning-proc.adoc @@ -0,0 +1,221 @@ +// Module included in the following assemblies: +// +// * scalability_and_performance/using-node-tuning-operator.adoc + +:_mod-docs-content-type: PROCEDURE +[id="defer-application-of-tuning-changes-example_{context}"] += Deferring application of tuning changes: An example + +The following worked example describes how to defer the application of tuning changes by using the Node Tuning Operator. + +.Prerequisites +* You have `cluster-admin` role access. +* You have applied a performance profile to your cluster. +* A `MachineConfigPool` resource, for example, `worker-cnf` is configured to ensure that the profile is only applied to the designated nodes. + +.Procedure + +. 
Check what profiles are currently applied to your cluster by running the following command:
+
[source,shell]
----
$ oc -n openshift-cluster-node-tuning-operator get tuned
----
+
.Example output
[source,shell]
----
NAME                                     AGE
default                                  63m
openshift-node-performance-performance   21m
----

. Check the machine config pools in your cluster by running the following command:
+
[source,shell]
----
$ oc get mcp
----
+
.Example output
[source,shell]
----
NAME         CONFIG                                                 UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master       rendered-master-79a26af9f78ced61fa8ccd309d3c859c       True      False      False      3              3                   3                     0                      157m
worker       rendered-worker-d9352e91a1b14de7ef453fa54480ce0e       True      False      False      2              2                   2                     0                      157m
worker-cnf   rendered-worker-cnf-f398fc4fcb2b20104a51e744b8247272   True      False      False      1              1                   1                     0                      92m
----

. Describe the currently applied performance profile by running the following command:
+
[source,shell]
----
$ oc describe performanceprofile performance | grep Tuned
----
+
.Example output
[source,shell]
----
Tuned: openshift-cluster-node-tuning-operator/openshift-node-performance-performance
----

. Verify the existing value of the `kernel.shmmni` sysctl parameter:

.. Run the following command to display the node names:
+
[source,shell]
----
$ oc get nodes
----
+
.Example output
[source,shell]
----
NAME                          STATUS   ROLES                  AGE    VERSION
ip-10-0-26-151.ec2.internal   Ready    worker,worker-cnf      116m   v1.30.6
ip-10-0-46-60.ec2.internal    Ready    worker                 115m   v1.30.6
ip-10-0-52-141.ec2.internal   Ready    control-plane,master   123m   v1.30.6
ip-10-0-6-97.ec2.internal     Ready    control-plane,master   121m   v1.30.6
ip-10-0-86-145.ec2.internal   Ready    worker                 117m   v1.30.6
ip-10-0-92-228.ec2.internal   Ready    control-plane,master   123m   v1.30.6
----

.. Run the following command to display the current value of the `kernel.shmmni` sysctl parameter on the node `ip-10-0-26-151.ec2.internal`:
+
[source,shell]
----
$ oc debug node/ip-10-0-26-151.ec2.internal -q -- chroot /host sysctl kernel.shmmni
----
+
.Example output
[source,shell]
----
kernel.shmmni = 4096
----

. Create a profile patch, for example, `perf-patch.yaml`, that changes the `kernel.shmmni` sysctl parameter to `8192`. Defer the application of the change until the next manual restart by using the `always` method, applying the following configuration:
+
[source,yaml]
----
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: performance-patch
  namespace: openshift-cluster-node-tuning-operator
  annotations:
    tuned.openshift.io/deferred: "always"
spec:
  profile:
  - name: performance-patch
    data: |
      [main]
      summary=Configuration changes profile inherited from performance created tuned
      include=openshift-node-performance-performance <1>
      [sysctl]
      kernel.shmmni=8192 <2>
  recommend:
  - machineConfigLabels:
      machineconfiguration.openshift.io/role: worker-cnf <3>
    priority: 19
    profile: performance-patch
----
+
<1> The `include` directive is used to inherit the `openshift-node-performance-performance` profile. This is a best practice to ensure that the profile is not missing any required settings.
<2> The `kernel.shmmni` sysctl parameter is being changed to `8192`.
<3> The `machineConfigLabels` field is used to target the `worker-cnf` role.

. Apply the profile patch by running the following command:
+
[source,shell]
----
$ oc apply -f perf-patch.yaml
----
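. Optionally, confirm that the deferred annotation is set on the new `Tuned` object. The following command is a minimal sketch; it assumes the annotation key used in the patch, with its dots escaped for jsonpath:
+
[source,shell]
----
$ oc -n openshift-cluster-node-tuning-operator get tuned performance-patch \
  -o jsonpath='{.metadata.annotations.tuned\.openshift\.io/deferred}{"\n"}'
----
+
.Example output
[source,shell]
----
always
----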
. Run the following command to verify that the profile patch is waiting for the next node restart:
+
[source,shell]
----
$ oc -n openshift-cluster-node-tuning-operator get profile
----
+
.Example output
[source,shell]
----
NAME                          TUNED                     APPLIED   DEGRADED   MESSAGE                                                                             AGE
ip-10-0-26-151.ec2.internal   performance-patch         False     True       The TuneD daemon profile is waiting for the next node restart: performance-patch   126m
ip-10-0-46-60.ec2.internal    openshift-node            True      False      TuneD profile applied.                                                              125m
ip-10-0-52-141.ec2.internal   openshift-control-plane   True      False      TuneD profile applied.                                                              130m
ip-10-0-6-97.ec2.internal     openshift-control-plane   True      False      TuneD profile applied.                                                              130m
ip-10-0-86-145.ec2.internal   openshift-node            True      False      TuneD profile applied.                                                              126m
ip-10-0-92-228.ec2.internal   openshift-control-plane   True      False      TuneD profile applied.                                                              130m
----

. Confirm that the value of the `kernel.shmmni` sysctl parameter remains unchanged before a restart:

.. Run the following command to confirm that the `performance-patch` change to the `kernel.shmmni` sysctl parameter has not yet been applied on the node `ip-10-0-26-151.ec2.internal`:
+
[source,shell]
----
$ oc debug node/ip-10-0-26-151.ec2.internal -q -- chroot /host sysctl kernel.shmmni
----
+
.Example output
[source,shell]
----
kernel.shmmni = 4096
----

. Restart the node `ip-10-0-26-151.ec2.internal` to apply the required changes by running the following command:
+
[source,shell]
----
$ oc debug node/ip-10-0-26-151.ec2.internal -q -- chroot /host reboot&
----

. In another terminal window, run the following command to verify that the node has restarted:
+
[source,shell]
----
$ watch oc get nodes
----
+
Wait for the node `ip-10-0-26-151.ec2.internal` to transition back to the `Ready` state.

. Run the following command to verify that the profile patch has been applied after the restart:
+
[source,shell]
----
$ oc -n openshift-cluster-node-tuning-operator get profile
----
+
.Example output
[source,shell]
----
NAME                          TUNED                     APPLIED   DEGRADED   MESSAGE                  AGE
ip-10-0-26-151.ec2.internal   performance-patch         True      False      TuneD profile applied.   143m
ip-10-0-46-60.ec2.internal    openshift-node            True      False      TuneD profile applied.   142m
ip-10-0-52-141.ec2.internal   openshift-control-plane   True      False      TuneD profile applied.   147m
ip-10-0-6-97.ec2.internal     openshift-control-plane   True      False      TuneD profile applied.   147m
ip-10-0-86-145.ec2.internal   openshift-node            True      False      TuneD profile applied.   143m
ip-10-0-92-228.ec2.internal   openshift-control-plane   True      False      TuneD profile applied.   147m
----

. Check that the value of the `kernel.shmmni` sysctl parameter has changed after the restart:

.. Run the following command to verify that the `kernel.shmmni` sysctl parameter change has been applied on the node `ip-10-0-26-151.ec2.internal`:
+
[source,shell]
----
$ oc debug node/ip-10-0-26-151.ec2.internal -q -- chroot /host sysctl kernel.shmmni
----
+
.Example output
[source,shell]
----
kernel.shmmni = 8192
----
+
[NOTE]
====
An additional restart results in the restoration of the original value of the `kernel.shmmni` sysctl parameter.
+==== \ No newline at end of file diff --git a/modules/defer-applicaton-tuning-example.adoc b/modules/defer-applicaton-tuning-example.adoc new file mode 100644 index 000000000000..2f7f14ede4dc --- /dev/null +++ b/modules/defer-applicaton-tuning-example.adoc @@ -0,0 +1,58 @@ +// Module included in the following assemblies: +// +// * scalability_and_performance/using-node-tuning-operator.adoc + +:_mod-docs-content-type: CONCEPT +[id="defer-application-of-tuning-changes_{context}"] += Deferring application of tuning changes + +As an administrator, use the Node Tuning Operator (NTO) to update custom resources (CRs) on a running system and make tuning changes. For example, they can update or add a sysctl parameter to the [sysctl] section of the tuned object. When administrators apply a tuning change, the NTO prompts TuneD to reprocess all configurations, causing the tuned process to roll back all tuning and then reapply it. + +Latency-sensitive applications may not tolerate the removal and reapplication of the tuned profile, as it can briefly disrupt performance. This is particularly critical for configurations that partition CPUs and manage process or interrupt affinity using the performance profile. To avoid this issue, {product-title} introduced new methods for applying tuning changes. Before {product-title} 4.17, the only available method, immediate, applied changes instantly, often triggering a tuned restart. + +The following additional methods are supported: + + * `always`: Every change is applied at the next node restart. + * `update`: When a tuning change modifies a tuned profile, it is applied immediately by default and takes effect as soon as possible. When a tuning change does not cause a tuned profile to change and its values are modified in place, it is treated as always. + +Enable this feature by adding the annotation `tuned.openshift.io/deferred`. The following table summarizes the possible values for the annotation: + +[cols="3,3",options="header"] +|=== +|Annotation value | Description +|missing | The change is applied immediately. +|always | The change is applied at the next node restart. +|update | The change is applied immediately if it causes a profile change, otherwise at the next node restart. +|=== + +The following example demonstrates how to apply a change to the `kernel.shmmni` sysctl parameter by using the `always` method: + +.Example +[source,yaml] +---- +apiVersion: tuned.openshift.io/v1 +kind: Tuned +metadata: + name: performance-patch + namespace: openshift-cluster-node-tuning-operator + annotations: + tuned.openshift.io/deferred: "always" +spec: + profile: + - name: performance-patch + data: | + [main] + summary=Configuration changes profile inherited from performance created tuned + include=openshift-node-performance-performance <1> + [sysctl] + kernel.shmmni=8192 <2> + recommend: + - machineConfigLabels: + machineconfiguration.openshift.io/role: worker-cnf <3> + priority: 19 + profile: performance-patch +---- + +<1> The `include` directive is used to inherit the `openshift-node-performance-performance` profile. This is a best practice to ensure that the profile is not missing any required settings. +<2> The `kernel.shmmni` sysctl parameter is being changed to `8192`. +<3> The `machineConfigLabels` field is used to target the `worker-cnf` role. Configure a `MachineConfigPool` resource to ensure the profile is applied only to the correct nodes. 
\ No newline at end of file diff --git a/scalability_and_performance/using-node-tuning-operator.adoc b/scalability_and_performance/using-node-tuning-operator.adoc index 538cf4881d4c..4bbfc510336b 100644 --- a/scalability_and_performance/using-node-tuning-operator.adoc +++ b/scalability_and_performance/using-node-tuning-operator.adoc @@ -21,6 +21,10 @@ include::modules/custom-tuning-specification.adoc[leveloffset=+1] include::modules/custom-tuning-example.adoc[leveloffset=+1] +include::modules/defer-applicaton-tuning-example.adoc[leveloffset=+1] + +include::modules/defer-application-tuning-proc.adoc[leveloffset=+2] + include::modules/node-tuning-operator-supported-tuned-daemon-plug-ins.adoc[leveloffset=+1] include::modules/node-tuning-hosted-cluster.adoc[leveloffset=+1] From 06cdfadd22b14c6caa05fff74aa3e7a7e9f63bdc Mon Sep 17 00:00:00 2001 From: Shruti Deshpande Date: Tue, 28 Jan 2025 14:03:17 +0530 Subject: [PATCH 173/669] OADP-4161 ImagePullPolicy Signed-off-by: Shruti Deshpande --- .../installing/installing-oadp-aws.adoc | 1 + .../installing/installing-oadp-azure.adoc | 1 + .../installing/installing-oadp-gcp.adoc | 1 + .../installing/installing-oadp-ibm-cloud.adoc | 2 + .../installing/installing-oadp-kubevirt.adoc | 1 + .../installing/installing-oadp-mcg.adoc | 1 + .../installing/installing-oadp-ocs.adoc | 1 + modules/oadp-configuring-imagepullpolicy.adoc | 65 +++++++++++++++++++ 8 files changed, 73 insertions(+) create mode 100644 modules/oadp-configuring-imagepullpolicy.adoc diff --git a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-aws.adoc b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-aws.adoc index 5999b11f0a1c..a4481f1bcab7 100644 --- a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-aws.adoc +++ b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-aws.adoc @@ -52,6 +52,7 @@ include::modules/oadp-installing-dpa-1-3.adoc[leveloffset=+1] include::modules/oadp-configuring-node-agents.adoc[leveloffset=+2] include::modules/oadp-configuring-aws-md5sum.adoc[leveloffset=+1] include::modules/oadp-configuring-client-burst-qps.adoc[leveloffset=+1] +include::modules/oadp-configuring-imagepullpolicy.adoc[leveloffset=+1] include::modules/oadp-configuring-dpa-multiple-bsl.adoc[leveloffset=+1] include::modules/oadp-enabling-csi-dpa.adoc[leveloffset=+2] include::modules/oadp-about-disable-node-agent-dpa.adoc[leveloffset=+2] diff --git a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-azure.adoc b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-azure.adoc index 899a3751c1d7..89d847a54b7e 100644 --- a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-azure.adoc +++ b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-azure.adoc @@ -39,6 +39,7 @@ include::modules/oadp-self-signed-certificate.adoc[leveloffset=+2] // include::modules/oadp-installing-dpa-1-2-and-earlier.adoc[leveloffset=+1] include::modules/oadp-installing-dpa-1-3.adoc[leveloffset=+1] include::modules/oadp-configuring-client-burst-qps.adoc[leveloffset=+1] +include::modules/oadp-configuring-imagepullpolicy.adoc[leveloffset=+1] include::modules/oadp-configuring-node-agents.adoc[leveloffset=+2] include::modules/oadp-enabling-csi-dpa.adoc[leveloffset=+2] include::modules/oadp-about-disable-node-agent-dpa.adoc[leveloffset=+2] diff --git 
a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-gcp.adoc b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-gcp.adoc index 8d39cf3d46c0..8d0b1add23b6 100644 --- a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-gcp.adoc +++ b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-gcp.adoc @@ -40,6 +40,7 @@ include::modules/oadp-self-signed-certificate.adoc[leveloffset=+2] include::modules/oadp-gcp-wif-cloud-authentication.adoc[leveloffset=+1] include::modules/oadp-installing-dpa-1-3.adoc[leveloffset=+1] include::modules/oadp-configuring-client-burst-qps.adoc[leveloffset=+1] +include::modules/oadp-configuring-imagepullpolicy.adoc[leveloffset=+1] include::modules/oadp-configuring-node-agents.adoc[leveloffset=+2] include::modules/oadp-enabling-csi-dpa.adoc[leveloffset=+2] include::modules/oadp-about-disable-node-agent-dpa.adoc[leveloffset=+2] diff --git a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-ibm-cloud.adoc b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-ibm-cloud.adoc index c56f78758c7f..e2e5348ec22c 100644 --- a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-ibm-cloud.adoc +++ b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-ibm-cloud.adoc @@ -25,6 +25,8 @@ include::modules/oadp-setting-resource-limits-and-requests.adoc[leveloffset=+1] include::modules/oadp-configuring-node-agents.adoc[leveloffset=+1] // include the module for client burst and qps config include::modules/oadp-configuring-client-burst-qps.adoc[leveloffset=+1] +// include module for image pull policy setting +include::modules/oadp-configuring-imagepullpolicy.adoc[leveloffset=+1] // include the module for configuring multiple BSL include::modules/oadp-configuring-dpa-multiple-bsl.adoc[leveloffset=+1] // include the module for disabling node agent in the DPA diff --git a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-kubevirt.adoc b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-kubevirt.adoc index 88dcb8a74a73..8fe2f70adff1 100644 --- a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-kubevirt.adoc +++ b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-kubevirt.adoc @@ -48,6 +48,7 @@ include::modules/oadp-backup-single-vm.adoc[leveloffset=+1] include::modules/oadp-restore-single-vm.adoc[leveloffset=+1] include::modules/oadp-restore-single-vm-from-multiple-vm-backup.adoc[leveloffset=+1] include::modules/oadp-configuring-client-burst-qps.adoc[leveloffset=+1] +include::modules/oadp-configuring-imagepullpolicy.adoc[leveloffset=+1] include::modules/oadp-configuring-node-agents.adoc[leveloffset=+2] include::modules/oadp-incremental-backup-support.adoc[leveloffset=+1] diff --git a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-mcg.adoc b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-mcg.adoc index 83eea3722377..1cd9da28706d 100644 --- a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-mcg.adoc +++ b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-mcg.adoc @@ -46,6 +46,7 @@ include::modules/oadp-self-signed-certificate.adoc[leveloffset=+2] // include::modules/oadp-installing-dpa-1-2-and-earlier.adoc[leveloffset=+1] 
include::modules/oadp-installing-dpa-1-3.adoc[leveloffset=+1]
 include::modules/oadp-configuring-client-burst-qps.adoc[leveloffset=+1]
+include::modules/oadp-configuring-imagepullpolicy.adoc[leveloffset=+1]
 include::modules/oadp-configuring-node-agents.adoc[leveloffset=+2]
 include::modules/oadp-enabling-csi-dpa.adoc[leveloffset=+2]
 include::modules/oadp-about-disable-node-agent-dpa.adoc[leveloffset=+2]
diff --git a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-ocs.adoc b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-ocs.adoc
index 3cf64bb4fa68..5ee61e5385d4 100644
--- a/backup_and_restore/application_backup_and_restore/installing/installing-oadp-ocs.adoc
+++ b/backup_and_restore/application_backup_and_restore/installing/installing-oadp-ocs.adoc
@@ -49,6 +49,7 @@ include::modules/oadp-self-signed-certificate.adoc[leveloffset=+2]
 // include::modules/oadp-installing-dpa-1-2-and-earlier.adoc[leveloffset=+1]
 include::modules/oadp-installing-dpa-1-3.adoc[leveloffset=+1]
 include::modules/oadp-configuring-client-burst-qps.adoc[leveloffset=+1]
+include::modules/oadp-configuring-imagepullpolicy.adoc[leveloffset=+1]
 include::modules/oadp-configuring-node-agents.adoc[leveloffset=+2]
 include::modules/oadp-creating-object-bucket-claim.adoc[leveloffset=+2]
 include::modules/oadp-enabling-csi-dpa.adoc[leveloffset=+2]
diff --git a/modules/oadp-configuring-imagepullpolicy.adoc b/modules/oadp-configuring-imagepullpolicy.adoc
new file mode 100644
index 000000000000..24a5e1f34a09
--- /dev/null
+++ b/modules/oadp-configuring-imagepullpolicy.adoc
@@ -0,0 +1,65 @@
+// Module included in the following assemblies:
+//
+// * backup_and_restore/application_backup_and_restore/installing/installing-oadp-aws.adoc
+
+
+:_mod-docs-content-type: PROCEDURE
+[id="oadp-configuring-imagepullpolicy_{context}"]
+= Overriding the imagePullPolicy setting in the DPA
+
+In {oadp-short} 1.4.0 or earlier, the Operator sets the `imagePullPolicy` field of the Velero and node agent pods to `Always` for all images.
+
+In {oadp-short} 1.4.1 or later, the Operator first checks whether each image has a `sha256` or `sha512` digest and sets the `imagePullPolicy` field accordingly:
+
+* If the image has a digest, the Operator sets `imagePullPolicy` to `IfNotPresent`.
+* If the image does not have a digest, the Operator sets `imagePullPolicy` to `Always`.
+
+You can also override the `imagePullPolicy` field by using the `spec.imagePullPolicy` field in the Data Protection Application (DPA).
+
+.Prerequisites
+
+* You have installed the {oadp-short} Operator.
+
+.Procedure
+
+* Configure the `spec.imagePullPolicy` field in the DPA as shown in the following example:
++
+.Example Data Protection Application
+[source,yaml]
+----
+apiVersion: oadp.openshift.io/v1alpha1
+kind: DataProtectionApplication
+metadata:
+  name: test-dpa
+  namespace: openshift-adp
+spec:
+  backupLocations:
+  - name: default
+    velero:
+      config:
+        insecureSkipTLSVerify: "true"
+        profile: "default"
+        region: <region>
+        s3ForcePathStyle: "true"
+        s3Url: <s3_url>
+      credential:
+        key: cloud
+        name: cloud-credentials
+      default: true
+      objectStorage:
+        bucket: <bucket_name>
+        prefix: velero
+      provider: aws
+  configuration:
+    nodeAgent:
+      enable: true
+      uploaderType: kopia
+    velero:
+      defaultPlugins:
+      - openshift
+      - aws
+      - kubevirt
+      - csi
+      imagePullPolicy: Never # <1>
+----
+<1> Specify the value for `imagePullPolicy`. In this example, the `imagePullPolicy` field is set to `Never`.
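As a rough verification, you can check the pull policy that was actually applied after the DPA reconciles. The following command is a sketch that assumes the default `openshift-adp` namespace and the standard `velero` deployment name; adjust both if your installation differs:

[source,terminal]
----
# Print the pull policy of each container in the Velero deployment
$ oc -n openshift-adp get deployment velero \
    -o jsonpath='{.spec.template.spec.containers[*].imagePullPolicy}'
----

For the `Never` override shown in the preceding example, the expected output is `Never`.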
\ No newline at end of file From 6ffab90c052952b9d130aeca6c5f628ab8f47de4 Mon Sep 17 00:00:00 2001 From: Apurva Bhide Date: Tue, 11 Feb 2025 15:57:47 +0530 Subject: [PATCH 174/669] Revert "OADP-4883: Added 1.4.3 release notes" --- .../release-notes/oadp-1-4-release-notes.adoc | 2 +- modules/oadp-1-4-3-release-notes.adoc | 26 ------------------- 2 files changed, 1 insertion(+), 27 deletions(-) delete mode 100644 modules/oadp-1-4-3-release-notes.adoc diff --git a/backup_and_restore/application_backup_and_restore/release-notes/oadp-1-4-release-notes.adoc b/backup_and_restore/application_backup_and_restore/release-notes/oadp-1-4-release-notes.adoc index b6154cafc6c7..3172471d771c 100644 --- a/backup_and_restore/application_backup_and_restore/release-notes/oadp-1-4-release-notes.adoc +++ b/backup_and_restore/application_backup_and_restore/release-notes/oadp-1-4-release-notes.adoc @@ -13,7 +13,7 @@ The release notes for {oadp-first} describe new features and enhancements, depre ==== For additional information about {oadp-short}, see link:https://access.redhat.com/articles/5456281[{oadp-first} FAQs] ==== -include::modules/oadp-1-4-3-release-notes.adoc[leveloffset=+1] + include::modules/oadp-1-4-2-release-notes.adoc[leveloffset=+1] [role="_additional-resources"] diff --git a/modules/oadp-1-4-3-release-notes.adoc b/modules/oadp-1-4-3-release-notes.adoc deleted file mode 100644 index cdf4677725b9..000000000000 --- a/modules/oadp-1-4-3-release-notes.adoc +++ /dev/null @@ -1,26 +0,0 @@ -// Module included in the following assemblies: -// -// * backup_and_restore/oadp-1-4-release-notes.adoc - -:_mod-docs-content-type: REFERENCE - -[id="oadp-1-4-3-release-notes_{context}"] -= {oadp-short} 1.4.3 release notes - -The {oadp-first} 1.4.3 release notes lists the following new feature. - -[id="new-features-1-4-3_{context}"] -== New features - -.Notable changes in the `kubevirt` velero plugin in version 0.7.1 - -With this release, the `kubevirt` velero plugin has been updated to version 0.7.1. Notable improvements include the following bug fix and new features: - -* Virtual machine instances (VMIs) are no longer ignored from backup when the owner VM is excluded. -* Object graphs now include all extra objects during backup and restore operations. -* Optionally generated labels are now added to new firmware Universally Unique Identifiers (UUIDs) during restore operations. -* Switching VM run strategies during restore operations is now possible. -* Clearing a MAC address by label is now supported. -* The restore-specific checks during the backup operation are now skipped. -* The `VirtualMachineClusterInstancetype` and `VirtualMachineClusterPreference` custom resource definitions (CRDs) are now supported. -//link:https://issues.redhat.com/browse/OADP-5551[OADP-5551] \ No newline at end of file From f948bcc6d294890e713563008427a93dd626a990 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Mon, 27 Jan 2025 10:19:24 +0000 Subject: [PATCH 175/669] OCPBUGS-46045 Symmetric routing with MetalLB improvements + missing rule --- modules/nw-egress-service-cr.adoc | 2 +- modules/nw-egress-service-ovn.adoc | 2 +- modules/nw-metallb-configure-return-traffic-proc.adoc | 10 ++++++---- 3 files changed, 8 insertions(+), 6 deletions(-) diff --git a/modules/nw-egress-service-cr.adoc b/modules/nw-egress-service-cr.adoc index d2e2f956b50a..7e2d27f5e781 100644 --- a/modules/nw-egress-service-cr.adoc +++ b/modules/nw-egress-service-cr.adoc @@ -26,7 +26,7 @@ spec: <2> Specify the namespace for the egress service. 
The namespace for the `EgressService` must match the namespace of the load-balancer service that you want to modify. The egress service is namespace-scoped.
 <3> Specify the source IP address of egress traffic for pods behind a service. Valid values are `LoadBalancerIP` or `Network`. Use the `LoadBalancerIP` value to assign the `LoadBalancer` service ingress IP address as the source IP address for egress traffic. Specify `Network` to assign the network interface IP address as the source IP address for egress traffic.
 <4> Optional: If you use the `LoadBalancerIP` value for the `sourceIPBy` specification, a single node handles the `LoadBalancer` service traffic. Use the `nodeSelector` field to limit which node can be assigned this task. When a node is selected to handle the service traffic, OVN-Kubernetes labels the node in the following format: `egress-service.k8s.ovn.org/<svc_namespace>-<svc_name>: ""`. When the `nodeSelector` field is not specified, any node can manage the `LoadBalancer` service traffic.
-<5> Optional: Specify the routing table for egress traffic. If you do not include the `network` specification, the egress service uses the default host network.
+<5> Optional: Specify the routing table ID for egress traffic. Ensure that the value matches the `route-table-id` ID defined in the `NodeNetworkConfigurationPolicy` resource. If you do not include the `network` specification, the egress service uses the default host network.
 
 .Example egress service specification
 [source,yaml]
 ----
diff --git a/modules/nw-egress-service-ovn.adoc b/modules/nw-egress-service-ovn.adoc
index cd51a56189c4..fa004f5d5615 100644
--- a/modules/nw-egress-service-ovn.adoc
+++ b/modules/nw-egress-service-ovn.adoc
@@ -105,7 +105,7 @@ metadata:
 spec:
   ipAddressPools:
   - example-pool
-  nodeSelector:
+  nodeSelectors:
   - matchLabels:
       egress-service.k8s.ovn.org/example-namespace-example-service: "" <1>
 ----
diff --git a/modules/nw-metallb-configure-return-traffic-proc.adoc b/modules/nw-metallb-configure-return-traffic-proc.adoc
index ac38d7bff345..daf0df73e8c9 100644
--- a/modules/nw-metallb-configure-return-traffic-proc.adoc
+++ b/modules/nw-metallb-configure-return-traffic-proc.adoc
@@ -73,6 +73,9 @@ spec:
   - ip-to: 10.132.0.0/14
     priority: 998
     route-table: 254
+  - ip-to: 169.254.0.0/17
+    priority: 998
+    route-table: 254
 ----
 <1> The name of the policy.
 <2> This example applies the policy to all nodes with the label `vrf:true`.
@@ -82,7 +85,7 @@ spec:
 <6> The name of the route table ID for the VRF.
 <7> The IPv4 address of the interface associated with the VRF.
 <8> Defines the configuration for network routes. The `next-hop-address` field defines the IP address of the next hop for the route. The `next-hop-interface` field defines the outgoing interface for the route. In this example, the VRF routing table is `2`, which references the ID that you define in the `EgressService` CR.
-<9> Defines additional route rules. The `ip-to` fields must match the `Cluster Network` CIDR and `Service Network` CIDR. You can view the values for these CIDR address specifications by running the following command: `oc describe network.config/cluster`.
+<9> Defines additional route rules. The `ip-to` fields must match the `Cluster Network` CIDR, `Service Network` CIDR, and `Internal Masquerade` subnet CIDR. You can view the values for these CIDR address specifications by running the following command: `oc describe network.operator/cluster`.
 <10> The main routing table that the Linux kernel uses when calculating routes has the ID `254`.
.. Apply the policy by running the following command:
@@ -193,7 +196,7 @@ spec:
 <2> Specify the namespace for the egress service. The namespace for the `EgressService` must match the namespace of the load-balancer service that you want to modify. The egress service is namespace-scoped.
 <3> This example assigns the `LoadBalancer` service ingress IP address as the source IP address for egress traffic.
 <4> If you specify `LoadBalancerIP` for the `sourceIPBy` specification, a single node handles the `LoadBalancer` service traffic. In this example, only a node with the label `vrf: "true"` can handle the service traffic. If you do not specify a node, OVN-Kubernetes selects a worker node to handle the service traffic. When a node is selected, OVN-Kubernetes labels the node in the following format: `egress-service.k8s.ovn.org/<svc_namespace>-<svc_name>: ""`.
-<5> Specify the routing table for egress traffic.
+<5> Specify the routing table ID for egress traffic. Ensure that the value matches the `route-table-id` ID defined in the `NodeNetworkConfigurationPolicy` resource, for example, `route-table-id: 2`.
 
 .. Apply the configuration for the egress service by running the following command:
 +
@@ -212,5 +215,4 @@ $ curl <external_ip_address>:<port_number> <1>
 ----
 <1> Update the external IP address and port number to suit your application endpoint.
 
-. Optional: If you assigned the `LoadBalancer` service ingress IP address as the source IP address for egress traffic, verify this configuration by using tools such as `tcpdump` to analyze packets received at the external client.
-
+. Optional: If you assigned the `LoadBalancer` service ingress IP address as the source IP address for egress traffic, verify this configuration by using tools such as `tcpdump` to analyze packets received at the external client.
\ No newline at end of file

From 42d0fb3ebccd190dfc0858e8ef0c8e6c6b8b9459 Mon Sep 17 00:00:00 2001
From: JoeAldinger
Date: Tue, 4 Feb 2025 16:59:13 -0500
Subject: [PATCH 176/669] OCPBUGS-42798:updates ip forwarding CNO

---
 modules/nw-operator-cr.adoc                                   | 4 ++++
 networking/networking_operators/cluster-network-operator.adoc | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/modules/nw-operator-cr.adoc b/modules/nw-operator-cr.adoc
index a3e1dc25e6d5..c9fb0c959bd7 100644
--- a/modules/nw-operator-cr.adoc
+++ b/modules/nw-operator-cr.adoc
@@ -311,6 +311,10 @@
 |`ipForwarding`
 |`object`
 |You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the `ipForwarding` specification in the `Network` resource. Specify `Restricted` to only allow IP forwarding for Kubernetes related traffic. Specify `Global` to allow forwarding of all IP traffic. For new installations, the default is `Restricted`. For updates to {product-title} 4.14 or later, the default is `Global`.
+[NOTE]
+====
+The default value of `Restricted` causes IP forwarding for all non-Kubernetes-related traffic to be dropped.
+==== |`ipv4` |`object` diff --git a/networking/networking_operators/cluster-network-operator.adoc b/networking/networking_operators/cluster-network-operator.adoc index 23fde6f94163..6db590236d2d 100644 --- a/networking/networking_operators/cluster-network-operator.adoc +++ b/networking/networking_operators/cluster-network-operator.adoc @@ -25,3 +25,5 @@ include::modules/nw-operator-cr.adoc[leveloffset=+1] == Additional resources * xref:../../rest_api/operator_apis/network-operator-openshift-io-v1.adoc#network-operator-openshift-io-v1[`Network` API in the `operator.openshift.io` API group] * xref:../../networking/configuring-cluster-network-range.adoc#nw-cluster-network-range-edit_configuring-cluster-network-range[Expanding the cluster network IP address range] + +* link:https://access.redhat.com/solutions/6969174[How to configure OVN to use kernel routing table] \ No newline at end of file From f9fa7c402c1c07a702fdfb40bffa82c739941d2d Mon Sep 17 00:00:00 2001 From: Ronan Hennessy Date: Fri, 31 Jan 2025 14:10:53 +0000 Subject: [PATCH 177/669] TELCODOCS-2178: Refactoring TOC to accomodate the problem of 4th level TOCs that exists on docs.redhat.com --- _topic_maps/_topic_map.yml | 24 +++++++---------- .../ztp-advanced-policygenerator-config.adoc | 4 +-- .../ztp-advanced-policy-config.adoc | 4 +-- .../installing-openstack-nfv-preparing.adoc | 2 +- installing/overview/installing-preparing.adoc | 2 +- .../using-dpdk-and-rdma.adoc | 10 +++---- operators/operator-reference.adoc | 2 +- ...f-debugging-low-latency-tuning-status.adoc | 26 +++++++++++++++++++ .../cnf-numa-aware-scheduling.adoc | 2 +- ...g-platform-verification-latency-tests.adoc | 2 +- ...nf-provisioning-low-latency-workloads.adoc | 10 +++---- ...g-low-latency-nodes-with-perf-profile.adoc | 10 +++---- .../cnf-understanding-low-latency.adoc | 2 +- .../enabling-workload-partitioning.adoc | 2 +- scalability_and_performance/index.adoc | 2 +- .../low_latency_tuning/_attributes | 1 - ...f-debugging-low-latency-tuning-status.adoc | 26 ------------------- .../low_latency_tuning/images | 1 - .../low_latency_tuning/modules | 1 - .../low_latency_tuning/snippets | 1 - .../telco-core-ref-design-components.adoc | 6 ++--- .../ran/telco-ran-ref-du-components.adoc | 4 +-- ...onfiguring-cluster-realtime-workloads.adoc | 4 +-- 23 files changed, 70 insertions(+), 78 deletions(-) create mode 100644 scalability_and_performance/cnf-debugging-low-latency-tuning-status.adoc rename scalability_and_performance/{low_latency_tuning => }/cnf-performing-platform-verification-latency-tests.adoc (88%) rename scalability_and_performance/{low_latency_tuning => }/cnf-provisioning-low-latency-workloads.adoc (67%) rename scalability_and_performance/{low_latency_tuning => }/cnf-tuning-low-latency-nodes-with-perf-profile.adoc (80%) rename scalability_and_performance/{low_latency_tuning => }/cnf-understanding-low-latency.adoc (79%) delete mode 120000 scalability_and_performance/low_latency_tuning/_attributes delete mode 100644 scalability_and_performance/low_latency_tuning/cnf-debugging-low-latency-tuning-status.adoc delete mode 120000 scalability_and_performance/low_latency_tuning/images delete mode 120000 scalability_and_performance/low_latency_tuning/modules delete mode 120000 scalability_and_performance/low_latency_tuning/snippets diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 8e8dab6abe4f..ee726cd01f63 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -3337,20 +3337,16 @@ Topics: - Name: What huge pages do and how 
they are consumed by apps File: what-huge-pages-do-and-how-they-are-consumed-by-apps Distros: openshift-origin,openshift-enterprise -- Name: Low latency tuning - Dir: low_latency_tuning - Distros: openshift-origin,openshift-enterprise - Topics: - - Name: Understanding low latency - File: cnf-understanding-low-latency - - Name: Tuning nodes for low latency with the performance profile - File: cnf-tuning-low-latency-nodes-with-perf-profile - - Name: Provisioning real-time and low latency workloads - File: cnf-provisioning-low-latency-workloads - - Name: Debugging low latency tuning - File: cnf-debugging-low-latency-tuning-status - - Name: Performing latency tests for platform verification - File: cnf-performing-platform-verification-latency-tests +- Name: Understanding low latency + File: cnf-understanding-low-latency +- Name: Tuning nodes for low latency with the performance profile + File: cnf-tuning-low-latency-nodes-with-perf-profile +- Name: Provisioning real-time and low latency workloads + File: cnf-provisioning-low-latency-workloads +- Name: Debugging low latency tuning + File: cnf-debugging-low-latency-tuning-status +- Name: Performing latency tests for platform verification + File: cnf-performing-platform-verification-latency-tests - Name: Improving cluster stability in high latency environments using worker latency profiles File: scaling-worker-latency-profiles Distros: openshift-origin,openshift-enterprise diff --git a/edge_computing/policygenerator_for_ztp/ztp-advanced-policygenerator-config.adoc b/edge_computing/policygenerator_for_ztp/ztp-advanced-policygenerator-config.adoc index 9250769a5709..8fc5e0bbd1d0 100644 --- a/edge_computing/policygenerator_for_ztp/ztp-advanced-policygenerator-config.adoc +++ b/edge_computing/policygenerator_for_ztp/ztp-advanced-policygenerator-config.adoc @@ -45,7 +45,7 @@ include::modules/ztp-using-pgt-to-configure-power-states.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#configuring-workload-hints_cnf-low-latency-perf-profile[Configuring node power consumption and realtime processing with workload hints] +* xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#configuring-workload-hints_cnf-low-latency-perf-profile[Configuring node power consumption and realtime processing with workload hints] include::modules/ztp-using-pgt-to-configure-performance-mode.adoc[leveloffset=+2] @@ -56,7 +56,7 @@ include::modules/ztp-using-pgt-to-configure-power-saving-mode.adoc[leveloffset=+ [role="_additional-resources"] .Additional resources -* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes that run colocated high and low priority workloads] +* xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes that run colocated high and low priority workloads] * xref:../../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-du-configuring-host-firmware-requirements_sno-configure-for-vdu[Configuring host firmware for low latency and high performance] diff --git a/edge_computing/policygentemplate_for_ztp/ztp-advanced-policy-config.adoc 
b/edge_computing/policygentemplate_for_ztp/ztp-advanced-policy-config.adoc index eda85b1db603..d0eaeb95922d 100644 --- a/edge_computing/policygentemplate_for_ztp/ztp-advanced-policy-config.adoc +++ b/edge_computing/policygentemplate_for_ztp/ztp-advanced-policy-config.adoc @@ -41,7 +41,7 @@ include::modules/ztp-using-pgt-to-configure-power-states.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#configuring-workload-hints_cnf-low-latency-perf-profile[Configuring node power consumption and realtime processing with workload hints] +* xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#configuring-workload-hints_cnf-low-latency-perf-profile[Configuring node power consumption and realtime processing with workload hints] include::modules/ztp-using-pgt-to-configure-performance-mode.adoc[leveloffset=+2] @@ -54,7 +54,7 @@ include::modules/ztp-using-pgt-to-configure-power-saving-mode.adoc[leveloffset=+ [role="_additional-resources"] .Additional resources -* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes that run colocated high and low priority workloads] +* xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes that run colocated high and low priority workloads] * xref:../../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-du-configuring-host-firmware-requirements_sno-configure-for-vdu[Configuring host firmware for low latency and high performance] diff --git a/installing/installing_openstack/installing-openstack-nfv-preparing.adoc b/installing/installing_openstack/installing-openstack-nfv-preparing.adoc index a30dd5ae1878..e57c44736c74 100644 --- a/installing/installing_openstack/installing-openstack-nfv-preparing.adoc +++ b/installing/installing_openstack/installing-openstack-nfv-preparing.adoc @@ -45,5 +45,5 @@ After you perform preinstallation tasks, install your cluster by following the m * Consult the following references after you deploy your cluster to improve its performance: ** xref:../../networking/hardware_networks/using-dpdk-and-rdma.adoc#nw-openstack-ovs-dpdk-testpmd-pod_using-dpdk-and-rdma[A test pod template for clusters that use OVS-DPDK on OpenStack]. ** xref:../../networking/hardware_networks/add-pod.adoc#nw-openstack-sr-iov-testpmd-pod_add-pod[A test pod template for clusters that use SR-IOV on OpenStack]. -** xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#installation-openstack-ovs-dpdk-performance-profile_cnf-low-latency-perf-profile[A performance profile template for clusters that use OVS-DPDK on OpenStack] +** xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#installation-openstack-ovs-dpdk-performance-profile_cnf-low-latency-perf-profile[A performance profile template for clusters that use OVS-DPDK on OpenStack] . 
diff --git a/installing/overview/installing-preparing.adoc b/installing/overview/installing-preparing.adoc index 46aa5551e198..faf36c8dfdbe 100644 --- a/installing/overview/installing-preparing.adoc +++ b/installing/overview/installing-preparing.adoc @@ -115,7 +115,7 @@ For a production cluster, you must configure the following integrations: [id="installing-preparing-cluster-for-workloads"] == Preparing your cluster for workloads -Depending on your workload needs, you might need to take extra steps before you begin deploying applications. For example, after you prepare infrastructure to support your application xref:../../cicd/builds/build-strategies.adoc#build-strategies[build strategy], you might need to make provisions for xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-low-latency-perf-profile[low-latency] workloads or to xref:../../nodes/pods/nodes-pods-secrets.adoc#nodes-pods-secrets[protect sensitive workloads]. You can also configure xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[monitoring] for application workloads. +Depending on your workload needs, you might need to take extra steps before you begin deploying applications. For example, after you prepare infrastructure to support your application xref:../../cicd/builds/build-strategies.adoc#build-strategies[build strategy], you might need to make provisions for xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-low-latency-perf-profile[low-latency] workloads or to xref:../../nodes/pods/nodes-pods-secrets.adoc#nodes-pods-secrets[protect sensitive workloads]. You can also configure xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[monitoring] for application workloads. If you plan to run xref:../../windows_containers/enabling-windows-container-workloads.adoc#enabling-windows-container-workloads[Windows workloads], you must enable xref:../../networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc#configuring-hybrid-networking[hybrid networking with OVN-Kubernetes] during the installation process; hybrid networking cannot be enabled after your cluster is installed. 
[id="supported-installation-methods-for-different-platforms"] diff --git a/networking/hardware_networks/using-dpdk-and-rdma.adoc b/networking/hardware_networks/using-dpdk-and-rdma.adoc index a2e9e8614ffb..9d85bf1e5256 100644 --- a/networking/hardware_networks/using-dpdk-and-rdma.adoc +++ b/networking/hardware_networks/using-dpdk-and-rdma.adoc @@ -24,7 +24,7 @@ include::modules/nw-running-dpdk-rootless-tap.adoc[leveloffset=+1] // I can't seem to find this in 4.16 or 4.15 * xr3f:../../networking/multiple_networks/configuring-additional-network.adoc#nw-multus-enable-container_use_devices_configuring-additional-network[Enabling the container_use_devices boolean] -* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile] +* xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile] * xref:../../networking/hardware_networks/configuring-sriov-device.adoc#configuring-sriov-device[Configuring an SR-IOV network device] @@ -59,17 +59,17 @@ include::modules/nw-openstack-hw-offload-testpmd-pod.adoc[leveloffset=+1] * xref:../../networking/hardware_networks/about-sriov.adoc#supported-devices_about-sriov[Supported devices] -* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile] +* xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile] -* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#adjusting-nic-queues-with-the-performance-profile_cnf-low-latency-perf-profile[Adjusting the NIC queues with the performance profile] +* xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#adjusting-nic-queues-with-the-performance-profile_cnf-low-latency-perf-profile[Adjusting the NIC queues with the performance profile] -* xref:../../scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.adoc#cnf-provisioning-low-latency-workloads[Provisioning real-time and low latency workloads] +* xref:../../scalability_and_performance/cnf-provisioning-low-latency-workloads.adoc#cnf-provisioning-low-latency-workloads[Provisioning real-time and low latency workloads] * xref:../../networking/networking_operators/sr-iov-operator/installing-sriov-operator.adoc#installing-sriov-operator[Installing the SR-IOV Network Operator] * xref:../../networking/hardware_networks/configuring-sriov-device.adoc#nw-sriov-networknodepolicy-object_configuring-sriov-device[Configuring an SR-IOV network device] * xref:../../networking/multiple_networks/secondary_networks/configuring-ip-secondary-nwt.adoc#nw-multus-whereabouts_configuring-additional-network[Dynamic IP address assignment configuration with Whereabouts] -* xref:../../scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.adoc#disabling-interrupt-processing-for-individual-pods_cnf-provisioning-low-latency[Disabling interrupt processing for individual pods] +* xref:../../scalability_and_performance/cnf-provisioning-low-latency-workloads.adoc#disabling-interrupt-processing-for-individual-pods_cnf-provisioning-low-latency[Disabling interrupt processing for individual pods] * 
xref:../../networking/hardware_networks/configuring-sriov-net-attach.adoc#configuring-sriov-net-attach[Configuring an SR-IOV Ethernet network attachment] diff --git a/operators/operator-reference.adoc b/operators/operator-reference.adoc index 7511a3b07645..2072519460ee 100644 --- a/operators/operator-reference.adoc +++ b/operators/operator-reference.adoc @@ -128,7 +128,7 @@ include::modules/node-tuning-operator.adoc[leveloffset=+1] [role="_additional-resources"] [id="cluster-operators-ref-nto-addtl-resources"] === Additional resources -* xref:../scalability_and_performance/low_latency_tuning/cnf-understanding-low-latency.adoc#cnf-understanding-low-latency_cnf-understanding-low-latency[About low latency] +* xref:../scalability_and_performance/cnf-understanding-low-latency.adoc#cnf-understanding-low-latency_cnf-understanding-low-latency[About low latency] include::modules/openshift-apiserver-operator.adoc[leveloffset=+1] diff --git a/scalability_and_performance/cnf-debugging-low-latency-tuning-status.adoc b/scalability_and_performance/cnf-debugging-low-latency-tuning-status.adoc new file mode 100644 index 000000000000..09248f04a138 --- /dev/null +++ b/scalability_and_performance/cnf-debugging-low-latency-tuning-status.adoc @@ -0,0 +1,26 @@ +:_mod-docs-content-type: ASSEMBLY +[id="cnf-debugging-low-latency-tuning-status"] += Debugging low latency node tuning status +include::_attributes/common-attributes.adoc[] +:context: cnf-debugging-low-latency + +toc::[] + +Use the `PerformanceProfile` custom resource (CR) status fields for reporting tuning status and debugging latency issues in the cluster node. + +include::modules/cnf-debugging-low-latency-cnf-tuning-status.adoc[leveloffset=+1] + +include::modules/cnf-collecting-low-latency-tuning-debugging-data-for-red-hat-support.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* xref:../support/gathering-cluster-data.adoc#gathering-cluster-data[Gathering data about your cluster with the `must-gather` tool] + +* xref:../nodes/nodes/nodes-nodes-managing.adoc#nodes-nodes-managing[Managing nodes with MachineConfig and KubeletConfig CRs] + +* xref:../scalability_and_performance/using-node-tuning-operator.adoc#using-node-tuning-operator[Using the Node Tuning Operator] + +* xref:../scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed-by-apps.adoc#configuring-huge-pages_huge-pages[Configuring huge pages at boot time] + +* xref:../scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed-by-apps.adoc#how-huge-pages-are-consumed-by-apps_huge-pages[How huge pages are consumed by apps] diff --git a/scalability_and_performance/cnf-numa-aware-scheduling.adoc b/scalability_and_performance/cnf-numa-aware-scheduling.adoc index 7561f35e6e65..b59ffd6d30dc 100644 --- a/scalability_and_performance/cnf-numa-aware-scheduling.adoc +++ b/scalability_and_performance/cnf-numa-aware-scheduling.adoc @@ -45,7 +45,7 @@ include::modules/cnf-configuring-single-numa-policy.adoc[leveloffset=+2] * xref:../disconnected/updating/disconnected-update.adoc#images-configuration-registry-mirror-configuring_updating-disconnected-cluster[Configuring image registry repository mirroring] -* xref:../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-about-the-profile-creator-tool_cnf-low-latency-perf-profile[About the Performance Profile Creator] +* 
xref:../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-about-the-profile-creator-tool_cnf-low-latency-perf-profile[About the Performance Profile Creator] include::modules/cnf-sample-single-numa-policy-from-pp.adoc[leveloffset=+2] diff --git a/scalability_and_performance/low_latency_tuning/cnf-performing-platform-verification-latency-tests.adoc b/scalability_and_performance/cnf-performing-platform-verification-latency-tests.adoc similarity index 88% rename from scalability_and_performance/low_latency_tuning/cnf-performing-platform-verification-latency-tests.adoc rename to scalability_and_performance/cnf-performing-platform-verification-latency-tests.adoc index 33dcc00b372b..dc010048b9a8 100644 --- a/scalability_and_performance/low_latency_tuning/cnf-performing-platform-verification-latency-tests.adoc +++ b/scalability_and_performance/cnf-performing-platform-verification-latency-tests.adoc @@ -22,7 +22,7 @@ Your cluster must meet the following requirements before you can run the latency [role="_additional-resources"] .Additional resources -* xref:../../scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.adoc#cnf-scheduling-workload-onto-worker-with-real-time-capabilities_cnf-provisioning-low-latency[Scheduling a workload onto a worker with real-time capabilities] +* xref:../scalability_and_performance/cnf-provisioning-low-latency-workloads.adoc#cnf-scheduling-workload-onto-worker-with-real-time-capabilities_cnf-provisioning-low-latency[Scheduling a workload onto a worker with real-time capabilities] include::modules/cnf-measuring-latency.adoc[leveloffset=+1] diff --git a/scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.adoc b/scalability_and_performance/cnf-provisioning-low-latency-workloads.adoc similarity index 67% rename from scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.adoc rename to scalability_and_performance/cnf-provisioning-low-latency-workloads.adoc index c213461c650f..b13a680b421e 100644 --- a/scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.adoc +++ b/scalability_and_performance/cnf-provisioning-low-latency-workloads.adoc @@ -20,14 +20,14 @@ When writing your applications, follow the general recommendations described in [role="_additional-resources"] .Additional resources -* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile] +* xref:../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile] include::modules/cnf-scheduling-workload-onto-worker-with-real-time-capabilities.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../nodes/scheduling/nodes-scheduler-node-selectors.adoc#nodes-pods-node-selectors[Placing pods on specific nodes using node selectors] +* xref:../nodes/scheduling/nodes-scheduler-node-selectors.adoc#nodes-pods-node-selectors[Placing pods on specific nodes using node selectors] * link:https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node[Assigning pods to nodes] @@ -40,18 +40,18 @@ include::modules/cnf-configuring-high-priority-workload-pods.adoc[leveloffset=+1 [role="_additional-resources"] .Additional resources 
-xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes that run colocated high and low priority workloads] +xref:../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes that run colocated high and low priority workloads] include::modules/cnf-disabling-cpu-cfs-quota.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../edge_computing/ztp-vdu-validating-cluster-tuning.adoc#ztp-du-firmware-config-reference_vdu-config-ref[Recommended firmware configuration for vDU cluster hosts] +* xref:../edge_computing/ztp-vdu-validating-cluster-tuning.adoc#ztp-du-firmware-config-reference_vdu-config-ref[Recommended firmware configuration for vDU cluster hosts] include::modules/cnf-disabling-interrupt-processing-for-individual-pods.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile#managing-device-interrupt-processing-for-guaranteed-pod-isolated-cpus_cnf-low-latency-perf-profile[Managing device interrupt processing for guaranteed pod isolated CPUs] +* xref:../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile#managing-device-interrupt-processing-for-guaranteed-pod-isolated-cpus_cnf-low-latency-perf-profile[Managing device interrupt processing for guaranteed pod isolated CPUs] diff --git a/scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc b/scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc similarity index 80% rename from scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc rename to scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc index 719d1b382a3b..cf390f5f678e 100644 --- a/scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc +++ b/scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc @@ -45,7 +45,7 @@ include::modules/cnf-gathering-data-about-cluster-using-must-gather.adoc[levelof .Additional resources * For more information about the `must-gather` tool, -see xref:../../support/gathering-cluster-data.adoc#nodes-nodes-managing[Gathering data about your cluster]. +see xref:../support/gathering-cluster-data.adoc#nodes-nodes-managing[Gathering data about your cluster]. 
include::modules/cnf-running-the-performance-creator-profile.adoc[leveloffset=+2] @@ -73,11 +73,11 @@ include::modules/cnf-configuring-power-saving-for-nodes.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-about-the-profile-creator-tool_cnf-low-latency-perf-profile[About the Performance Profile Creator] +* xref:../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-about-the-profile-creator-tool_cnf-low-latency-perf-profile[About the Performance Profile Creator] -* xref:../../scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.adoc#cnf-configuring-high-priority-workload-pods_cnf-provisioning-low-latency[Disabling power saving mode for high priority pods] +* xref:../scalability_and_performance/cnf-provisioning-low-latency-workloads.adoc#cnf-configuring-high-priority-workload-pods_cnf-provisioning-low-latency[Disabling power saving mode for high priority pods] -* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#managing-device-interrupt-processing-for-guaranteed-pod-isolated-cpus_cnf-low-latency-perf-profile[Managing device interrupt processing for guaranteed pod isolated CPUs] +* xref:../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#managing-device-interrupt-processing-for-guaranteed-pod-isolated-cpus_cnf-low-latency-perf-profile[Managing device interrupt processing for guaranteed pod isolated CPUs] include::modules/cnf-cpu-infra-container.adoc[leveloffset=+1] @@ -104,7 +104,7 @@ include::modules/cnf-adjusting-nic-queues-with-the-performance-profile.adoc[leve [role="_additional-resources"] .Additional resources -* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile]. +* xref:../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile]. 
include::modules/cnf-verifying-queue-status.adoc[leveloffset=+2] diff --git a/scalability_and_performance/low_latency_tuning/cnf-understanding-low-latency.adoc b/scalability_and_performance/cnf-understanding-low-latency.adoc similarity index 79% rename from scalability_and_performance/low_latency_tuning/cnf-understanding-low-latency.adoc rename to scalability_and_performance/cnf-understanding-low-latency.adoc index e528aee325b7..1b119359a749 100644 --- a/scalability_and_performance/low_latency_tuning/cnf-understanding-low-latency.adoc +++ b/scalability_and_performance/cnf-understanding-low-latency.adoc @@ -17,4 +17,4 @@ include::modules/cnf-about-hyperthreading-for-low-latency-and-real-time-applicat [role="_additional-resources"] .Additional resources -* xref:../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-hyperthreading-for-a-cluster_cnf-low-latency-perf-profile[Configuring Hyper-Threading for a cluster] +* xref:../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-hyperthreading-for-a-cluster_cnf-low-latency-perf-profile[Configuring Hyper-Threading for a cluster] diff --git a/scalability_and_performance/enabling-workload-partitioning.adoc b/scalability_and_performance/enabling-workload-partitioning.adoc index 02f8fe590f10..fd0302823bf6 100644 --- a/scalability_and_performance/enabling-workload-partitioning.adoc +++ b/scalability_and_performance/enabling-workload-partitioning.adoc @@ -30,7 +30,7 @@ include::modules/create-perf-profile-workload-partitioning.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-about-the-profile-creator-tool_cnf-low-latency-perf-profile[About the Performance Profile Creator] +* xref:../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-about-the-profile-creator-tool_cnf-low-latency-perf-profile[About the Performance Profile Creator] == Sample performance profile configuration diff --git a/scalability_and_performance/index.adoc b/scalability_and_performance/index.adoc index 2336cb384af6..c246ab96598a 100644 --- a/scalability_and_performance/index.adoc +++ b/scalability_and_performance/index.adoc @@ -50,7 +50,7 @@ xref:../scalability_and_performance/managing-bare-metal-hosts.adoc#managing-bare xref:../scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed-by-apps.adoc#what-huge-pages-do-and-how-they-are-consumed[What are huge pages and how are they used by apps] -xref:../scalability_and_performance/low_latency_tuning/cnf-understanding-low-latency.adoc#cnf-understanding-low-latency[Low latency tuning for improving cluster stability and partitioning workload] +xref:../scalability_and_performance/cnf-understanding-low-latency.adoc#cnf-understanding-low-latency[Low latency tuning for improving cluster stability and partitioning workload] xref:../scalability_and_performance/scaling-worker-latency-profiles.adoc#scaling-worker-latency-profiles[Improving cluster stability in high latency environments using worker latency profiles] diff --git a/scalability_and_performance/low_latency_tuning/_attributes b/scalability_and_performance/low_latency_tuning/_attributes deleted file mode 120000 index 20cc1dcb77bf..000000000000 --- a/scalability_and_performance/low_latency_tuning/_attributes +++ /dev/null @@ -1 +0,0 @@ -../../_attributes/ \ No newline at end of file diff --git 
a/scalability_and_performance/low_latency_tuning/cnf-debugging-low-latency-tuning-status.adoc b/scalability_and_performance/low_latency_tuning/cnf-debugging-low-latency-tuning-status.adoc deleted file mode 100644 index 297cfd533f77..000000000000 --- a/scalability_and_performance/low_latency_tuning/cnf-debugging-low-latency-tuning-status.adoc +++ /dev/null @@ -1,26 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="cnf-debugging-low-latency-tuning-status"] -= Debugging low latency node tuning status -include::_attributes/common-attributes.adoc[] -:context: cnf-debugging-low-latency - -toc::[] - -Use the `PerformanceProfile` custom resource (CR) status fields for reporting tuning status and debugging latency issues in the cluster node. - -include::modules/cnf-debugging-low-latency-cnf-tuning-status.adoc[leveloffset=+1] - -include::modules/cnf-collecting-low-latency-tuning-debugging-data-for-red-hat-support.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../support/gathering-cluster-data.adoc#gathering-cluster-data[Gathering data about your cluster with the `must-gather` tool] - -* xref:../../nodes/nodes/nodes-nodes-managing.adoc#nodes-nodes-managing[Managing nodes with MachineConfig and KubeletConfig CRs] - -* xref:../../scalability_and_performance/using-node-tuning-operator.adoc#using-node-tuning-operator[Using the Node Tuning Operator] - -* xref:../../scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed-by-apps.adoc#configuring-huge-pages_huge-pages[Configuring huge pages at boot time] - -* xref:../../scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed-by-apps.adoc#how-huge-pages-are-consumed-by-apps_huge-pages[How huge pages are consumed by apps] diff --git a/scalability_and_performance/low_latency_tuning/images b/scalability_and_performance/low_latency_tuning/images deleted file mode 120000 index 847b03ed0541..000000000000 --- a/scalability_and_performance/low_latency_tuning/images +++ /dev/null @@ -1 +0,0 @@ -../../images/ \ No newline at end of file diff --git a/scalability_and_performance/low_latency_tuning/modules b/scalability_and_performance/low_latency_tuning/modules deleted file mode 120000 index 36719b9de743..000000000000 --- a/scalability_and_performance/low_latency_tuning/modules +++ /dev/null @@ -1 +0,0 @@ -../../modules/ \ No newline at end of file diff --git a/scalability_and_performance/low_latency_tuning/snippets b/scalability_and_performance/low_latency_tuning/snippets deleted file mode 120000 index 5a3f5add140e..000000000000 --- a/scalability_and_performance/low_latency_tuning/snippets +++ /dev/null @@ -1 +0,0 @@ -../../snippets/ \ No newline at end of file diff --git a/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc b/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc index 7583c18ed643..8108423725e4 100644 --- a/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc +++ b/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc @@ -14,7 +14,7 @@ include::modules/telco-core-cpu-partitioning-performance-tune.adoc[leveloffset=+ [role="_additional-resources"] .Additional resources -* xref:../../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile] +* 
xref:../../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile] * xref:../../../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-du-configuring-host-firmware-requirements_sno-configure-for-vdu[Configuring host firmware for low latency and high performance] @@ -78,9 +78,9 @@ include::modules/telco-core-power-management.adoc[leveloffset=+1] * xref:../../../rest_api/node_apis/performanceprofile-performance-openshift-io-v2.adoc#spec-workloadhints[Performance Profile] -* xref:../../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes] +* xref:../../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes] -* xref:../../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes that run colocated high and low priority workloads] +* xref:../../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes that run colocated high and low priority workloads] include::modules/telco-core-storage.adoc[leveloffset=+1] diff --git a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc b/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc index 8249fb024704..49126d9665f3 100644 --- a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc +++ b/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc @@ -18,14 +18,14 @@ include::modules/telco-ran-bios-tuning.adoc[leveloffset=+1] * xref:../../../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-du-configuring-host-firmware-requirements_sno-configure-for-vdu[Configuring host firmware for low latency and high performance] -* xref:../../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile] +* xref:../../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile] include::modules/telco-ran-node-tuning-operator.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#about_irq_affinity_setting_cnf-low-latency-perf-profile[Finding the effective IRQ affinity setting for a node] +* xref:../../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#about_irq_affinity_setting_cnf-low-latency-perf-profile[Finding the effective IRQ affinity setting for a node] include::modules/telco-ran-ptp-operator.adoc[leveloffset=+1] diff --git a/virt/managing_vms/advanced_vm_management/virt-configuring-cluster-realtime-workloads.adoc b/virt/managing_vms/advanced_vm_management/virt-configuring-cluster-realtime-workloads.adoc index 8b39cbb2997d..578bcdcd23fd 100644 --- 
a/virt/managing_vms/advanced_vm_management/virt-configuring-cluster-realtime-workloads.adoc +++ b/virt/managing_vms/advanced_vm_management/virt-configuring-cluster-realtime-workloads.adoc @@ -17,5 +17,5 @@ include::modules/virt-configuring-vm-real-time.adoc[leveloffset=+1] == Additional resources * xref:../../../../scalability_and_performance/using-node-tuning-operator.adoc#using-node-tuning-operator[Using the Node Tuning Operator] -* xref:../../../../scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.adoc#cnf-provisioning-low-latency-workloads[Provisioning real-time and low latency workloads] -* xref:../../../../scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-reducing-nic-queues-with-nto[Reducing NIC queues using the Node Tuning Operator] +* xref:../../../../scalability_and_performance/cnf-provisioning-low-latency-workloads.adoc#cnf-provisioning-low-latency-workloads[Provisioning real-time and low latency workloads] +* xref:../../../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-reducing-nic-queues-with-nto[Reducing NIC queues using the Node Tuning Operator] From 1fc551e1af00c04a819713f6e21187b93f57fe5e Mon Sep 17 00:00:00 2001 From: Kathryn Alexander Date: Thu, 6 Feb 2025 14:35:54 -0500 Subject: [PATCH 178/669] adding redirects for PR87901 --- .s2i/httpd-cfg/01-commercial.conf | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/.s2i/httpd-cfg/01-commercial.conf b/.s2i/httpd-cfg/01-commercial.conf index ff7129284dc2..520b4d3505fd 100644 --- a/.s2i/httpd-cfg/01-commercial.conf +++ b/.s2i/httpd-cfg/01-commercial.conf @@ -273,6 +273,13 @@ AddType text/vtt vtt RewriteRule ^container-platform/(4\.10|4\.11)/scalability_and_performance/ztp-configuring-single-node-cluster-deployment-during-installation.html /container-platform/$1/scalability_and_performance/ztp_far_edge/ztp-manual-install.html [NE,R=302] RewriteRule ^container-platform/(4\.10|4\.11)/scalability_and_performance/ztp-vdu-validating-cluster-tuning.html /container-platform/$1/scalability_and_performance/ztp_far_edge/ztp-vdu-validating-cluster-tuning.html [NE,R=302] + #Redirects for Telco changes for DRC levels in https://github.com/openshift/openshift-docs/pull/87901/ + RewriteRule ^container-platform/(4\.12|4\.13|4\.14|4\.15|4\.16|4\.17|4\.18)/scalability_and_performance/low_latency_tuning/cnf-understanding-low-latency.html /container-platform/$1/scalability_and_performance/cnf-understanding-low-latency.html [NE,R=302] + RewriteRule ^container-platform/(4\.12|4\.13|4\.14|4\.15|4\.16|4\.17|4\.18)/scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.html /container-platform/$1/scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.html [NE,R=302] + RewriteRule ^container-platform/(4\.12|4\.13|4\.14|4\.15|4\.16|4\.17|4\.18)/scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.html /container-platform/$1/scalability_and_performance/cnf-provisioning-low-latency-workloads.html [NE,R=302] + RewriteRule ^container-platform/(4\.12|4\.13|4\.14|4\.15|4\.16|4\.17|4\.18)/scalability_and_performance/low_latency_tuning/cnf-debugging-low-latency-tuning-status.html /container-platform/$1/scalability_and_performance/cnf-debugging-low-latency-tuning-status.html [NE,R=302] + RewriteRule 
^container-platform/(4\.12|4\.13|4\.14|4\.15|4\.16|4\.17|4\.18)/scalability_and_performance/low_latency_tuning/cnf-performing-platform-verification-latency-tests.html /container-platform/$1/scalability_and_performance/cnf-performing-platform-verification-latency-tests.html [NE,R=302]
+
 # Redirect for low latency tuning changes delivered in https://github.com/openshift/openshift-docs/pull/47965
 RewriteRule ^container-platform/(4\.11|4\.12)/scalability_and_performance/cnf-performance-addon-operator-for-low-latency-nodes.html /container-platform/$1/scalability_and_performance/cnf-low-latency-tuning.html [NE,R=302]

From e073f94dd4af726687e6b85f1e0245ee67202cef Mon Sep 17 00:00:00 2001
From: xenolinux
Date: Tue, 11 Feb 2025 17:43:25 +0530
Subject: [PATCH 179/669] Add hosted control planes to term glossary

---
 contributing_to_docs/term_glossary.adoc | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/contributing_to_docs/term_glossary.adoc b/contributing_to_docs/term_glossary.adoc
index 6c994df69ebd..3ae952157959 100644
--- a/contributing_to_docs/term_glossary.adoc
+++ b/contributing_to_docs/term_glossary.adoc
@@ -328,6 +328,17 @@ While "GroupVersionKind" does appear in the API guide, typically there should no

 == H

+''''
+=== hosted control planes
+
+Usage: hosted control planes or Hosted control planes
+
+When referencing hosted control planes for the first time, write as "hosted control planes for {product-title}". Capitalize the “h” in “hosted control planes” only when “hosted” is the first word in a title, heading, or sentence. Use lowercase “h” for subsequent references to hosted control planes.
+
+Avoid referencing hosted control planes as HyperShift in any customer-facing content, because HyperShift is the internal name.
+
+In the documentation for hosted control planes with ROSA, writers are allowed to use “HCP with ROSA”, per branding.
+
 == I

 ''''

From 695a4693feb07509b794770d3c5ed731579a26ad Mon Sep 17 00:00:00 2001
From: Michael Ryan Peter
Date: Thu, 6 Feb 2025 14:30:09 -0500
Subject: [PATCH 180/669] Remove references to pull secrets in cluster catalog CRs

---
 ...1-creating-a-pull-secret-for-catalogd.adoc |  0
 modules/olmv1-about-catalogs.adoc             |  2 --
 modules/olmv1-adding-a-catalog.adoc           | 28 +++++++++----------
 modules/olmv1-installing-an-operator.adoc     |  4 ---
 snippets/olmv1-multi-catalog-admon.adoc       | 13 ---------
 5 files changed, 14 insertions(+), 33 deletions(-)
 rename {modules => _unused_topics}/olmv1-creating-a-pull-secret-for-catalogd.adoc (100%)

diff --git a/modules/olmv1-about-catalogs.adoc b/modules/olmv1-about-catalogs.adoc
index 9805c2fc5aa4..fdf9d6aef9df 100644
--- a/modules/olmv1-about-catalogs.adoc
+++ b/modules/olmv1-about-catalogs.adoc
@@ -10,5 +10,3 @@
 = About catalogs in {olmv1}

 You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the {olmv1-first} suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.
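As a rough sketch of how you might inspect such a catalog image from a workstation, you can render its file-based catalog data with the `opm` CLI and filter the output with `jq`. This example is illustrative only and is not part of the module above; the index image tag matches the one used later in this patch, but the `jq` filter is an assumption, one of several possible ways to list package names:

[source,terminal]
----
$ opm render registry.redhat.io/redhat/redhat-operator-index:v4.18 \
    | jq -r 'select(.schema == "olm.package") | .name'
----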
- -include::snippets/olmv1-multi-catalog-admon.adoc[] diff --git a/modules/olmv1-adding-a-catalog.adoc b/modules/olmv1-adding-a-catalog.adoc index dad295773f00..a0ff11f565b5 100644 --- a/modules/olmv1-adding-a-catalog.adoc +++ b/modules/olmv1-adding-a-catalog.adoc @@ -13,36 +13,34 @@ To add a catalog to a cluster, create a catalog custom resource (CR) and apply i . Create a catalog custom resource (CR), similar to the following example: + -.Example `redhat-operators.yaml` +.Example `my-redhat-operators.yaml` file [source,yaml,subs="attributes+"] ---- apiVersion: catalogd.operatorframework.io/v1alpha1 kind: ClusterCatalog metadata: - name: redhat-operators + name: my-redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v{product-version} <1> - pullSecret: <2> - pollInterval: <3> + pollInterval: <2> ---- <1> Specify the catalog's image in the `spec.source.image` field. -<2> If your catalog is hosted on a secure registry, such as `registry.redhat.io`, you must create a pull secret scoped to the `openshift-catalog` namespace. -<3> Specify the interval for polling the remote registry for newer image digests. The default value is `24h`. Valid units include seconds (`s`), minutes (`m`), and hours (`h`). To disable polling, set a zero value, such as `0s`. +<2> Specify the interval for polling the remote registry for newer image digests. The default value is `24h`. Valid units include seconds (`s`), minutes (`m`), and hours (`h`). To disable polling, set a zero value, such as `0s`. . Add the catalog to your cluster by running the following command: + [source,terminal] ---- -$ oc apply -f redhat-operators.yaml +$ oc apply -f my-redhat-operators.yaml ---- + .Example output [source,text] ---- -catalog.catalogd.operatorframework.io/redhat-operators created +catalog.catalogd.operatorframework.io/my-redhat-operators created ---- .Verification @@ -59,8 +57,8 @@ $ oc get clustercatalog .Example output [source,text] ---- -NAME AGE -redhat-operators 20s +NAME AGE +my-redhat-operators 20s ---- .. Check the status of your catalog by running the following command: @@ -73,11 +71,11 @@ $ oc describe clustercatalog .Example output [source,text,subs="attributes+"] ---- -Name: redhat-operators +Name: my-redhat-operators Namespace: -Labels: +Labels: olm.operatorframework.io/metadata.name=my-redhat-operators Annotations: -API Version: catalogd.operatorframework.io/v1alpha1 +API Version: olm.operatorframework.io/v1 Kind: ClusterCatalog Metadata: Creation Timestamp: 2024-06-10T17:34:53Z @@ -87,9 +85,11 @@ Metadata: Resource Version: 46075 UID: 83c0db3c-a553-41da-b279-9b3cddaa117d Spec: + Availability Mode: Available + Priority: 0 Source: Image: - Pull Secret: redhat-cred + Poll Interval Minutes: 10 Ref: registry.redhat.io/redhat/redhat-operator-index:v4.18 Type: image Status: <1> diff --git a/modules/olmv1-installing-an-operator.adoc b/modules/olmv1-installing-an-operator.adoc index fae57b8e1d2d..07900f227d20 100644 --- a/modules/olmv1-installing-an-operator.adoc +++ b/modules/olmv1-installing-an-operator.adoc @@ -123,10 +123,6 @@ where: ``:: Specifies the name of the service account you created to install, update, and manage your extension. ``:: Optional: Specifies the channel, such as `pipelines-1.11` or `latest`, for the package you want to install or update. ``:: Optional: Specifies the version or version range, such as `1.11.1`, `1.12.x`, or `>=1.12.1`, of the package you want to install or update. 
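As an illustration, a filled-in custom resource that pins a version range might look similar to the following sketch. The field layout is an assumption based on the `olm.operatorframework.io/v1` `ClusterExtension` API and can differ between {olmv1} releases; the extension, namespace, service account, and package names are hypothetical, loosely modeled on the `pipelines-1.11` channel example above:

[source,yaml]
----
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
  namespace: pipelines
  serviceAccount:
    name: pipelines-installer
  source:
    sourceType: Catalog
    catalog:
      packageName: openshift-pipelines-operator-rh
      version: ">=1.11.1, <1.13.0" # range: at least 1.11.1, below 1.13.0
----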
For more information, see "Example custom resources (CRs) that specify a target version" and "Support for version ranges". -+ --- -include::snippets/olmv1-multi-catalog-admon.adoc[] --- . Apply the CR to the cluster by running the following command: + diff --git a/snippets/olmv1-multi-catalog-admon.adoc b/snippets/olmv1-multi-catalog-admon.adoc deleted file mode 100644 index df1a395680c2..000000000000 --- a/snippets/olmv1-multi-catalog-admon.adoc +++ /dev/null @@ -1,13 +0,0 @@ -// Text snippet included in the following modules: -// -// * modules/olmv1-about-catalogs.adoc - -:_mod-docs-content-type: SNIPPET - -[IMPORTANT] -==== -If you try to install an Operator or extension that does not have unique name, the installation might fail or lead to an unpredictable result. This occurs for the following reasons: - -* If mulitple catalogs are installed on a cluster, {olmv1-first} does not include a mechanism to specify a catalog when you install an Operator or extension. -* {olmv1} requires that all of the Operators and extensions that are available to install on a cluster use a unique name for their bundles and packages. -==== From f6c92f845daac38b62fce55c883b5f5b32715aa6 Mon Sep 17 00:00:00 2001 From: Laura Hinson Date: Tue, 4 Feb 2025 14:27:39 -0500 Subject: [PATCH 181/669] [OSDOCS-13017]: Adding missing link to etcd docs --- .../recommended-etcd-practices.adoc | 1 + 1 file changed, 1 insertion(+) diff --git a/scalability_and_performance/recommended-performance-scale-practices/recommended-etcd-practices.adoc b/scalability_and_performance/recommended-performance-scale-practices/recommended-etcd-practices.adoc index f5f533f5f049..dad385b5e419 100644 --- a/scalability_and_performance/recommended-performance-scale-practices/recommended-etcd-practices.adoc +++ b/scalability_and_performance/recommended-performance-scale-practices/recommended-etcd-practices.adoc @@ -21,6 +21,7 @@ include::modules/etcd-node-scaling.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources * link:https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2024/html/installing_openshift_container_platform_with_the_assisted_installer/expanding-the-cluster#adding-hosts-with-the-api_expanding-the-cluster[Adding hosts with the API] +* link:https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2023/html/assisted_installer_for_openshift_container_platform/expanding-the-cluster#installing-primary-control-plane-node-healthy-cluster_expanding-the-cluster[Installing a primary control plane node on a healthy cluster] * link:https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2024/html/installing_openshift_container_platform_with_the_assisted_installer/expanding-the-cluster#installing-control-plane-node-healthy-cluster_expanding-the-cluster[Expanding the cluster] * xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[Restoring to a previous cluster state] From ca3b95d3ab21be6abaaefd425d14f6cd8b5a52e9 Mon Sep 17 00:00:00 2001 From: Michael Burke Date: Fri, 3 Jan 2025 08:58:45 -0500 Subject: [PATCH 182/669] quickstart edits --- modules/coreos-layering-configuring-on.adoc | 89 +++++++++------------ modules/coreos-layering-configuring.adoc | 2 +- 2 files changed, 38 insertions(+), 53 deletions(-) diff --git a/modules/coreos-layering-configuring-on.adoc b/modules/coreos-layering-configuring-on.adoc index 
e3aed758608d..dfc565cac966 100644
--- a/modules/coreos-layering-configuring-on.adoc
+++ b/modules/coreos-layering-configuring-on.adoc
@@ -6,9 +6,14 @@
 [id="coreos-layering-configuring-on_{context}"]
 = Using on-cluster layering to apply a custom layered image

-To apply a custom layered image to your cluster by using the on-cluster build process, make a `MachineOSConfig` custom resource that includes a Containerfile, a machine config pool reference, repository push and pull secrets, and other parameters as described in the prerequisites.
+To apply a custom layered image to your cluster by using the on-cluster build process, create a `MachineOSConfig` custom resource (CR) that specifies the following parameters:

-When you create the object, the Machine Config Operator (MCO) creates a `MachineOSBuild` object and a `machine-os-builder` pod. The build process also creates transient objects, such as config maps, which are cleaned up after the build is complete.
+* the Containerfile to build
+* the machine config pool to associate with the build
+* where the final image should be pushed and pulled from
+* the push and pull secrets to use
+
+When you create the object, the Machine Config Operator (MCO) creates a `MachineOSBuild` object and a `machine-os-builder` pod. The build process also creates transient objects, such as config maps, which are cleaned up after the build is complete.

 When the build is complete, the MCO pushes the new custom layered image to your repository for use when deploying new nodes. You can see the digested image pull spec for the new custom layered image in the `MachineOSBuild` object and `machine-os-builder` pod.

@@ -16,16 +21,13 @@ You should not need to interact with these new objects or the `machine-os-builde

 You need a separate `MachineOSConfig` CR for each machine config pool where you want to use a custom layered image.

-:FeatureName: On-cluster image layering
-include::snippets/technology-preview.adoc[]
-
 .Prerequisites

-* You have enabled the `TechPreviewNoUpgrade` feature set by using the feature gates. For more information, see "Enabling features using feature gates".
+* You have a copy of the global pull secret in the `openshift-machine-config-operator` namespace that the MCO needs in order to pull the base operating system image.

-* You have the pull secret in the `openshift-machine-config-operator` namespace that the MCO needs to pull the base operating system image.
+* You have a copy of the `etc-pki-entitlement` secret in the `openshift-machine-api` namespace.

-* You have the push secret that the MCO needs to push the new custom layered image to your registry.
+* You have the push secret that the MCO needs in order to push the new custom layered image to your registry.

 * You have a pull secret that your nodes need to pull the new custom layered image from your registry. This should be a different secret than the one used to push the image to the repository.
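As a hedged illustration of the first prerequisite, one way to copy the global pull secret from the `openshift-config` namespace into the `openshift-machine-config-operator` namespace is shown below. The target secret name `global-pull-secret-copy` matches the `MachineOSConfig` example that follows in this patch; the `jq`-based approach is an assumption, only one of several possible methods:

[source,terminal]
----
$ oc get secret/pull-secret -n openshift-config -o json \
    | jq '.metadata.name = "global-pull-secret-copy"
          | .metadata.namespace = "openshift-machine-config-operator"
          | del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp)' \
    | oc apply -f -
----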
@@ -50,30 +52,32 @@ spec: name: <1> buildInputs: containerFile: # <2> - - containerfileArch: noarch + - containerfileArch: noarch <3> content: |- - FROM configs AS final + FROM configs AS final <4> RUN dnf install -y cowsay && \ dnf clean all && \ ostree container commit - imageBuilder: # <3> + imageBuilder: # <5> imageBuilderType: PodImageBuilder - baseImagePullSecret: # <4> + baseImagePullSecret: # <6> name: global-pull-secret-copy - renderedImagePushspec: image-registry.openshift-image-registry.svc:5000/openshift/os-image:latest # <5> - renderedImagePushSecret: # <6> + renderedImagePushspec: image-registry.openshift-image-registry.svc:5000/openshift/os-image:latest # <7> + renderedImagePushSecret: # <8> name: builder-dockercfg-7lzwl - buildOutputs: # <7> + buildOutputs: # <9> currentImagePullSecret: name: builder-dockercfg-7lzwl ---- -<1> Specifies the name of the machine config pool associated with the nodes where you want to deploy the custom layered image. -<2> Specifies the Containerfile to configure the custom layered image. -<3> Specifies the name of the image builder to use. This must be `PodImageBuilder`. -<4> Specifies the name of the pull secret that the MCO needs to pull the base operating system image from the registry. -<5> Specifies the image registry to push the newly-built custom layered image to. This can be any registry that your cluster has access to. This example uses the internal {product-title} registry. -<6> Specifies the name of the push secret that the MCO needs to push the newly-built custom layered image to that registry. -<7> Specifies the secret required by the image registry that the nodes need to pull the newly-built custom layered image. This should be a different secret than the one used to push the image to your repository. +<1> Specifies the machine config pool to deploy the custom layered image. +<2> Specifies the Containerfile to configure the custom layered image. You can specify multiple build stages in the Containerfile. +<3> Specifies the architecture of the image to be built. You must set this parameter to `noarch`. +<4> Specifies the build stage as final. This field is required and applies to the last image in the build. +<5> Specifies the name of the image builder to use. You must set this parameter to `PodImageBuilder`. +<6> Specifies the name of the pull secret that the MCO needs in order to pull the base operating system image from the registry. +<7> Specifies the image registry to push the newly-built custom layered image to. This can be any registry that your cluster has access to. This example uses the internal {product-title} registry. +<8> Specifies the name of the push secret that the MCO needs in order to push the newly-built custom layered image to the registry. +<9> Specifies the secret required by the image registry that the nodes need in order to pull the newly-built custom layered image. This should be a different secret than the one used to push the image to your repository. .. Create the `MachineOSConfig` object: + @@ -115,13 +119,14 @@ When you save the changes, the MCO drains, cordons, and reboots the nodes. After .Verification -. Verify that the new pods are running by using the following command: +. 
Verify that the new pods are ready by running the following command: + [source,terminal] ---- -$ oc get pods -n +$ oc get pods -n openshift-machine-config-operator ---- + +.Example output [source,terminal] ---- NAME READY STATUS RESTARTS AGE @@ -132,48 +137,28 @@ machine-os-builder-6fb66cfb99-zcpvq 1/1 Runnin <1> This is the build pod where the custom layered image is building. <2> This pod can be used for troubleshooting. -. Verify that the `MachineOSConfig` object contains a reference to the new custom layered image: +. Verify the current stage of your layered build by running the following command: + [source,terminal] ---- -$ oc describe MachineOSConfig +$ oc get machineosbuilds ---- + -[source,yaml] +.Example output +[source,terminal] ---- -apiVersion: machineconfiguration.openshift.io/v1alpha1 -kind: MachineOSConfig -metadata: - name: layered -spec: - buildInputs: - baseImagePullSecret: - name: global-pull-secret-copy - containerFile: - - containerfileArch: noarch - content: "" - imageBuilder: - imageBuilderType: PodImageBuilder - renderedImagePushSecret: - name: builder-dockercfg-ng82t-canonical - renderedImagePushspec: image-registry.openshift-image-registry.svc:5000/openshift-machine-config-operator/os-image:latest - buildOutputs: - currentImagePullSecret: - name: global-pull-secret-copy - machineConfigPool: - name: layered -status: - currentImagePullspec: image-registry.openshift-image-registry.svc:5000/openshift-machine-config-operator/os-image@sha256:f636fa5b504e92e6faa22ecd71a60b089dab72200f3d130c68dfec07148d11cd # <1> +NAME PREPARED BUILDING SUCCEEDED INTERRUPTED FAILED +layered-rendered-layered-ef6460613affe503b530047a11b28710-builder False True False False False ---- -<1> Digested image pull spec for the new custom layered image. -. Verify that the `MachineOSBuild` object contains a reference to the new custom layered image. +. Verify that the `MachineOSBuild` object contains a reference to the new custom layered image by running the following command: + [source,terminal] ---- $ oc describe machineosbuild ---- + +.Example output [source,yaml] ---- apiVersion: machineconfiguration.openshift.io/v1alpha1 diff --git a/modules/coreos-layering-configuring.adoc b/modules/coreos-layering-configuring.adoc index a714204dd30d..5c067543a052 100644 --- a/modules/coreos-layering-configuring.adoc +++ b/modules/coreos-layering-configuring.adoc @@ -65,7 +65,7 @@ metadata: spec: osImageURL: quay.io/my-registry/custom-image@sha256... <2> ---- -<1> Specifies the machine config pool to apply the custom layered image. +<1> Specifies the machine config pool to deploy the custom layered image. <2> Specifies the path to the custom layered image in the repository. .. Create the `MachineConfig` object: From 2ee1fede53cf14d16dc39954b7c85d71727db2bc Mon Sep 17 00:00:00 2001 From: StephenJamesSmith Date: Fri, 7 Feb 2025 15:22:26 -0500 Subject: [PATCH 183/669] TELCODOCS-2199: Update MIG note. 
--- hardware_accelerators/nvidia-gpu-architecture.adoc | 1 - modules/nvidia-gpu-enablement.adoc | 2 +- 2 files changed, 1 insertion(+), 2 deletions(-) diff --git a/hardware_accelerators/nvidia-gpu-architecture.adoc b/hardware_accelerators/nvidia-gpu-architecture.adoc index a6ec3bcdc228..ee82fc7f7996 100644 --- a/hardware_accelerators/nvidia-gpu-architecture.adoc +++ b/hardware_accelerators/nvidia-gpu-architecture.adoc @@ -24,7 +24,6 @@ include::modules/nvidia-gpu-prerequisites.adoc[leveloffset=+1] // New enablement modules ifndef::openshift-dedicated,openshift-rosa[] include::modules/nvidia-gpu-enablement.adoc[leveloffset=+1] - include::modules/nvidia-gpu-bare-metal.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources diff --git a/modules/nvidia-gpu-enablement.adoc b/modules/nvidia-gpu-enablement.adoc index 61abf456efbe..5c20e3308561 100644 --- a/modules/nvidia-gpu-enablement.adoc +++ b/modules/nvidia-gpu-enablement.adoc @@ -14,5 +14,5 @@ image::512_OpenShift_NVIDIA_GPU_enablement_1223.png[NVIDIA GPU enablement] [NOTE] ==== -MIG is only supported with A30, A100, A100X, A800, AX800, H100, and H800. +MIG is supported on GPUs starting with the NVIDIA Ampere generation. For a list of GPUs that support MIG, see the link:https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#supported-gpus[NVIDIA MIG User Guide]. ==== From dd7887e9b849b3eae5b830d3c81de030d3b733e2 Mon Sep 17 00:00:00 2001 From: Max Bridges Date: Mon, 5 Aug 2024 10:27:11 -0400 Subject: [PATCH 184/669] Make ShiftStack UPI network resource changes triggered by OCPBUGS-33973 Resolves OCPBUGS-37970 --- ...lation-osp-creating-network-resources.adoc | 23 +++++++++++++++++ modules/installation-osp-fixing-subnet.adoc | 25 ++++++++++++++++--- 2 files changed, 44 insertions(+), 4 deletions(-) diff --git a/modules/installation-osp-creating-network-resources.adoc b/modules/installation-osp-creating-network-resources.adoc index 4aaeba644414..ed4199843a95 100644 --- a/modules/installation-osp-creating-network-resources.adoc +++ b/modules/installation-osp-creating-network-resources.adoc @@ -17,6 +17,29 @@ Create the network resources that an {product-title} on {rh-openstack-first} ins . For a dual stack cluster deployment, edit the `inventory.yaml` file and uncomment the `os_subnet6` attribute. +. To ensure that your network resources have unique names on the {rh-openstack} deployment, create an environment variable and JSON file for use in the Ansible playbooks: ++ +.. Create an environment variable that has a unique name value by running the following command: ++ +[source,terminal] +---- +$ export OS_NET_ID="openshift-$(dd if=/dev/urandom count=4 bs=1 2>/dev/null |hexdump -e '"%02x"')" +---- + +.. Verify that the variable is set by running the following command on a command line: ++ +[source,terminal] +---- +$ echo $OS_NET_ID +---- + +.. Create a JSON object that includes the variable in a file called `netid.json` by running the following command: ++ +[source,terminal] +---- +$ echo "{\"os_net_id\": \"$OS_NET_ID\"}" | tee netid.json +---- + . 
On a command line, create the network resources by running the following command: + [source,terminal] diff --git a/modules/installation-osp-fixing-subnet.adoc b/modules/installation-osp-fixing-subnet.adoc index b6b58df985c5..04f8ffc00ca7 100644 --- a/modules/installation-osp-fixing-subnet.adoc +++ b/modules/installation-osp-fixing-subnet.adoc @@ -24,22 +24,39 @@ The IP range that the installation program uses by default might not match the N + [source,terminal] ---- -$ python -c 'import yaml +$ python -c 'import os +import sys +import yaml +import re +re_os_net_id = re.compile(r"{{\s*os_net_id\s*}}") +os_net_id = os.getenv("OS_NET_ID") +path = "common.yaml" +facts = None +for _dict in yaml.safe_load(open(path))[0]["tasks"]: + if "os_network" in _dict.get("set_fact", {}): + facts = _dict["set_fact"] + break +if not facts: + print("Cannot find `os_network` in common.yaml file. Make sure OpenStack resource names are defined in one of the tasks.") + sys.exit(1) +os_network = re_os_net_id.sub(os_net_id, facts["os_network"]) +os_subnet = re_os_net_id.sub(os_net_id, facts["os_subnet"]) path = "install-config.yaml" data = yaml.safe_load(open(path)) inventory = yaml.safe_load(open("inventory.yaml"))["all"]["hosts"]["localhost"] machine_net = [{"cidr": inventory["os_subnet_range"]}] api_vips = [inventory["os_apiVIP"]] ingress_vips = [inventory["os_ingressVIP"]] -ctrl_plane_port = {"network": {"name": inventory["os_network"]}, "fixedIPs": [{"subnet": {"name": inventory["os_subnet"]}}]} -if inventory.get("os_subnet6"): <1> +ctrl_plane_port = {"network": {"name": os_network}, "fixedIPs": [{"subnet": {"name": os_subnet}}]} +if inventory.get("os_subnet6_range"): <1> + os_subnet6 = re_os_net_id.sub(os_net_id, facts["os_subnet6"]) machine_net.append({"cidr": inventory["os_subnet6_range"]}) api_vips.append(inventory["os_apiVIP6"]) ingress_vips.append(inventory["os_ingressVIP6"]) data["networking"]["networkType"] = "OVNKubernetes" data["networking"]["clusterNetwork"].append({"cidr": inventory["cluster_network6_cidr"], "hostPrefix": inventory["cluster_network6_prefix"]}) data["networking"]["serviceNetwork"].append(inventory["service_subnet6_range"]) - ctrl_plane_port["fixedIPs"].append({"subnet": {"name": inventory["os_subnet6"]}}) + ctrl_plane_port["fixedIPs"].append({"subnet": {"name": os_subnet6}}) data["networking"]["machineNetwork"] = machine_net data["platform"]["openstack"]["apiVIPs"] = api_vips data["platform"]["openstack"]["ingressVIPs"] = ingress_vips From 3b49c028e4cc12b88b650785120095db4fb0d8e0 Mon Sep 17 00:00:00 2001 From: dfitzmau Date: Thu, 30 Jan 2025 15:15:19 +0000 Subject: [PATCH 185/669] OCPBUGS-41989: Added a table to the nw-ingress-sharding.adoc --- modules/nw-ingress-sharding.adoc | 30 ++++++++++++++++++++---------- 1 file changed, 20 insertions(+), 10 deletions(-) diff --git a/modules/nw-ingress-sharding.adoc b/modules/nw-ingress-sharding.adoc index 0f06fc1362d7..7308eb2017e5 100644 --- a/modules/nw-ingress-sharding.adoc +++ b/modules/nw-ingress-sharding.adoc @@ -6,25 +6,35 @@ [id="nw-ingress-sharding_{context}"] = Ingress Controller sharding -You can use Ingress sharding, also known as router sharding, to distribute a set of routes across multiple routers by adding labels to routes, namespaces, or both. The Ingress Controller uses a corresponding set of selectors to admit only the routes that have a specified label. Each Ingress shard comprises the routes that are filtered using a given selection expression. 
+You can use Ingress sharding, also known as router sharding, to distribute a set of routes across multiple routers by adding labels to routes, namespaces, or both. The Ingress Controller uses a corresponding set of selectors to admit only the routes that have a specified label. Each Ingress shard comprises the routes that are filtered by using a given selection expression.

 Because the Ingress Controller is the primary mechanism for traffic to enter the cluster, the demands on it can be significant. As a cluster administrator, you can shard the routes to:

-* Balance Ingress Controllers, or routers, with several routes to speed up responses to changes.
-* Allocate certain routes to have different reliability guarantees than other routes.
+* Balance Ingress Controllers, or routers, with several routes to accelerate responses to changes.
+* Assign certain routes to have different reliability guarantees than other routes.
 * Allow certain Ingress Controllers to have different policies defined.
 * Allow only specific routes to use additional features.
 * Expose different routes on different addresses so that internal and external users can see different routes, for example.
-* Transfer traffic from one version of an application to another during a blue green deployment.
+* Transfer traffic from one version of an application to another during a blue-green deployment.

-When Ingress Controllers are sharded, a given route is admitted to zero or more Ingress Controllers in the group. A route's status describes whether an Ingress Controller has admitted it or not. An Ingress Controller will only admit a route if it is unique to its shard.
+When Ingress Controllers are sharded, a given route is admitted to zero or more Ingress Controllers in the group. The status of a route describes whether an Ingress Controller has admitted the route. An Ingress Controller only admits a route if the route is unique to a shard.

-An Ingress Controller can use three sharding methods:
+With sharding, you can distribute subsets of routes over multiple Ingress Controllers. These subsets can be nonoverlapping, also called _traditional_ sharding, or overlapping, otherwise known as _overlapped_ sharding.

-* Adding only a namespace selector to the Ingress Controller, so that all routes in a namespace with labels that match the namespace selector are in the Ingress shard.
+The following table outlines three sharding methods:

-* Adding only a route selector to the Ingress Controller, so that all routes with labels that match the route selector are in the Ingress shard.
+[cols="1,3",options="header"]
+|===
+|Sharding method
+|Description

-* Adding both a namespace selector and route selector to the Ingress Controller, so that routes with labels that match the route selector in a namespace with labels that match the namespace selector are in the Ingress shard.
+|Namespace selector
+|After you add a namespace selector to the Ingress Controller, all routes in a namespace that have matching labels for the namespace selector are included in the Ingress shard. Consider this method when an Ingress Controller serves all routes created in a namespace.

+|Route selector
+|After you add a route selector to the Ingress Controller, all routes with labels that match the route selector are included in the Ingress shard. Consider this method when you want an Ingress Controller to serve only a subset of routes or a specific route in a namespace.
+
+|Namespace and route selectors
+|Gives your Ingress Controller the combined scope of both the namespace selector and route selector methods. Consider this method when you want the flexibility of both the namespace selector and the route selector methods.
+|===

-With sharding, you can distribute subsets of routes over multiple Ingress Controllers. These subsets can be non-overlapping, also called _traditional_ sharding, or overlapping, otherwise known as _overlapped_ sharding.

From 331d2c220a88aa698001232ac4b7f9d0a6f0399c Mon Sep 17 00:00:00 2001
From: dfitzmau
Date: Thu, 6 Feb 2025 14:30:17 +0000
Subject: [PATCH 186/669] OSDOCS-13230-ui: Added nmstate-console-plugin step to Uninstall K8S NMState Op

---
 modules/k8s-nmstate-uninstall-operator.adoc | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/modules/k8s-nmstate-uninstall-operator.adoc b/modules/k8s-nmstate-uninstall-operator.adoc
index f281b9ac2302..c46fb6a34f71 100644
--- a/modules/k8s-nmstate-uninstall-operator.adoc
+++ b/modules/k8s-nmstate-uninstall-operator.adoc
@@ -15,6 +15,7 @@ If you need to reinstall the Kubernetes NMState Operator, see "Installing the Ku

 .Prerequisites

 * You have installed the {oc-first}.
+* You have installed the `jq` CLI tool.
 * You are logged in as a user with `cluster-admin` privileges.

 .Procedure

@@ -59,7 +60,24 @@
$ oc -n openshift-nmstate delete nmstate nmstate

$ oc delete --all deployments --namespace=openshift-nmstate
----

-. Delete all the custom resource definition (CRD), such as `nmstates`, that exist in the `nmstate.io` namespace by running the following commands:
+. After you delete the `nmstate` CR, remove the `nmstate-console-plugin` console plugin name from the `console.operator.openshift.io/cluster` CR.
++
+.. Store the position of the `nmstate-console-plugin` entry that exists among the list of enabled plugins by running the following command. The command uses the `jq` CLI tool to store the index of the entry in an environment variable named `INDEX`:
++
+[source,terminal]
+----
+INDEX=$(oc get console.operator.openshift.io cluster -o json | jq -r '.spec.plugins | to_entries[] | select(.value == "nmstate-console-plugin") | .key')
+----
++
+.. Remove the `nmstate-console-plugin` entry from the `console.operator.openshift.io/cluster` CR by running the following patch command:
++
+[source,terminal]
+----
+$ oc patch console.operator.openshift.io cluster --type=json -p "[{\"op\": \"remove\", \"path\": \"/spec/plugins/$INDEX\"}]" <1>
+----
+<1> `INDEX` is an auxiliary variable. You can specify a different name for this variable.
+
+. Delete all the custom resource definitions (CRDs), such as `nmstates.nmstate.io`, by running the following commands:
+
[source,terminal]
----

From 33c709d1e10c7b0276df40bb679766f80c14df4a Mon Sep 17 00:00:00 2001
From: dfitzmau
Date: Thu, 23 Jan 2025 10:38:56 +0000
Subject: [PATCH 187/669] OCPBUGS-39006: Updated cidrSelector in Example EgressFirewall CR objects

---
 modules/nw-egressnetworkpolicy-object.adoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/nw-egressnetworkpolicy-object.adoc b/modules/nw-egressnetworkpolicy-object.adoc
index af1e2e18a23e..080333f449b2 100644
--- a/modules/nw-egressnetworkpolicy-object.adoc
+++ b/modules/nw-egressnetworkpolicy-object.adoc
@@ -95,7 +95,7 @@ spec:

 <1> A collection of egress firewall policy rule objects.

ifdef::ovn[]
-The following example defines a policy rule that denies traffic to the host at the `172.16.1.1` IP address, if the traffic is using either the TCP protocol and destination port `80` or any protocol and destination port `443`.
+The following example defines a policy rule that denies traffic to the host at the `172.16.1.1/32` IP address, if the traffic is using either the TCP protocol and destination port `80` or any protocol and destination port `443`.

[source,yaml,subs="attributes+"]
----
@@ -107,7 +107,7 @@ spec:
   egress:
   - type: Deny
     to:
-      cidrSelector: 172.16.1.1
+      cidrSelector: 172.16.1.1/32
     ports:
     - port: 80
       protocol: TCP

From 9f59d800e78787ab55d778f1fb3a2f824a20b5d4 Mon Sep 17 00:00:00 2001
From: srir
Date: Wed, 22 Jan 2025 13:43:00 +0530
Subject: [PATCH 188/669] OCPBUGS-36933: Updated the Adding a multi-architecture compute machine set to your AWS cluster proc

---
 modules/multi-architecture-modify-machine-set-aws.adoc | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/modules/multi-architecture-modify-machine-set-aws.adoc b/modules/multi-architecture-modify-machine-set-aws.adoc
index a919b8d632e8..d4e9fedcd91a 100644
--- a/modules/multi-architecture-modify-machine-set-aws.adoc
+++ b/modules/multi-architecture-modify-machine-set-aws.adoc
@@ -79,12 +79,12 @@ spec:
      - filters:
        - name: tag:Name
          values:
-          - -worker-sg <1>
+          - -node <1>
        subnet:
          filters:
          - name: tag:Name
            values:
-            - -private-
+            - -subnet-private-
        tags:
        - name: kubernetes.io/cluster/ <1>
          value: owned
@@ -97,11 +97,11 @@
+
[source,terminal]
----
-$ oc get -o jsonpath=‘{.status.infrastructureName}{“\n”}’ infrastructure cluster
+$ oc get -o jsonpath="{.status.infrastructureName}{'\n'}" infrastructure cluster
----
<2> Specify the infrastructure ID, role node label, and zone.
<3> Specify the role node label to add.
-<4> Specify a Red{nbsp}Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for the nodes. The RHCOS AMI must be compatible with the machine architecture.
+<4> Specify a {op-system-first} Amazon Machine Image (AMI) for your AWS region for the nodes. The {op-system} AMI must be compatible with the machine architecture.
+
[source,terminal]
----

From 110973229acf35fd99bc0d3e606cec24872dd4e3 Mon Sep 17 00:00:00 2001
From: Padraig O'Grady
Date: Mon, 11 Nov 2024 17:42:52 +0000
Subject: [PATCH 189/669] TELCODOCS-2064: Added the spec.featureGates.mellanoxFirmwareReset parameter

TELCODOCS-2064: Dev feedback applied

TELCODOCS-2064: Technology Preview snippet text added

TELCODOCS-2064: Peer review feedback applied
---
 modules/nw-sriov-configuring-operator.adoc | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/modules/nw-sriov-configuring-operator.adoc b/modules/nw-sriov-configuring-operator.adoc
index dcfabf6b1c86..8f2f4c4d4524 100644
--- a/modules/nw-sriov-configuring-operator.adoc
+++ b/modules/nw-sriov-configuring-operator.adoc
@@ -96,6 +96,13 @@ By default, this field is set to `2`.
 |`boolean`
 |Specifies whether to enable or disable the SR-IOV Network Operator metrics. By default, this field is set to `false`.

+|`spec.featureGates.mellanoxFirmwareReset`
+|`boolean`
+|Specifies whether to reset the firmware on virtual function (VF) changes in the SR-IOV Network Operator. Some chipsets, such as the Intel C740 Series, do not completely power off PCI-E devices, and a complete power-off is required to configure VFs on NVIDIA/Mellanox NICs. By default, this field is set to `false`.
+
+:FeatureName: The `spec.featureGates.mellanoxFirmwareReset` parameter
+include::snippets/technology-preview.adoc[]
+
 |====

 [id="about-network-resource-injector_{context}"]

From 3b6564d654ca5b42c620838cfc6bd8eadc447de8 Mon Sep 17 00:00:00 2001
From: Brendan Daly
Date: Tue, 11 Feb 2025 11:23:47 +0000
Subject: [PATCH 190/669] OSDOCS-11442:multiple NICs Nutanix

---
 modules/installation-configuration-parameters.adoc | 4 ++--
 modules/installation-configuring-nutanix-failure-domains.adoc | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/modules/installation-configuration-parameters.adoc b/modules/installation-configuration-parameters.adoc
index b0c0b03e3baf..2a9af20f0eb5 100644
--- a/modules/installation-configuration-parameters.adoc
+++ b/modules/installation-configuration-parameters.adoc
@@ -3479,7 +3479,7 @@ The name of one or more failures domains.
 uuid: 
 subnetUUIDs:
 - 
-a|By default, the installation program installs cluster machines to a single Prism Element instance. You can specify additional Prism Element instances for fault tolerance, and then apply them to:
+a|By default, the installation program installs cluster machines to a single Prism Element instance. A maximum of 32 subnets for each failure domain (Prism Element) in an {product-title} cluster is supported. All `subnetUUID` values must be unique. You can specify additional Prism Element instances for fault tolerance, and then apply them to:

 * The cluster's default machine configuration
 * Only control plane or compute machine pools

@@ -3567,7 +3567,7 @@ For more information on usage, see "Configuring a failure domain" in "Installing
 [.small]
 --
 1. The `prismElements` section holds a list of Prism Elements (clusters). A Prism Element encompasses all of the Nutanix resources, for example virtual machines and subnets, that are used to host the {product-title} cluster.
-2. Only one subnet per Prism Element in an {product-title} cluster is supported.
+2. A maximum of 32 subnets for each Prism Element in an {product-title} cluster is supported. All `subnetUUID` values must be unique.
 --
 endif::nutanix[]

diff --git a/modules/installation-configuring-nutanix-failure-domains.adoc b/modules/installation-configuring-nutanix-failure-domains.adoc
index 4eb86db8c8c3..9b1d03f4166d 100644
--- a/modules/installation-configuring-nutanix-failure-domains.adoc
+++ b/modules/installation-configuring-nutanix-failure-domains.adoc
@@ -45,7 +45,7 @@ where:
 ``:: Specifies a unique name for the failure domain. The name is limited to 64 or fewer characters, which can include lower-case letters, digits, and a dash (`-`). The dash cannot be in the leading or ending position of the name.
 ``:: Optional. Specifies the name of the Prism Element.
 `:: Specifies the UUID of the Prism Element.
-`:: Specifies the UUID of the Prism Element subnet object. The subnet's IP address prefix (CIDR) should contain the virtual IP addresses that the {product-title} cluster uses. Only one subnet per failure domain (Prism Element) in an {product-title} cluster is supported.
+`:: Specifies one or more UUIDs of the Prism Element subnet objects. Among them, one subnet's IP address prefix (CIDR) must contain the virtual IP addresses that the {product-title} cluster uses. A maximum of 32 subnets for each failure domain (Prism Element) in an {product-title} cluster is supported. All `subnetUUID` values must be unique.

 . As required, configure additional failure domains.

 . 
To distribute control plane and compute machines across the failure domains, do one of the following:

From 9c4422ba592a4ddcf8fe1ea97192d5094a383cf Mon Sep 17 00:00:00 2001
From: dfitzmau
Date: Tue, 28 Jan 2025 12:38:48 +0000
Subject: [PATCH 191/669] DIAGRAMS-527: Added the namespace isolation diagram to UDN docs

---
 images/527-OpenShift-UDN-isolation-012025.png | Bin 0 -> 61797 bytes
 modules/nw-udn-cr.adoc                        |   2 +-
 .../about-user-defined-networks.adoc          |  14 +++++++++-----
 3 files changed, 10 insertions(+), 6 deletions(-)
 create mode 100644 images/527-OpenShift-UDN-isolation-012025.png

diff --git a/images/527-OpenShift-UDN-isolation-012025.png b/images/527-OpenShift-UDN-isolation-012025.png
new file mode 100644
index 0000000000000000000000000000000000000000..bad3320943bb16cc58324549a06d70ea5376e163
GIT binary patch
literal 61797
[binary image data omitted: 527-OpenShift-UDN-isolation-012025.png]
zqBxDyC9CGSjz(qmy~u+jaY*9QSXTb=N&=`eG2tOd^|l zL}5&0iwvVEA{(XN`s!PiJzI^zNSB>yj{i}3=@=jVh^7~UW zaDgFzJz-lPe=T$1bBdDqyXD+gj+_oUsLHRW4}5)&ztJk$$%8(mYDtxBuW*^6;|ybY zuyf&uV{RudSF!!rDznPOU^GzOlKUy>i+en7<(_z{SSEI|Hhv`9l2m+8%mWz8VJNL6G0XRVAMbD5VHD4DRg*WX||?!vs0q?3dCy|peG?o&U91|#nn%efSkyv;M~+j_zv zbLo12Rw=FH3BuRY4lbImmQJ^Cbx-naF@OCM#&;9@P2`(DDVJB2TTi-}TmSq%Ju%_y ze|Gfoo`J=`Z<5*H_v0IDMj_}ck6Vuzq?^83nf_BFA!cT?Tv*Y)5@tVT%vhQ{y*)H@ zu;N>T)w5$Qmy6TJv*Wd!7!JlO-*`s*dPOzfswruhvG}oAnvh;I|03 zX&A5L+pxZ8xVpciuPb4IU3ewOJVxVFHW|kIs5$d3zD=rKDhqtq%U5o4jtx#MHAx{mnV+j+mIS3utS@?Mb4r z2GKt>dH-{9Pe$5fv*Ct2<}&YMN+h}-FAnROid(SR?T1@3kA8D}HGUw*8O}^Bi(RU?&xFOnDZPDEqJ`OMYZR%D=l+X;feIZUs+r3cnLhVU?WW`jKxh%(}|0O!~$XjC;#U zT5O!W_T+d*hkMQy_^!V(LoBA>tF5x^S z8#MzrC*QfAesnLFeYno#j~^#x+`oH&^~y+knZBs($NMBu{w6p0vwn*!QBp~HsoOu` zA<_w`)ch_dN|Fj_Nh|}`}EV4rpSjq!GZge)TN3a zdS+k!ZFxgAZ)xMuk99*6!u`YLxlwu@XH0;J9ieDxlSD7mLBpobItML@%U^s zK0aw%k$a*cMPpw1QAzdTf${}cIp1h0RqWL<>@ScR?mGGMmYm9hqgr~qGe5%$<=V@` z?k=^acV_zhI>ioHXWKG6$m3?tPA7%Qj5O~k9{Uz~w*86~4W#8Qol%iD_Y`Z!Xr{kg zuX27cf4pw3UW`7sa@3fTj3^KM|tBF?>@%-6i=#J!Jqv*>J$@6Vr8IX`Y65L#xeaHMrzpkhqi zpp|W1qC@tAsxV=59p4Y;Q9n*M^d1s2D)1{%obl5yDKtD1|HAogALHYS&OK3yIQ6Wn zKaVz~)jwa+yi&^gZbFosnU>U6#a{U}Ys^lh|J+rs5f;R!m%MM!Fqc9!$D-uE7Ue;^ zH)XBkkM)&R78(SWTv+jPzH46o^){!mMu^sWgyaX)mrLYswW-^wCEuu~_3+rSks_0= z%N{s3Bnau>iyaDdZ^+u=Bx=5SW9y=ybnF_0v#P6WzRoYLPMlhBV5qWD(ekGPBeJ5Y zH|2@RLwb*T(b{X2MaR99^qJoNw#c2ax1vt@lC#~UenlX6?O4x=qc$HBQd%=~a=Sj% zf1DO+=J6a-yQ#_I_;YmKisqoD%beQ3r#G2(rDYh;U%2_E+Y*lAxn+0aYQoBzKh2an zm=?QV;jw!sQSbBIvBBo`e(mdPj}DsW&kvPQFpU>BKIa(rj)^el_!H z$7?KC7#1h3swV~mk6g_^TONNpI!w@HZ^4xEk%@UPcARjmJy%@#f=BUK`)|W5W%!?u zlvYRV3-995&Eb+7?kuz=3R*TNQ(k0SDR1qtt@uc#f!dW7i+-Mym+fne8oed&`Rqnz zFSmlk^N~8sox@MP53Nv)n(XtFk*_tCH?vDwtT0~ndBjXfVoeaWX^Bz|`&o97yMgzvxJj z_HC^>`CG~Qsu0nHPi+abDBod?OWD-@ef2U^o{Uv;dYZX!?bwnzAhRZJwbHD+Ey-^w zJ<|T8)v5EfntaQ>yH#N+(YRo{1MeAf9JET;6&p@;{XP~Ek~fw3`Nt8}{pp|P7>$@r z?x?izMVYpC z48KssBdY@;k?GSJr%RIcG4(HPnSM$DUr+a^3pf(Ce!4mw3hB2@$1n2|ZV2wSY6j_+x?qpfM&LORKgX%$RmcMQ=ZtWVS}y$fXRf$Il9%|HSE4ePBx zMLZL!Xl2*eu#dD!a7qcYef(~*M8B_Z;awMw7bkb)xobJp{@IXt=3&XtpY?v7It77^ zMd{fIG1(FI8{9*xr<_#Q%^P~1dV8y6zsc~|!v{)SAALR9H{#QJXv(ZUjklz+koS0& z&1(fl;}HpV?ct1fl<^{9XvBN;>5Dd+TkW-!PN?~)BjYmiKxOgSbs8mSy?U6-P4n$W z^VSO@=E^tsG7a}l-%GnSeup0R18Yev#Jg607a$R+P%HvPyNG6m4nF zxK1hdn-jT-#!k(0w-zxv{cdg6r8NV?q}?^nt~z%+gnPq13OoASVzN4fETl{9E5a$G zH_qLpUg$Yy_R>T#9#_&oaq>1Eu!PERbF=J%&SO)(?8&C71z|+@{r+#+5@{$X3>=doO3k4N=_J z{yq9eaMP>yYd08g8!!K;ANm%cRbcAD!Wh{#hu1;8XYnFq#Y5+9ITfe&=q*l39k%^Y zA6By^@o=aOmy~t9kbY7;9ieqDnuFS#V{a%-52(^J^T_X6<}o>pv1CbC`Ku$+vC%dk z8e@ct=<&CWd3$e{vaU7m5Z2!^Il1!v%(biG0yn$Hp6eyX@{1aSiNC3)(Lv?w^f%UL zzDHBv9qW%zHD;6Aqi8>~_oDA_eU;AK4aK9Y7xhe?$oAb7a7RXUkMV;y$u7%n8w2cx zmYVL@lAGM<^i6WN(%C85=f}p}xZbBD!OAWz#YNlH-OF)zPZjP09Dz7t+cWZ-tx0X~ z-rU}T>a|ln6LK#^ep}r6)vL=`ZXoZC`^X&*SMloEP;d5MvpX~6{}T+ut^%_|33%!@BAe%9RlbIym*sf4MGE=C$b=lJWI7^4WY~6Lh9Q}jw#1_ByuQOl0i;hiv zkHMBTx;O3NzaO9eGQIEj*9P91MRfy`Yu3p9jfMx?%9XZ`oUCBy--Dnq=J7NA?hNxG z1I-}kfV7l-7pMGK=ka|0@CRg-Dp_JLid5I1j~Fa$g+L#;Q8oG~RTdIS7HVDg#zQ8j zjZb*G?O0M1)|lddXY1a)a2?*Lx~eEKn@<|-Lua#>s`vz_Vx*D4`t{}b#pJ`~_xBvO zs&yL{6dn6|k3Tmxx_Ytnx6+k);Y<2ra?WHA?@Ua=!s< z{nyumafo%VBPa2Pk9zh;PKE<>tZ=)}au$wN0{(xolqp|JV;liAWxDX9g5RzqbG%bl zA9%#|4|c;a*ipdjx8r~QsQ>)2|JVN9ZtKWk3>c{%{5cAI>_U=AQM@}BwMzxyphjW`vvipLZR|QIeA^ykEu)a6A zBm#w%2G8d83U4EbN`xS)iHV1{Dn1MbhxCv^5$hED2}e%GwUi51ZBE}s#^-wv#?T|i zvv3giHBU>ttrjW8B_q1$s@iUGnT5NB*vgLlnPJ-awEXp#w4EwGpB%Oxj}sDH|FV0N zoLRk1VAis845DRZy-6Hkqi7D*3?f6ER&XJx>Q*+_?ugek<6jX!RcL-lg(HUI`#z z-bMqBudNH55Nk-%E#T{#`h090_TMCaM?WJQ 
zv@%HdMB2KjYUJEh_UZ=1;3PV9#p9i~S16hnxn6g-R7HoBXo-@Z{OIizW|z>UEbw#} z`d8i|VLsTuD-;uAV=WdY&iQC`3?3%W5Ut4|Nu(JoAsrvBboi82lgACe%a;$P>6bC% z8SaNMV7xt=_*Rd0%3i@xY-pk+?VF)Ejo-M2>4w$UKMG0W2KyHY5q_5~U?;=%Z%chH zNyX6V-XwxMx{;EO7R7cC%MCP=9bpPyIlLum^7h$cVJ`O&twq2N?IkShM*{33Stf3JM=Sp%L^M zwB!2a=I+-m-7pnCUMD=mA05VHFrYp(!M>bh;1 z{o^4#lxGcS2M)vD<7lB57TSvj8>M`y9#ClSn|qFzQ>$q$6kS+wFP&%LRg?;etn_An zmvG%+44$iR+VY&C6%N_y_wV2BMj=q?I(mEZSt0$`?ICPqZFHI+A7M7#0hBCYxB>qD z<{lUv#Ohz!0#nXw6Z5yv1yFnnrgm^pniQ6jl4?Fe;&PBhq&JJXY{9VTRSA`czACMD z!y2gvmj+ok&8olXfL@_Hv5c~@C7N7s%>z~VJK=kRXN{oK|jb@<8ez5#pSCXExX zpC`C*CA{>@*Y(2jS96?V#481MT=t!+fPkDmHik!RgLBOp`;(^XE#ueB7#l-gIl%59mfE5goZ4@9sbfNEYb>>>rtMX`9GSt10bl)Ur;Sm$$KfrIy}7 zSVG;L{FCPdn%x|!4JS9K7w8zhqI-&}CH6`J03+Nc{5#(|*AZeNcVyo8rG9Yl-H@t<&h=n`%{a}R2Mm?}wYb%8lS zjKl0RxK)H9mB`3k4H`wA}+mbjxHh3fSpt`${K?Ia{6%;nI zu&rv80-Ngv^9TnG^Fv4|bUy%oC_$+vtps}wy=(H!4$rScBqymL4w4^eM#3Av_JAqqtk-yyJc2AY%F$j zDITnr#f~IT_oI&6_Vy#wQdd($0_;PkO|@&b-y`*SD{<|zx(412NU<6?*H_W&tyL+ z29!_DrQAtRfM->p8|&=Hf4qrZ$6!1xUw)AO`M+EcgHqCqpCA3m>-y(nBCp_D=);*h z|6lmYbOrI3_x|M;ze}Cuy zu^%^&nZx*Kii3sH=@f@ge;*yipmDVzayEu#wOoZQ-0Dn87INUT<8q4rI3`cY@xy&C zhz7huX>Q^6{(wtM9GE&yaw=s;G`o&GZZKszqTj6zoTs2z985`unTMOqQt7n*PinKikqnxrF(BxPkG6!I|w_NNa+*%Ghi-z;c9$8nxn~8_B6y6K1b`O^ zPIEjq>^nX&mz>jGdx>wAr7T3$=OIjCXyOwypqY9w*IdDUNeN`nP!S{4Sqo2N$iT&m z7mY#75dIBA*3*oSq2K~J5+^?9%&9$pa&xDG zJS7A{7kZhl2CoH-LAE+ZCO{j>hkkkv)v(@WIJ){e!EAL;HKlqO7SsJ5Cn& zO13X>iFnI@GG7Q(Ps~-JX?WmpFo^LU^vn_DWiOqtriE_Y)laXogD*gyC!A8aO$#_UGN)N zuUKFR7YNw1Vpl|!`6dlC&Q}-rZBH4Xo(H*TA)jsH1EJD~A{cU9LI(Q%FrV;AfTqJ z2Gdcoh8|dJeieaa3NCEV2gm2kU8s$_u;)T71?3jzaksh#k-0>}`r|Fq|E&;Cj8Amh zuZV+v5yl!-P{L4VA<@x?EF#LwjJUJ-KrzK};tgu1&J7H0ogj0-c46U;t>!cX6u;rX zL+KT!D14C<6?=;-f~!oe7|K69<`ghW^`i_RZK5$BVpQqC_Ea0`OR55B!(_YWjF64Q zOSndtmF5k=&wNWDGI8RclD>EN41cg1DzMofOFJM-KXy~n$DJiGENbGayQMs0x%6GM zXzqiUZwns3{+%{#)x^X^_dTU;N${l@e7)Nb6GgADRsMpcLcO#44zzrIRky%Y?d#x5{8eU;X+N3GDySajZ6dr$ugg8fUD1-^dU-LA@ZU5y}}h$ z8Xm)#@{-NL^^wUPmFrqRR_1EaHEVW2S_05x6F7O-u(fLsZMz8NM-Qd>SbCx|(=>h! z8>62mypUwdR9HhGZsO<;ccg)0vvacKE{GHs=+s>pC*DZw&BgR~3@SG!J)Lqz_&+pn z{?@MRl@|LB9*l-q-mdlsmfr-1Xy2dK?yXKUFHJ)VkAou-y8jqBFTjYC>^`7)VDb%s zYgrM9cjetxauPQ+X@(IpH6j#;hR*UlCwOsx_!m$Iqf4shVQbdSyeyj%XGXO%pH1N4 z5A_ig<8N?AP_2{gn3+%%@Nv%jCS>`do@+1W$PA>z-m(Chj;Ksnj*RyU2*=Z4WMblI zXOMFW zp=#?}9Qwr%Q*jJ*Y`dC=Q~&`7LbJD-X9Wp^k`hs%rqP@Tp` z%fcwVZ}utREOW`9bx#wI7b@>V>VX?cd(y}gLP zx)$5+XqTir9JCCFKd=%U=dAD;nc*?i?w2U{MycHkk>ks(J2^66Vn8H68uj<#RMyol zC|V2P=p>C;EUSne97Jocpg0OzE| zyAeFEOg!&%nCE3tX@K)n$gC*u5Y8fG?Uwlu(~AbKel4Bxd`u;Z^Z33+E|%mNAN;iomIi)&7P)o-ss@p8vHgxr`KaB|bDRtBl{HAG8(QTXN(qTX89#rj1p zU>6E$iy2`|C%_8=++mZaHBdrqk?mcYd$Oy35s1$BMSn9S^EOJvB|KR%ydCC#gmRI! 
zD%7Sc6qiG#Q<2oj$lHjdO5~`-O^MDhF(4uVgAX2WL2j`Q3mIK4SZ`MmKq-Og)ilt& zV9E^7e3A&uU~z9ygTDfwtrJ6ktc!Vs_`(?A10?YcLM+)7&!$B*IJ^5ABU~Kb4)}VF z*9NS$$l3jdwMh{SsSR+(gN&%lAeP5m@II{pHY{9@aDr)~{Y{e|9Xa==0E?Ia;XBlvv=CkB}u!`Eo=V$2*G_W(58nu!kD)^{u%D0{JOcm+>eW0EWU?`u#{kRysGy+rLt-P&?bVbkzv zCb@0QZV;|;3`hUv)mANyvts(N}KmYsh`!glv3@cFPVgbCW1wGu_>J&ftubD zKk=0QKk%6gmji*qJI1271^WCN6M^oddBDj%|6^THa4{DDEFVDnQkatv;h&Ch`eZVzz#PMuU*Q9B4ekzN2z~qE z$>Z$5PE2es`oa2uo@p#>;j~!U8*(6ma^`5$*KT*Rd#5q3NUUE0U9}x;?K%e3u@;hV0JD}Kj@c?5ZbnqbSk(W!MeWI4-wyd=U+1_sS+^VzW0Yi~90cuS4B}=aN zAM;@BqY4gXgl04Ra;U@Ahh+Q>3XwF(wQ1l4R>Gz7o?h4y3?u8XY>I?$7r;nk8p;I` z5Df_n70~(n8Yh#L0Wx|&E};W}H_6@+33ngC;phaATKw^0$9d8n0F@?$bW12jSR>9{ z4OEHlqJ23g0P%>xF&m~9H>Y)>pQH&IXmr6}xtu%_o1Hu-2p2!l+=Mlj;x#i()QomkUbRh|h6|!{=pQMR8~?`q z{sO!krDyn(|HTDpa(9BYC>o;sppbiwSd@cCW{i@P2DwH8#*l)WA+^XLv2mU}&@P-O z(<^)g2x|r#K%1jiEeNAD7H3bwIg_Sk_27kZm`mc_Vz#tHUq#nBueIjV zDrY@HTNqb{+?~)!@J&C1WjDYsHy>v(J<&^ojtmQ7k5B2)FKafVnomYs(87In^1isM zksc#P(40v$s|u674s^@f=(y^GtWUN=e=pd$?V}#&#)lPq>V1V26-)hS+Q$UeA~h|^ ze6>J+XFO^e5)9`C>Lb6R$*0f9B%*g;UrWZc6aXF@{I)IX%YYtmO&~f+XoK~jbt-hA z-~ntj(~yK%%FYZST_YpOel!vQq0&Fnusmc8)oaqrZ^mt`9-VdMTAh@aMzFiXmT zCoP%mV3IS^>JXB6qd9x@b~^f(5(DJCyFXrYh5UGXu#j%N-y-wtw(8~)gVpY21uJTr zr;R36<+E9LQ@|rxcf4d!8!q4wd4&(b*a#8qV`QhhDC<=+TX2vK#T*zP*H$7ty#N4P zi#a6Z;Sbf??uV)uv~F+NgEkki_zK)T_Zc( zi~tbOMsvil4(u>3_yJ0Ev&;D4WT9ydf0oRo0)>`Z=R)WXZUX_x9!RKQ`?xeZ|s*;4FvjDSe z0mJv6jiCDUj+1=-Rfo;09n*ULEg$j<{KaceQhVxjBBfKViaZ{NPv zY({5>%=pnNbfd8xbj194HO=o2)79BX;gkyaq0Iq1#N zC=whrq3Pdd>Ly{n90OB^V?c~V6j&Yx<9o<37D%r?hWWj|ZzS0>(j5bv2(nDaZZ?a5 zPgdm^Ct*M{F|g+D{3zLt0RljA$XJ<%9-`T>qKlvheCXii9HFl-3}D$uX+5nT98EOp z=q$_+)~n%+dj&_S{cv0*r(!@WL9w_e^2q3g;v1np9yv)04$l>h*gG_v5F?=EK?+XDqQ(yd)|n%V zw!!bxHbu2hBd{$aAeZFbgVC5++Afo&lT95dWI%*A9gPwn!U*-w0z zdyJu3CsRCB-gUSWM;PfIeQnV*UIo_&T8{CLmQi#;#Y(+vpOT|xOA^!V0@2g=q#Q+i zf_V$C$Ix{QItj9dkl{R_=(R9*mH+W%S#NzpprhOVF_;LZljNBsu?l~Z^AAnYq_#8- z(jGu{z(+>kFeT%Xb+(IV)(XN#Pr$RY0XTVloJn#@;RIY9vCqZbsK9)kV{0%q**a7G z7Je==Xrv{>kjW?ja~QKCy{6C5XAw6*z#AS%8zH8lMu6_1*+bqAsB+U_No~<)@cT9u zbg+@-(P$~n=HTF9vTcSN*6ccwuvAx3IttX09xr6&+3k^t&`J0j9KqMogH6RfnFkFK zcLS1Ha$kvG99}T>`x&_8k&|a(saZ@cgrP)ra3`NH_^TZQIK4vG+wOgy4v;@;RE{~^ ztcKkh;JkfU=(CemfY%4bA}Q>}`lI}+siL!9H7dWYmBsZE@}kZg^d!T2o{u1&fk@E+ z?KRvV3mkjyA$6hj0gK`;pEY6}8Leyv_MrLFttIka0-P*pX2q9jV}F`!i3npl*Z}ld zGM+Jx9D_C}Kq1SU@qfdr0YC@A1zV24PecsRc&N%7g(=DSTcb$#*NQ5m!Ra5b{^v~9 z5Jxe>v4s7C-+$M>vUEM#?(6hml6+*6YG-QAV5u8%JqcRn{Jvg--xCJaCS7>e9O=gHN2);`WR&5&BWN9`GC58avB(Hvi+9D_!4%+eGgn77ctWJ6W`*9o;r_ zehi?5dW$*S#b`#<|7h>ZOpG-#n8uQ2FlO9mI_G!py?@{P&wbsW*Su(CWWp3nq>S~WKOm5aqyWjyHrxVdWEF7ldd0t#_Y9 zK{eDOm(o4~=5~FBYs`(4wS+zY`tmACR1P|rbu6VVpa5F#p}>I@tJfml#lW>V!*^N< z_Vr7V^Z~0_84DC)&faVhug|zi48(d(S8D2P&1h9R?5KP9R@CW~qRrd*^loc5lr)rF z^E-U^&|`Ge<--pS-r6P7WMaAb)Z^%jWmg~kw(;kVw+@GS9%EfXLqm%P-_#c#7t~MA zbE$I)b)H5S)XuXSz6OOBPQ6hvnOCK*8F{>npYVnep=S)_J+T!7fxLdyZoM z)fRo;<&Rpw;d5$v@13tndlgvWU#~sALFrLTl|{R=|4aOWH~8Ymj$-a-Ge(GeL_ ze)-hFp8qCCH)|ZScyT#xrN&Y0xJZl;-E4Nf)^Po?3UA=sg-r-$u%nPL}l3nayT)wP=9t`U29WF(qk9l}_ zthW0^2&K?_y?ThZ_L-Gh_E+QjzJ7g74RH+1rPHvTuC53PBNRWm6A2e*ZEr8Q?%Ln( zCti%j8dX?mx{P8YSu`Pv;en3=?pO}oUBP1kwm&3y(E@5QVdrf^mWrWq`qir!;z#vS z;c{ZG(8ld0j#sO7I>??oQ@Yt82>BB((Nh4w7>@t3aabCT=7>DB=-T`mW{|K3j=#iR_Wm(me5_OKxPclbyT zduCr3p^?3aZ3`Iiom2Iyw#ajvim_k&){(M5`S|-FN6$lVdE`V%3XAbU4Hr0*?9>^xM0)LpfDugsQetR!i40)|@?5xz_45vtx+PF3*G{7@lfn(8qED#dp z7C{XeN|m$(vzHi=cM3ImsSdgnNl)SNBm6#}`}&@-C$rhQ@S;=F$k93sejYuO`E@B< z7#p!z7yttwY0x^e0jB?yJ|u94EwI`e*J6-xIfh*c5gqjdArg;6MjQXpucM1`bDeZ*2y_S!{z_RrMCZ=#KskW_J5o`>{`&508yRtb@7 
z1U1dQ^zNJ3kamxS+oBD>udfl;ivZ?_e}j3EVfxwYYt+S;qBz%sJL=(F6#!8Q7B!Nc z&)H#v3JZE*#u6`8fMZt542$(8pY=Rq#8Ua*IIWSZ{q`GNhE`6V=N|q%xHhgtgC74q z7WkjStl4Odt`qhvdb(ePj1VufSwCDtNB0fAh6o$^O_8%#i)==tefG*HmxZ2cyjjpd zKq_c>#C*okMA#5hB7l-#b93?u8+y?qC0E{SbJ`NpV9NMq*;JBIhCMyA0CUQwfN6JeW{<;RFDuS~H>e{Nt4`OzEe}86XQ>Qu z-sl>o3crwhYSLzCH3d0)TlDqN3=LX|ckLUTy>CL199HOu_G_yX+8S@7B|>!|8`8+0 zNMP&aUeV>uMdFN#8h|n1tCU6-<>AutWWC8mZHZ1Ejb%->*|(dZWoi9FKUE!krwM0) zl_e#GS0rw^^;8e&F{0EV|1fH~MbbQ< z)G6Ay{09mok?D#>!bzj-J~mhWEoNa-6H7R1pa>Q=@q#qC^`cGV5ZPB5L2p++ovyF- z@Zz@$rK@?~)vDzY!wsI49+=ZmkO~(oZ&8)~PL2=vGa1_j8XwMrvr9dZEOO)|bWf}0 zoET-`W>BF7)xpyND)T_0FyMh+n-@-MYpf<_g-#vgh-b{6oeM2=y4)*o!bzc{>G~2O z**9wY73~1)W-{(Oz7Md&eLr%lH^{HfjYD{#_-F*1hC>>S&RI4x>l+INQ4<$9HOc%4 zVylxev`x+K>FN0%lBG}T0=oZ8e$Y10i zbQ%ZFwL1p6aL_6xx6}0%VBAU0e^28W$dIi3)Q0azLczuOe`h^xbxK4AmkoGl*H8Xn zp8OoqhcOFV99m}$S=-rl!x@@@*JL|HkzD&-gFTlp#>D?Jc+mOG)q#_z zPR+Ps-;I_IuJ%+<8Tf=IMa;$6PcS0a83E8x`Ou8fQ%WId9sM)iOe(1ph+-jCZ6;*o zbR+ni!Ih4nDR@Twv@{if4MTJ0$)@KO%mo*5mW`ST%jS3_h@Wug-=rE0B#|LK)kq{# zhVB?^Nc7tqh>k5i0`>GCoW{OfW1PdBY|$`sIUD~9iJeK82s(5kwJni-55l9fth0X> zL|w2OyK;uA50%xRmwV-!_6OzooVd6+(qODmvSI!qwV2i_uWs5RJb(XzIxiGczn|iU)-%hcL`KQne^@Ih6hjP2fVibEtlUf{#!=>=d`z!G{|+p9sDl$ zohCOQ?x+g@VDfR>q&oua7DI7YanWvHQUx=)hZsg-LWcc*loS1y%PjxE8cbMUhVRRo z#*H1A`QXTV^G&no>iwcZak7*u@C1Krgbw(WkJR~m+#cl(H@E-We*f*+{tr9xJ@^@B zK5pmVs?FuF>kzb(`s?e#fsOy%M|ki7*T3!?joNzfpN3@zZg2fx{qqAGZOOR28Ku2( z-<3NQBkeQ!;`edsK0+^X|5{R1%N)W8hx8eu0_@4XQlN&VF_?P=qJJWcQm#XX&7+Ar z`UotA{*l3|tvZs?{!uRW(2rIhfD;jorwD$(5TNP7L}33ZZdn6<(K($utO$Gf{^Xif z%C3iZA<%TO-k2W0VaFLxkV#|l+__aZfZ@G&+o|+JFd@LstkCRS{2^UEsU&L~n|djx z@GQ!Hz&mIvq{`-V;$yvBsj7g!5NEOb`Xtjc>4@27LGDd)gi*B_RfB2@#4Lb%Fmk~V zpdQeQUeBesD4h9OB_&0Gd?W)UyK3S@VEqko?N+BbeOnIKAOd|0Z&QiQ?zJQxbP6_lgPe z4{|{F41vpe1Whb?bNu56elL$S}j)i2wn^v4ilSyrJ z+|lEzFfPH$Xm?I%kfaCNz@oFLx=-{;eCtDUe8Rk2gn92BYFu|KAk)gcZ2G2kPV-&B z&_2#1_S}ErdA880RC(v_4vOP0*J{cXgIOkXsnyBvTd8^B5rUuT@#LpEhXq)J^RR=J z)?UPrv+WLYmvKn)$B!Q`ejfBZt0IWTLuW(8 z>^V)BCfyndf@k$G6YxLW-c4rB$L9=q7xsGUQOHkcnC{h-M}$$p?gEsGw+?b!u{-Hg zt_vvpIIRm+UTt!*vn&;K{Im8v5mC`(A~oMz+t({2ufS493O3BUI$1Bb0=}fz1Up!r zu7}&oKt}-e_R*_AQ_RiHrI7OC0)n2D?e{irV4u727pQx?N9b-pWRyTdc+?wVEuF4+ zoBVdXT*YIZ%L{-vSjvcSHz`5?P5ca%iRavva`B+w_f7_|S0FHO%JbfN%>vWo;n*gJ z=jMgE&gI7leiaQ3;YG%5>0?{D{cHQP9uNA72dL~1sJ8-%)=%8a%d6l@fh+uY z>Hqpl!O>(rU~O)4V)_jr$Zil~simoUw&0KtLZZ;TumFiYA(1dMS+uR0=~Vg}PNJ6> zOf81K>2~ky@9*mCvtITGZ748p7RW=Bt@BC7IHX0THF_ePSH;^KMpkZe**Ls*;4g0rbF_Svk0Z$ObBp7v(Pdc937A9dNmE>3=akE*xJL0;6EqNg5E{0ae zj1Sk)ZQ444zCEjuO)Ru7euKoeDYO@CsB;v97ebB_*aSKELUUQsp=?^dm$G<=r}US} zo8McQz-6>@Ug%4eQ=f7C1Td@B)jG&$rlh8}5I7Y0;=vbMIi=EXPwmzq-fHIJ+L5Ju zQPY&y#3c~C8n7!G%ri~&q6lqZJIFouqL{dBxqfC3kR-j_m*$1SRXLy}V>d$F(@Sk- zMLVK6sTsc8=<>uUE|QM?VWR4Bl=<==9jT$@e$5XLJkQqWd$uC9a*Qo+ zI9Ufbf*qMkC2i9#(zd&NiTV7p!s~kptX8Mel0S8D$n4%pW7$;5-&Rys1{}CTz8I=jivdrNXDeInV=&|J?$)UC8NLe-FP`7NeS6q>HR8N!v3X1D zr8`g5?>mA<7F<56|Fm{%ympB%-FR~)E)0jPK{i3=KkIGd&`crNrRniw5(#a7nI&2^ zkmg+x@Jet!m8|DzFnd6bPpitKzWjSd+oZnG-8`@)w4pZVW3sM3%Y(et@#IHSi>qC| zf`pw<&8^_T;zZ^)m7LpuytuQMS!?S(RkwOEP+diGMdb_ottLNM8qao>`UH>uCq4f0Vp4^=SKZOlbXpfM_2g77_*#jT(w+8B6_!`8nb0 zjXuw-vLhO?!3cH6!B|)eXd5dc(&-uv@}~}RpU21ZJShTgm{_et5DdWy7@#yLRh6Zs zJA2py?$>-gJzrDkh6pkP0?;OSm>o}+dmP*9G{XELE-aBi5KbVpJR6AS|A`s`JMZ>Y z1|2k3K?`W8TE@!DNhGxRno)iWlo8pU(AE*jqfst%snk-2k1t4FV(hbAnp)YAOswhg zT#g%&cc~uzkxdQR%f~H5?(tRbT3rndF1r00-|Oj)LZOKIQQ&dGX5b3-rE=g!piOac z{FTe#9x}O`9+<2z z)zVR7U7V+vTSi@Jsuh#bL+j-}ZFh#u29%^j*u=zW`T9 z+5v?KC}Ff1Q~LzIgb!KBrkWI(s>oF=AdGBmY#y~b`Jd$K^E`W9>|LUEE%)}P5`nLR z4kkVxRTg6I^^An#M!8y(Q&Q9W)u>VBH)PsJ=W=j$BMgWIX2s@ah244aO9A5A5T}Gj 
[GIT binary patch: base85-encoded data for the new PNG image 527-OpenShift-UDN-isolation-012025.png omitted]
literal 0
HcmV?d00001

diff --git a/modules/nw-udn-cr.adoc b/modules/nw-udn-cr.adoc
index 76e4002d51fa..39112fa10f92 100644
--- a/modules/nw-udn-cr.adoc
+++ b/modules/nw-udn-cr.adoc
@@ -6,7 +6,7 @@
 [id="nw-udn-cr_{context}"]
 = Creating a UserDefinedNetwork custom resource
 
-The following procedure creates a user-defined network that is namespace scoped. Based upon your use case, create your request using either the `my-layer-two-udn.yaml` example for a `Layer2` topology type or the `my-layer-three-udn.yaml` example for a `Layer3` topology type.
+The following procedure creates a user-defined network that is namespace scoped. Based upon your use case, create your request by using either the `my-layer-two-udn.yaml` example for a `Layer2` topology type or the `my-layer-three-udn.yaml` example for a `Layer3` topology type.
 
 //We won't have these pieces till GA in 4.18.
 //[NOTE]
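Neither `my-layer-two-udn.yaml` nor the `UserDefinedNetwork` schema appears in this patch. As a rough orientation sketch only, assuming the `k8s.ovn.org/v1` API group that OVN-Kubernetes uses for this resource, a namespace-scoped `Layer2` request could look like the following; the resource name, namespace, and subnet are illustrative assumptions:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-1                   # illustrative name
  namespace: <target_namespace> # the resource is namespace scoped
spec:
  topology: Layer2              # use Layer3 for the my-layer-three-udn.yaml variant
  layer2:
    role: Primary               # serves as the primary network for pods in this namespace
    subnets:
      - "203.0.113.0/24"        # example subnet for pod IP allocations
----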
diff --git a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc
index 3d8348f0bd9a..bcad31b0230e 100644
--- a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc
+++ b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc
@@ -9,16 +9,20 @@ toc::[]
 :featurename: `UserDefinedNetwork`
 include::snippets/technology-preview.adoc[]
 
-Before the implementation of user-defined networks (UDN), the OVN-Kubernetes CNI plugin only supported a Layer 3 topology on the primary, or main, network that all pods are attached to. This allowed for network models where all pods in the cluster were part of the same global Layer 3 network, but restricted the ability to customize primary network configurations.
+Before the implementation of user-defined networks (UDNs) in the default OVN-Kubernetes CNI plugin for {product-title}, the Kubernetes Layer 3 topology was supported as the primary network, or _main_ network, to which all pods attach. The Kubernetes design principle requires that all pods communicate with each other by their IP addresses, and Kubernetes restricts inter-pod traffic according to the Kubernetes network policy. While the Kubernetes design is useful for simple deployments, the Layer 3 topology restricts customization of primary network segment configurations, especially for modern multi-tenant deployments.
 
-User-defined networks provide cluster administrators and users with highly customizable network configuration options for both primary and secondary network types. With UDNs, administrators can create tailored network topologies with enhanced isolation, IP address management for workloads, and advanced networking features. Supporting both Layer 2 and Layer 3 topology types, UDNs enable a wide range of network architectures and topologies, enhancing network flexibility, security, and performance.
+UDN improves the flexibility and segmentation capabilities of the default Layer 3 topology for a Kubernetes pod network by enabling custom Layer 2, Layer 3, and localnet network segments, where all these segments are isolated by default. These segments act as either primary or secondary networks for container pods and virtual machines that use the default OVN-Kubernetes CNI plugin. UDNs enable a wide range of network architectures and topologies, enhancing network flexibility, security, and performance. You can build a UDN by using a Virtual Router Function (VRF).
+
+The following diagram shows four cluster namespaces, where each namespace has a single assigned UDN, and each UDN has an assigned custom subnet for its pod IP allocations. OVN-Kubernetes handles any overlapping UDN subnets. Without using the Kubernetes network policy, a pod attached to a UDN can communicate with other pods in that UDN. By default, these pods are isolated from communicating with pods that exist in other UDNs. For microsegmentation, you can apply the Kubernetes network policy within a UDN. You can assign one or more UDNs to a namespace, with a limitation of only one primary UDN to a namespace, and one or more namespaces to a UDN.
+
+image::527-OpenShift-UDN-isolation-012025.png[Namespace isolation concept in a user-defined network (UDN)]
 
 [NOTE]
 ====
-* Support for the Localnet topology on both primary and secondary networks will be added in a future version of {product-title}.
+Support for the Localnet topology on both primary and secondary networks will be added in a future version of {product-title}.
 ====
 
-Unlike NADs, which are only namespaced scope, UDNs offer administrators the ability to create and define additional networks spanning multiple namespaces at the cluster level by leveraging the `ClusterUserDefinedNetwork` custom resource (CR). UDNs also offer both administrators and users the ability to define additional networks at the namespace level with the `UserDefinedNetwork` CR.
+Unlike a network attachment definition (NAD), which is only namespace-scoped, a cluster administrator can use a UDN to create and define additional networks that span multiple namespaces at the cluster level by leveraging the `ClusterUserDefinedNetwork` custom resource (CR). Additionally, a cluster administrator or a cluster user can use a UDN to define additional networks at the namespace level with the `UserDefinedNetwork` CR.
 
 The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a `ClusterUserDefinedNetwork` or `UserDefinedNetwork` custom resource, how to create the custom resource, and additional configuration details that might be relevant to your deployment.
 
@@ -31,7 +35,7 @@ The following sections further emphasize the benefits and limitations of user-de
 //** EgressQoS
 //** EgressService
 //** EgressIP
-//** Load balancer and NodePort services, as well as services with external IPs.
+//** Load balancer and NodePort services, and services with external IPs.
 
 //benefits of UDN
 include::modules/nw-udn-benefits.adoc[leveloffset=+1]
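The `ClusterUserDefinedNetwork` CR that the new text references is not shown in this patch either. A minimal sketch, again assuming the `k8s.ovn.org/v1` API group; the name, selector label, and subnet are illustrative assumptions:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: cudn-1 # illustrative name; the resource is cluster scoped, so it has no namespace
spec:
  namespaceSelector: # selects the namespaces that the network spans
    matchLabels:
      example.com/tenant: tenant-a # assumed label; any valid label selector works
  network:
    topology: Layer2
    layer2:
      role: Secondary
      subnets:
        - "198.51.100.0/24" # example subnet
----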
From a7269100341c28fc8f315443936eb00640c8736f Mon Sep 17 00:00:00 2001
From: Brendan Daly
Date: Tue, 11 Feb 2025 08:31:33 +0000
Subject: [PATCH 192/669] OSDOCS-11792_2:parameter text

---
 modules/installation-configuration-parameters.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/installation-configuration-parameters.adoc b/modules/installation-configuration-parameters.adoc
index 2a9af20f0eb5..6609b8af415f 100644
--- a/modules/installation-configuration-parameters.adoc
+++ b/modules/installation-configuration-parameters.adoc
@@ -3519,7 +3519,7 @@ For more information on usage, see "Configuring a failure domain" in "Installing
 |platform:
   nutanix:
     preloadedOSImageName:
-|Instead of creating and uploading a {op-system} image object for each {product-title} cluster, this parameter uses the named, preloaded {op-system} image object from the private cloud or the public cloud.
+|Instead of creating and uploading a {op-system} image object for each {product-title} cluster, this parameter uses the named, preloaded {op-system} image object from the Prism Elements to which the {product-title} cluster is deployed.
 |String
 
 |platform:
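For orientation, the parameter documented above is set under `platform.nutanix` in the `install-config.yaml` file. A minimal sketch, in which the image object name is a hypothetical example:

[source,yaml]
----
platform:
  nutanix:
    # Name of an RHCOS image object that is already preloaded in the target
    # Prism Element; "rhcos-preloaded" is a hypothetical example name.
    preloadedOSImageName: rhcos-preloaded
----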
From 9ff73486e9cb61fbd37bdee94d173fc874c42665 Mon Sep 17 00:00:00 2001
From: Jaromir Hradilek
Date: Tue, 11 Feb 2025 16:36:45 +0100
Subject: [PATCH 193/669] Unified the web console name in virt docs

---
 modules/virt-about-auto-bootsource-updates.adoc          | 2 +-
 modules/virt-defining-apps-for-dr.adoc                   | 6 +++---
 virt/live_migration/virt-about-live-migration.adoc       | 2 +-
 virt/live_migration/virt-configuring-live-migration.adoc | 2 +-
 virt/nodes/virt-node-maintenance.adoc                    | 2 +-
 5 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/modules/virt-about-auto-bootsource-updates.adoc b/modules/virt-about-auto-bootsource-updates.adoc
index 7ac395ffb649..948962070e28 100644
--- a/modules/virt-about-auto-bootsource-updates.adoc
+++ b/modules/virt-about-auto-bootsource-updates.adoc
@@ -15,4 +15,4 @@ When the `enableCommonBootImageImport` feature gate is disabled, `DataSource` ob
 
 _Custom_ boot sources that are not provided by {VirtProductName} are not controlled by the feature gate. You must manage them individually by editing the `HyperConverged` custom resource (CR). You can also use this method to manage individual system-defined boot sources.
 
-Cluster administrators can enable automatic subscription for {op-system-base-full} virtual machines in the {VirtProductName} web console.
\ No newline at end of file
+Cluster administrators can enable automatic subscription for {op-system-base-full} virtual machines in the {product-title} web console.
\ No newline at end of file
diff --git a/modules/virt-defining-apps-for-dr.adoc b/modules/virt-defining-apps-for-dr.adoc
index 2c52cf29eb24..65dd7449e9e8 100644
--- a/modules/virt-defining-apps-for-dr.adoc
+++ b/modules/virt-defining-apps-for-dr.adoc
@@ -33,14 +33,14 @@ Use the pod `pullMethod: node` when creating a data volume from a registry sourc
 [id="best-practices-{rh-rhacm}-discovered-vm_{context}"]
 == Best practices when defining an {rh-rhacm}-discovered VM
 
-You can configure any VM in the cluster that is not an {rh-rhacm}-managed application as an {rh-rhacm}-discovered application. This includes VMs imported by using the Migration Toolkit for Virtualization (MTV), VMs created by using the {VirtProductName} web console, or VMs created by any other means, such as the CLI.
+You can configure any VM in the cluster that is not an {rh-rhacm}-managed application as an {rh-rhacm}-discovered application. This includes VMs imported by using the Migration Toolkit for Virtualization (MTV), VMs created by using the {product-title} web console, or VMs created by any other means, such as the CLI.
 
 There are several actions you can take to improve your experience and chance of success when defining an {rh-rhacm}-discovered VM.
 
 [discrete]
 [id="protect-the-vm_{context}"]
-=== Protect the VM when using MTV, the {VirtProductName} web console, or a custom VM
-Because automatic labeling is not currently available, the application owner must manually label the components of the VM application when using MTV, the {VirtProductName} web console, or a custom VM.
+=== Protect the VM when using MTV, the {product-title} web console, or a custom VM
+Because automatic labeling is not currently available, the application owner must manually label the components of the VM application when using MTV, the {product-title} web console, or a custom VM.
 
 After creating the VM, apply a common label to the following resources associated with the VM: `VirtualMachine`, `DataVolume`, `PersistentVolumeClaim`, `Service`, `Route`, `Secret`, `ConfigMap`, `VirtualMachinePreference`, and `VirtualMachineInstancetype`. Do not label virtual machine instances (VMIs) or pods; {VirtProductName} creates and manages these automatically.
 
diff --git a/virt/live_migration/virt-about-live-migration.adoc b/virt/live_migration/virt-about-live-migration.adoc
index b93767ba10e7..57ee19c35499 100644
--- a/virt/live_migration/virt-about-live-migration.adoc
+++ b/virt/live_migration/virt-about-live-migration.adoc
@@ -42,7 +42,7 @@ You can perform the following live migration tasks:
 * xref:../../virt/live_migration/virt-configuring-live-migration.adoc#virt-configuring-live-migration[Configure live migration settings]
 * xref:../../virt/live_migration/virt-configuring-live-migration.adoc#virt-configuring-live-migration-heavy_virt-configuring-live-migration[Configure live migration for heavy workloads]
 * xref:../../virt/live_migration/virt-initiating-live-migration.adoc#virt-initiating-live-migration[Initiate and cancel live migration]
-* Monitor the progress of all live migrations in the *Migration* tab of the {virtproductname} web console.
+* Monitor the progress of all live migrations in the *Migration* tab of the {product-title} web console.
 * View VM migration metrics in the *Metrics* tab of the web console.
 
 
diff --git a/virt/live_migration/virt-configuring-live-migration.adoc b/virt/live_migration/virt-configuring-live-migration.adoc
index 49674206851a..2637a93e73af 100644
--- a/virt/live_migration/virt-configuring-live-migration.adoc
+++ b/virt/live_migration/virt-configuring-live-migration.adoc
@@ -26,7 +26,7 @@ You can create live migration policies to apply different migration configuratio
 
 [TIP]
 ====
-You can create live migration policies by using the {VirtProductName} web console.
+You can create live migration policies by using the {product-title} web console.
 ====
 
 include::modules/virt-configuring-a-live-migration-policy.adoc[leveloffset=+2]
diff --git a/virt/nodes/virt-node-maintenance.adoc b/virt/nodes/virt-node-maintenance.adoc
index 83075222a328..04a2acfd9489 100644
--- a/virt/nodes/virt-node-maintenance.adoc
+++ b/virt/nodes/virt-node-maintenance.adoc
@@ -38,7 +38,7 @@ VM eviction strategy::
 The VM `LiveMigrate` eviction strategy ensures that a virtual machine instance (VMI) is not interrupted if the node is placed into maintenance or drained. VMIs with this eviction strategy will be live migrated to another node.
 +
-You can configure eviction strategies for virtual machines (VMs) by using the {VirtProductName} web console or the xref:../../virt/live_migration/virt-configuring-live-migration.adoc#virt-configuring-a-live-migration-policy_virt-configuring-live-migration[command line].
+You can configure eviction strategies for virtual machines (VMs) by using the {product-title} web console or the xref:../../virt/live_migration/virt-configuring-live-migration.adoc#virt-configuring-a-live-migration-policy_virt-configuring-live-migration[command line].
 +
 [IMPORTANT]
 ====
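The disaster-recovery module above lists the resources to label but does not show a command. One possible way to apply a common label across those resources, sketched with hypothetical resource names and a hypothetical label key:

[source,terminal]
----
$ oc label -n <namespace> vm/my-vm datavolume/my-vm-disk pvc/my-vm-disk dr-protection/group=my-vm-app
----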
From 9bdb54e7612ac04fe608ef70db0c7e49aca434c0 Mon Sep 17 00:00:00 2001
From: Michael Burke
Date: Wed, 29 Jan 2025 17:59:48 -0500
Subject: [PATCH 194/669] OSDOCS:12999 Misleading Example of Autoscaling for Image-Registry Deployment

---
 modules/nodes-pods-autoscaling-about.adoc        | 26 +++++++++----------
 .../nodes-pods-autoscaling-creating-cpu.adoc     |  4 +--
 2 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/modules/nodes-pods-autoscaling-about.adoc b/modules/nodes-pods-autoscaling-about.adoc
index 7ebcdb319f8d..64e561830c81 100644
--- a/modules/nodes-pods-autoscaling-about.adoc
+++ b/modules/nodes-pods-autoscaling-about.adoc
@@ -66,26 +66,26 @@ and ensure that your application meets these requirements before using
 memory-based autoscaling.
 ====
 
-The following example shows autoscaling for the `image-registry` `Deployment` object. The initial deployment requires 3 pods. The HPA object increases the minimum to 5. If CPU usage on the pods reaches 75%, the pods increase to 7:
+The following example shows autoscaling for the `hello-node` `Deployment` object. The initial deployment requires 3 pods. The HPA object increases the minimum to 5. If CPU usage on the pods reaches 75%, the pods increase to 7:
 
 [source,terminal]
 ----
-$ oc autoscale deployment/image-registry --min=5 --max=7 --cpu-percent=75
+$ oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75
 ----
 
 .Example output
 [source,terminal]
 ----
-horizontalpodautoscaler.autoscaling/image-registry autoscaled
+horizontalpodautoscaler.autoscaling/hello-node autoscaled
 ----
 
-.Sample HPA for the `image-registry` `Deployment` object with `minReplicas` set to 3
+.Sample YAML to create an HPA for the `hello-node` deployment object with `minReplicas` set to 3
 [source,yaml]
 ----
 apiVersion: autoscaling/v1
 kind: HorizontalPodAutoscaler
 metadata:
-  name: image-registry
+  name: hello-node
   namespace: default
 spec:
   maxReplicas: 7
   minReplicas: 3
   scaleTargetRef:
     apiVersion: apps/v1
     kind: Deployment
-    name: image-registry
+    name: hello-node
   targetCPUUtilizationPercentage: 75
 status:
   currentReplicas: 5
   desiredReplicas: 0
 ----
 
-. View the new state of the deployment:
-+
+After you create the HPA, you can view the new state of the deployment by running the following command:
+
 [source,terminal]
 ----
-$ oc get deployment image-registry
+$ oc get deployment hello-node
 ----
-+
+
 There are now 5 pods in the deployment:
-+
+
 .Example output
 [source,terminal]
 ----
-NAME             REVISION   DESIRED   CURRENT   TRIGGERED BY
-image-registry   1          5         5         config
+NAME         REVISION   DESIRED   CURRENT   TRIGGERED BY
+hello-node   1          5         5         config
 ----
diff --git a/modules/nodes-pods-autoscaling-creating-cpu.adoc b/modules/nodes-pods-autoscaling-creating-cpu.adoc
index 875eb4b6f870..85f0ec5b7472 100644
--- a/modules/nodes-pods-autoscaling-creating-cpu.adoc
+++ b/modules/nodes-pods-autoscaling-creating-cpu.adoc
@@ -76,11 +76,11 @@ $ oc autoscale <object_type>/<name> \// <1>
 <3> Specify the maximum number of replicas when scaling up.
 <4> Specify the target average CPU utilization over all the pods, represented as a percent of requested CPU. If not specified or negative, a default autoscaling policy is used.
 +
-For example, the following command shows autoscaling for the `image-registry` `Deployment` object. The initial deployment requires 3 pods. The HPA object increases the minimum to 5. If CPU usage on the pods reaches 75%, the pods will increase to 7:
+For example, the following command shows autoscaling for the `hello-node` deployment object. The initial deployment requires 3 pods. The HPA object increases the minimum to 5. If CPU usage on the pods reaches 75%, the pods will increase to 7:
 +
 [source,terminal]
 ----
-$ oc autoscale deployment/image-registry --min=5 --max=7 --cpu-percent=75
+$ oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75
 ----
 
 ** To scale for a specific CPU value, create a YAML file similar to the following for an existing object:
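After either the command or the manifest above is applied, the state of the HPA itself can be checked with standard `oc` commands; these are not part of the patch:

[source,terminal]
----
$ oc get hpa hello-node
$ oc describe hpa hello-node
----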
From 535c8bc655a3773d63bd74d982e0c52afe4e217a Mon Sep 17 00:00:00 2001
From: Ronan Hennessy
Date: Mon, 10 Feb 2025 12:55:16 +0000
Subject: [PATCH 195/669] TELCODOCS-2072: Updating variables and hardcoded versions for Telco content 4.18 GA

---
 _attributes/common-attributes.adoc                        | 2 +-
 modules/telco-update-acknowledging-the-update.adoc        | 2 +-
 ...-update-acknowledging-the-y-stream-release-update.adoc | 3 ++-
 modules/ztp-sno-accelerated-ztp.adoc                      | 2 +-
 modules/ztp-sno-siteconfig-config-reference.adoc          | 8 ++------
 snippets/ztp_example-sno.yaml                             | 2 +-
 6 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/_attributes/common-attributes.adoc b/_attributes/common-attributes.adoc
index ba9b387a0c14..5ebb94508579 100644
--- a/_attributes/common-attributes.adoc
+++ b/_attributes/common-attributes.adoc
@@ -56,7 +56,7 @@ endif::[]
 :rh-rhacm-title: Red Hat Advanced Cluster Management
 :rh-rhacm-first: Red Hat Advanced Cluster Management (RHACM)
 :rh-rhacm: RHACM
-:rh-rhacm-version: 2.11
+:rh-rhacm-version: 2.12
 :osc: OpenShift sandboxed containers
 :cert-manager-operator: cert-manager Operator for Red Hat OpenShift
 :secondary-scheduler-operator-full: Secondary Scheduler Operator for Red Hat OpenShift
diff --git a/modules/telco-update-acknowledging-the-update.adoc b/modules/telco-update-acknowledging-the-update.adoc
index 945397aac435..d2a1fddef48a 100644
--- a/modules/telco-update-acknowledging-the-update.adoc
+++ b/modules/telco-update-acknowledging-the-update.adoc
@@ -11,7 +11,7 @@ When you update to all versions from 4.11 and later, you must manually acknowled
 [IMPORTANT]
 ====
 Before you acknowledge the update, verify that there are no Kubernetes APIs in use that are removed in the version you are updating to.
-For example, in {product-title} 4.17, there are no API removals.
+For example, in {product-title} 4.18, there are no API removals.
 See "Kubernetes API removals" for more information.
 ====
 
diff --git a/modules/telco-update-acknowledging-the-y-stream-release-update.adoc b/modules/telco-update-acknowledging-the-y-stream-release-update.adoc
index f49b25bc099b..419d1afc57b5 100644
--- a/modules/telco-update-acknowledging-the-y-stream-release-update.adoc
+++ b/modules/telco-update-acknowledging-the-y-stream-release-update.adoc
@@ -12,7 +12,8 @@ In the output of the `oc adm upgrade` command, a URL is provided that shows the
 [IMPORTANT]
 ====
 Before you acknowledge the update, verify that there are no Kubernetes APIs in use that are removed in the version you are updating to.
-For example, in {product-title} 4.17, there are no API removals. See "Kubernetes API removals" for more information.
+For example, in {product-title} 4.18, there are no API removals.
+See "Kubernetes API removals" for more information.
 ====
 
 .Procedure
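Both modules describe a manual acknowledgment. In practice this is done by patching the `admin-acks` config map in the `openshift-config` namespace, as in the following sketch. The gate name is a placeholder, because the exact key for a given update is reported in the `oc adm upgrade` output:

[source,terminal]
----
$ oc -n openshift-config patch cm admin-acks --patch '{"data":{"<ack_gate_name>":"true"}}' --type=merge
----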
diff --git a/modules/ztp-sno-accelerated-ztp.adoc b/modules/ztp-sno-accelerated-ztp.adoc
index 134ddefe2bd9..bbc1c138c532 100644
--- a/modules/ztp-sno-accelerated-ztp.adoc
+++ b/modules/ztp-sno-accelerated-ztp.adoc
@@ -34,7 +34,7 @@ spec:
   baseDomain: "example.com"
   pullSecretRef:
     name: "assisted-deployment-pull-secret"
-  clusterImageSetNameRef: "openshift-4.10"
+  clusterImageSetNameRef: "openshift-4.18"
   sshPublicKey: "ssh-rsa AAAA..."
   clusters:
   # ...
diff --git a/modules/ztp-sno-siteconfig-config-reference.adoc b/modules/ztp-sno-siteconfig-config-reference.adoc
index 48dd19ed804e..4e9803bc63a0 100644
--- a/modules/ztp-sno-siteconfig-config-reference.adoc
+++ b/modules/ztp-sno-siteconfig-config-reference.adoc
@@ -16,11 +16,6 @@ a|Configure workload partitioning by setting the value for `cpuPartitioningMode`
 to `AllNodes`. To complete the configuration, specify the `isolated` and `reserved` CPUs in the `PerformanceProfile` CR.
 
-[NOTE]
-====
-Configuring workload partitioning by using the `cpuPartitioningMode` field in the `SiteConfig` CR is a Tech Preview feature in {product-title} 4.13.
-====
-
 |`metadata.name`
 |Set `name` to `assisted-deployment-pull-secret` and create the `assisted-deployment-pull-secret` CR in the same namespace as the `SiteConfig` CR.
 
@@ -51,9 +46,10 @@ For example, `acmpolicygenerator/acm-common-ranGen.yaml` applies to all clusters
 |`spec.clusters.diskEncryption`
 a|Configure this field to enable disk encryption with Trusted Platform Module (TPM) and Platform Configuration Registers (PCRs) protection. For more information, see "About disk encryption with TPM and PCR protection".
+
 [NOTE]
 ====
-Configuring disk encryption by using the `diskEncryption` field in the `SiteConfig` CR is a Technology Preview feature in {product-title} 4.17.
+Configuring disk encryption by using the `diskEncryption` field in the `SiteConfig` CR is a Technology Preview feature in {product-title} 4.18.
 ====
 
 |`spec.clusters.diskEncryption.type`
diff --git a/snippets/ztp_example-sno.yaml b/snippets/ztp_example-sno.yaml
index 252eba507204..79dce7cb533c 100644
--- a/snippets/ztp_example-sno.yaml
+++ b/snippets/ztp_example-sno.yaml
@@ -9,7 +9,7 @@ spec:
   baseDomain: "example.com"
   pullSecretRef:
     name: "assisted-deployment-pull-secret"
-  clusterImageSetNameRef: "openshift-4.16"
+  clusterImageSetNameRef: "openshift-4.18"
   sshPublicKey: "ssh-rsa AAAA..."
   clusters:
   - clusterName: "example-sno"
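As a rough guide to how the `spec.clusters.diskEncryption` rows above translate into a `SiteConfig` CR, consider the following sketch. Only the `diskEncryption` and `type` field names come from the reference table; the `tpm2` value is an assumption based on the TPM protection the note describes:

[source,yaml]
----
clusters:
  - clusterName: "example-sno"
    diskEncryption:
      type: "tpm2" # assumed value; see the disk encryption documentation for supported types
----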
From 71e5489027bfa657a3b9d6811ecb379848 Mon Sep 17 00:00:00 2001
From: Michael Burke
Date: Wed, 12 Feb 2025 08:15:42 -0500
Subject: [PATCH 196/669] MCO replace TP note in on-clusterlayering docs

---
 modules/coreos-layering-configuring-on.adoc | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/modules/coreos-layering-configuring-on.adoc b/modules/coreos-layering-configuring-on.adoc
index dfc565cac966..298964aaa78a 100644
--- a/modules/coreos-layering-configuring-on.adoc
+++ b/modules/coreos-layering-configuring-on.adoc
@@ -21,8 +21,13 @@ You should not need to interact with these new objects or the `machine-os-builde
 
 You need a separate `MachineOSConfig` CR for each machine config pool where you want to use a custom layered image.
 
+:FeatureName: On-cluster image layering
+include::snippets/technology-preview.adoc[]
+
 .Prerequisites
 
+* You have enabled the `TechPreviewNoUpgrade` feature set by using the feature gates. For more information, see "Enabling features using feature gates".
+
 * You have a copy of the global pull secret in the `openshift-machine-config-operator` namespace that the MCO needs in order to pull the base operating system image.
 
 * You have a copy of the `etc-pki-entitlement` secret in the `openshift-machine-api` namespace.
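The `TechPreviewNoUpgrade` prerequisite added above maps to the cluster-wide `FeatureGate` resource. A minimal example follows; note that enabling this feature set cannot be undone and prevents cluster updates:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade # enables Technology Preview features such as on-cluster layering
----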
From 7ee5ec8fc9cf77b1980a8e84814c2599155dd2a1 Mon Sep 17 00:00:00 2001
From: Andrea Hoffer
Date: Thu, 6 Feb 2025 08:57:04 -0500
Subject: [PATCH 197/669] OSDOCS#12867: Noting that SNO can't have workloads running to shut down

---
 modules/graceful-shutdown.adoc | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/modules/graceful-shutdown.adoc b/modules/graceful-shutdown.adoc
index 6c9b85449726..5cd7eafff67f 100644
--- a/modules/graceful-shutdown.adoc
+++ b/modules/graceful-shutdown.adoc
@@ -17,6 +17,7 @@ You can shut down a cluster until a year from the installation date and expect i
 
 * You have access to the cluster as a user with the `cluster-admin` role.
 * You have taken an etcd backup.
+* If you are running a {sno} cluster, you must evacuate all workload pods off of the cluster before you shut it down.
 
 .Procedure
 
@@ -77,7 +78,7 @@ Ensure that the control plane node with the API VIP assigned is the last node pr
 $ for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/${node} -- chroot /host shutdown -h 1; done <1>
 ----
 +
-<1> `-h 1` indicates how long, in minutes, this process lasts before the control plane nodes are shut down. For large-scale clusters with 10 nodes or more, set to `-h 10` or longer to make sure all the compute nodes have time to shut down first. 
+<1> `-h 1` indicates how long, in minutes, this process lasts before the control plane nodes are shut down. For large-scale clusters with 10 nodes or more, set to `-h 10` or longer to make sure all the compute nodes have time to shut down first.
 +
 .Example output
 [source,terminal]
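The new prerequisite does not prescribe how to evacuate workload pods from a {sno} cluster. One possible approach, assuming the workloads are ordinary deployments that can safely be scaled to zero before the shutdown:

[source,terminal]
----
$ oc scale deployment --all --replicas=0 -n <workload_namespace>
----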
From 66303f0351399e2f14668ccee9ae33e2161435f0 Mon Sep 17 00:00:00 2001
From: John Wilkins
Date: Tue, 11 Feb 2025 15:31:49 -0800
Subject: [PATCH 198/669] Added IPI to network customizations table.

Signed-off-by: John Wilkins
---
 installing/overview/installing-preparing.adoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/installing/overview/installing-preparing.adoc b/installing/overview/installing-preparing.adoc
index faf36c8dfdbe..2a3919f6b0aa 100644
--- a/installing/overview/installing-preparing.adoc
+++ b/installing/overview/installing-preparing.adoc
@@ -181,8 +181,8 @@ ifndef::openshift-origin[]
 |xref:../../installing/installing_gcp/installing-gcp-network-customizations.adoc#installing-gcp-network-customizations[✓]
 |
 |
-|
-|
+|xref:../../installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc#configuring-host-network-interfaces-in-the-install-config-yaml-file_ipi-install-installation-workflow[✓]
+|xref:../../installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc#configuring-host-network-interfaces-in-the-install-config-yaml-file_ipi-install-installation-workflow[✓]
 |xref:../../installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc#installing-vsphere-installer-provisioned-network-customizations[✓]
 |xref:../../installing/installing_ibm_cloud/installing-ibm-cloud-network-customizations.adoc#installing-ibm-cloud-network-customizations[✓]
 |
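The two new table cells link to host network interface configuration in the bare-metal `install-config.yaml` file. A sketch of what such a `networkConfig` stanza can look like; it uses NMState syntax, and the interface name and addresses are illustrative:

[source,yaml]
----
hosts:
  - name: openshift-master-0
    role: master
    networkConfig: # NMState-format host network configuration
      interfaces:
        - name: enp2s0 # illustrative interface name
          type: ethernet
          state: up
          ipv4:
            enabled: true
            dhcp: false
            address:
              - ip: 192.0.2.10 # example address
                prefix-length: 24
----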
From 3b01490a1461b298cf1b129e29948d49f5142443 Mon Sep 17 00:00:00 2001
From: mletalie
Date: Tue, 11 Feb 2025 14:18:05 -0500
Subject: [PATCH 199/669] openshiftAI

---
 adding_service_cluster/rosa-available-services.adoc | 4 ++--
 modules/rosa-rhods.adoc                             | 6 +++---
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/adding_service_cluster/rosa-available-services.adoc b/adding_service_cluster/rosa-available-services.adoc
index d81a769cfa5f..657744c6115b 100644
--- a/adding_service_cluster/rosa-available-services.adoc
+++ b/adding_service_cluster/rosa-available-services.adoc
@@ -38,5 +38,5 @@ include::modules/rosa-rhods.adoc[leveloffset=+1]
 [role="_additional-resources"]
 .Additional resources
 
-* link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_science/1[Red{nbsp}Hat OpenShift Data Science] documentation
-* link:https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-data-science[Red{nbsp}Hat OpenShift Data Science] product page
+* link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025[Red{nbsp}Hat OpenShift AI] documentation
+* link:https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-ai[Red{nbsp}Hat OpenShift AI] product page
diff --git a/modules/rosa-rhods.adoc b/modules/rosa-rhods.adoc
index 37bd694972f8..59dc9e5dfc21 100644
--- a/modules/rosa-rhods.adoc
+++ b/modules/rosa-rhods.adoc
@@ -2,7 +2,7 @@
 //
 // * adding_service_cluster/rosa-available-services.adoc
 :_mod-docs-content-type: CONCEPT
-[id="rosa-rhods_{context}"]
-= Red{nbsp}Hat OpenShift Data Science
+[id="rosa-AI_{context}"]
+= Red{nbsp}Hat OpenShift AI
 
-{rhods} (RHODS) enables users to integrate data and AI and machine learning software to run end-to-end machine learning workflows. It provides a collection of notebook images with the tools and libraries required to develop and deploy data models. This allows data scientists to easily develop data models, integrate models into applications, and deploy applications using Red{nbsp}Hat OpenShift. RHODS is available as an add-on to Red{nbsp}Hat managed environments such as {osd} and {product-title} (ROSA).
+Red Hat OpenShift AI enables users to integrate data and AI and machine learning software to run end-to-end machine learning workflows. It provides a collection of notebook images with the tools and libraries required to develop and deploy data models. This allows data scientists to easily develop data models, integrate models into applications, and deploy applications using Red{nbsp}Hat OpenShift. OpenShift AI is available as an add-on to Red{nbsp}Hat managed environments such as {osd} and {product-title} (ROSA).
\ No newline at end of file

From 3747fdc246c74974bd1e1a90e7aa9a24c082727a Mon Sep 17 00:00:00 2001
From: Jeana Routh
Date: Tue, 21 Jan 2025 15:59:56 -0500
Subject: [PATCH 200/669] OSDOCS-5774: Azure CAPI TP

---
 _topic_maps/_topic_map.yml                    |  2 +
 ...capi-creating-infrastructure-resource.adoc |  1 +
 .../capi-yaml-infrastructure-aws.adoc         | 33 +++++++++++
 .../capi-yaml-infrastructure-azure.adoc       | 51 +++++++++++++++++
 .../capi-yaml-infrastructure-gcp.adoc         | 36 ++++++++++++
 .../capi-yaml-infrastructure-vsphere.adoc     | 18 +++---
 .../cluster-api-about.adoc                    |  2 +-
 .../cluster-api-configuration.adoc            |  2 +
 .../cluster-api-getting-started.adoc          | 12 +---
 .../cluster-api-managing-machines.adoc        |  2 +
 .../cluster-api-config-options-aws.adoc       |  3 -
 .../cluster-api-config-options-azure.adoc     | 30 ++++++++++
 .../cluster-api-config-options-gcp.adoc       |  3 -
 .../cluster-api-config-options-vsphere.adoc   |  3 -
 modules/capi-arch-resources.adoc              |  4 +-
 modules/capi-creating-cluster-resource.adoc   |  1 +
 modules/capi-creating-machine-set.adoc        |  3 +-
 modules/capi-creating-machine-template.adoc   |  3 +-
 modules/capi-limitations.adoc                 |  4 +-
 modules/capi-modifying-machine-template.adoc  |  1 +
 modules/capi-yaml-cluster.adoc                |  1 +
 modules/capi-yaml-infrastructure-aws.adoc     | 29 ----------
 modules/capi-yaml-infrastructure-gcp.adoc     | 32 -----------
 modules/capi-yaml-machine-set-aws.adoc        | 21 ++++---
 modules/capi-yaml-machine-set-azure.adoc      | 50 ++++++++++++++++
 modules/capi-yaml-machine-set-gcp.adoc        | 25 +++++---
 modules/capi-yaml-machine-set-vsphere.adoc    | 27 +++++----
 modules/capi-yaml-machine-template-azure.adoc | 57 +++++++++++++++++++
 modules/cluster-capi-operator.adoc            |  2 +-
 29 files changed, 338 insertions(+), 120 deletions(-)
 rename {modules => _unused_topics}/capi-creating-infrastructure-resource.adoc (97%)
 create mode 100644 _unused_topics/capi-yaml-infrastructure-aws.adoc
 create mode 100644 _unused_topics/capi-yaml-infrastructure-azure.adoc
 create mode 100644 _unused_topics/capi-yaml-infrastructure-gcp.adoc
 rename {modules => _unused_topics}/capi-yaml-infrastructure-vsphere.adoc (53%)
 create mode 100644 machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-azure.adoc
 delete mode 100644 modules/capi-yaml-infrastructure-aws.adoc
 delete mode 100644 modules/capi-yaml-infrastructure-gcp.adoc
 create mode 100644 modules/capi-yaml-machine-set-azure.adoc
 create mode 100644 modules/capi-yaml-machine-template-azure.adoc

diff --git a/modules/capi-creating-infrastructure-resource.adoc b/_unused_topics/capi-creating-infrastructure-resource.adoc
similarity index 97%
rename from modules/capi-creating-infrastructure-resource.adoc
rename to _unused_topics/capi-creating-infrastructure-resource.adoc
index c1c2df11dc8a..1528fb4b775b 100644
--- a/modules/capi-creating-infrastructure-resource.adoc
+++ b/_unused_topics/capi-creating-infrastructure-resource.adoc
@@ -46,6 +46,7 @@ This value must match the value for your platform.
 The following values are valid:
 * `AWSCluster`: The cluster is running on {aws-short}.
 * `GCPCluster`: The cluster is running on {gcp-short}.
+* `AzureCluster`: The cluster is running on {azure-first}.
 * `VSphereCluster`: The cluster is running on {vmw-short}.
 <3> Specify the name of the cluster.
 <4> Specify the details for your environment.
diff --git a/_unused_topics/capi-yaml-infrastructure-aws.adoc b/_unused_topics/capi-yaml-infrastructure-aws.adoc
new file mode 100644
index 000000000000..adaf51ac5ab9
--- /dev/null
+++ b/_unused_topics/capi-yaml-infrastructure-aws.adoc
@@ -0,0 +1,33 @@
+// Module included in the following assemblies:
+//
+// * machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-aws.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="capi-yaml-infrastructure-aws_{context}"]
+= Sample YAML for a Cluster API infrastructure cluster resource on {aws-full}
+
+The infrastructure cluster resource is provider-specific and defines properties that all the compute machine sets in the cluster share, such as the region and subnets.
+The compute machine set references this resource when creating machines.
+
+In {product-title} {product-version}, the {cluster-capi-operator} generates this resource.
+The following sample YAML file is for informational purposes.
+User modification of this generated resource is not recommended.
+
+[source,yaml]
+----
+apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
+kind: AWSCluster # <1>
+metadata:
+  name: <cluster_name> # <2>
+  namespace: openshift-cluster-api
+spec:
+  controlPlaneEndpoint: # <3>
+    host: <control_plane_endpoint_address>
+    port: 6443
+  region: <region> # <4>
+----
+<1> Specifies the infrastructure kind for the cluster.
+This value matches the value for your platform.
+<2> Specifies the cluster ID as the name of the cluster.
+<3> Specifies the address of the control plane endpoint and the port to use to access it.
+<4> Specifies the {aws-short} region.
\ No newline at end of file
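Because the {cluster-capi-operator} generates these infrastructure cluster resources, they are meant to be inspected rather than edited. Assuming the Technology Preview Cluster API components are enabled, the generated resource can be viewed with a standard `oc` query:

[source,terminal]
----
$ oc get awscluster -n openshift-cluster-api
----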
+ +[source,yaml] +---- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: AzureCluster # <1> +metadata: + name: # <2> + namespace: openshift-cluster-api +spec: + azureEnvironment: AzurePublicCloud + bastionSpec: {} + controlPlaneEndpoint: # <3> + host: + port: 6443 + identityRef: # <4> + kind: AzureClusterIdentity + name: + namespace: openshift-cluster-api + location: westus # <5> + networkSpec: + apiServerLB: + backendPool: {} + nodeOutboundLB: + backendPool: + name: + name: + vnet: + name: -vnet + resourceGroup: -rg + resourceGroup: -rg +---- +<1> Specifies the infrastructure kind for the cluster. +This value matches the value for your platform. +<2> Specifies the cluster ID as the name of the cluster. +<3> Specifies the address of the control plane endpoint and the port to use to access it. +<4> The cluster identity that the {cluster-capi-operator} creates. +<5> Specifies the {azure-short} region. \ No newline at end of file diff --git a/_unused_topics/capi-yaml-infrastructure-gcp.adoc b/_unused_topics/capi-yaml-infrastructure-gcp.adoc new file mode 100644 index 000000000000..42bbe1450495 --- /dev/null +++ b/_unused_topics/capi-yaml-infrastructure-gcp.adoc @@ -0,0 +1,36 @@ +// Module included in the following assemblies: +// +// * machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-gcp.adoc + +:_mod-docs-content-type: REFERENCE +[id="capi-yaml-infrastructure-gcp_{context}"] += Sample YAML for a Cluster API infrastructure cluster resource on {gcp-full} + +The infrastructure cluster resource is provider-specific and defines properties that all the compute machine sets in the cluster share, such as the region and subnets. +The compute machine set references this resource when creating machines. + +In {product-title} {product-version}, the {cluster-capi-operator} generates this resource. +The following sample YAML file is for informational purposes. +User modification of this generated resource is not recommended. + +[source,yaml] +---- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: GCPCluster # <1> +metadata: + name: # <2> +spec: + controlPlaneEndpoint: # <3> + host: + port: 6443 + network: + name: -network + project: # <4> + region: # <5> +---- +<1> Specifies the infrastructure kind for the cluster. +This value matches the value for your platform. +<2> Specifies the cluster ID as the name of the cluster. +<3> Specifies the IP address of the control plane endpoint and the port used to access it. +<4> Specifies the {gcp-short} project name. +<5> Specifies the {gcp-short} region. \ No newline at end of file diff --git a/modules/capi-yaml-infrastructure-vsphere.adoc b/_unused_topics/capi-yaml-infrastructure-vsphere.adoc similarity index 53% rename from modules/capi-yaml-infrastructure-vsphere.adoc rename to _unused_topics/capi-yaml-infrastructure-vsphere.adoc index 403af3240647..bd213de70fbc 100644 --- a/modules/capi-yaml-infrastructure-vsphere.adoc +++ b/_unused_topics/capi-yaml-infrastructure-vsphere.adoc @@ -4,11 +4,15 @@ :_mod-docs-content-type: REFERENCE [id="capi-yaml-infrastructure-vsphere_{context}"] -= Sample YAML for a Cluster API infrastructure resource on {vmw-full} += Sample YAML for a Cluster API infrastructure cluster resource on {vmw-full} -The infrastructure resource is provider-specific and defines properties that are shared by all the compute machine sets in the cluster, such as the region and subnets. 
+The infrastructure cluster resource is provider-specific and defines properties that all the compute machine sets in the cluster share, such as the region and subnets.
 The compute machine set references this resource when creating machines.
 
+In {product-title} {product-version}, the {cluster-capi-operator} generates this resource.
+The following sample YAML file is for informational purposes.
+User modification of this generated resource is not recommended.
+
 [source,yaml]
 ----
 apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
@@ -24,11 +28,11 @@ spec:
      name: 
  server:  # <4>
----
-<1> Specify the infrastructure kind for the cluster.
-This value must match the value for your platform.
-<2> Specify the cluster ID as the name of the cluster.
-<3> Specify the IP address of the control plane endpoint and the port used to access it.
-<4> Specify the {vmw-short} server for the cluster.
+<1> Specifies the infrastructure kind for the cluster.
+This value matches the value for your platform.
+<2> Specifies the cluster ID as the name of the cluster.
+<3> Specifies the IP address of the control plane endpoint and the port used to access it.
+<4> Specifies the {vmw-short} server for the cluster.
 You can find this value on an existing {vmw-short} cluster by running the following command:
 +
 [source,terminal]
diff --git a/machine_management/cluster_api_machine_management/cluster-api-about.adoc b/machine_management/cluster_api_machine_management/cluster-api-about.adoc
index 6fa385b29864..2cc10b40739a 100644
--- a/machine_management/cluster_api_machine_management/cluster-api-about.adoc
+++ b/machine_management/cluster_api_machine_management/cluster-api-about.adoc
@@ -9,7 +9,7 @@ toc::[]
 :FeatureName: Managing machines with the Cluster API
 include::snippets/technology-preview.adoc[]
 
-The link:https://cluster-api.sigs.k8s.io/[Cluster API] is an upstream project that is integrated into {product-title} as a Technology Preview for {aws-first}, {gcp-first}, and {vmw-first}.
+The link:https://cluster-api.sigs.k8s.io/[Cluster API] is an upstream project that is integrated into {product-title} as a Technology Preview for {aws-first}, {gcp-first}, {azure-first}, and {vmw-first}.
//Cluster API overview include::modules/capi-overview.adoc[leveloffset=+1] diff --git a/machine_management/cluster_api_machine_management/cluster-api-configuration.adoc b/machine_management/cluster_api_machine_management/cluster-api-configuration.adoc index 088719c6fc9b..8b67a9b6d47d 100644 --- a/machine_management/cluster_api_machine_management/cluster-api-configuration.adoc +++ b/machine_management/cluster_api_machine_management/cluster-api-configuration.adoc @@ -24,4 +24,6 @@ For provider-specific configuration options for your cluster, see the following * xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-gcp.adoc#cluster-api-config-options-gcp[Cluster API configuration options for {gcp-full}] +* xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-azure.adoc#cluster-api-config-options-azure[Cluster API configuration options for {azure-full}] + * xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-vsphere.adoc#cluster-api-config-options-vsphere[Cluster API configuration options for {vmw-full}] \ No newline at end of file diff --git a/machine_management/cluster_api_machine_management/cluster-api-getting-started.adoc b/machine_management/cluster_api_machine_management/cluster-api-getting-started.adoc index d1a4748f996f..0b82e2871b53 100644 --- a/machine_management/cluster_api_machine_management/cluster-api-getting-started.adoc +++ b/machine_management/cluster_api_machine_management/cluster-api-getting-started.adoc @@ -9,7 +9,7 @@ toc::[] :FeatureName: Managing machines with the Cluster API include::snippets/technology-preview.adoc[] -For the Cluster API Technology Preview, you must create the primary resources that the Cluster API requires manually. +For the Cluster API Technology Preview, you must manually create some of the primary resources that the Cluster API requires. 
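As context, the Cluster API Technology Preview depends on the `TechPreviewNoUpgrade` feature set. The following command is a sketch, not part of the assembly above; it assumes you accept that enabling this feature set cannot be undone and prevents minor version updates:

[source,terminal]
----
$ oc patch featuregate cluster --type=merge -p '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'
----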
[id="creating-primary-resources_{context}"] == Creating the Cluster API primary resources @@ -27,20 +27,13 @@ include::modules/capi-creating-cluster-resource.adoc[leveloffset=+2] .Additional resources * xref:../../machine_management/cluster_api_machine_management/cluster-api-configuration.adoc#cluster-api-configuration[Cluster API configuration] -//Creating a Cluster API infrastructure resource -include::modules/capi-creating-infrastructure-resource.adoc[leveloffset=+2] -[role="_additional-resources"] -.Additional resources -* xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-aws.adoc#capi-yaml-infrastructure-aws_cluster-api-config-options-aws[Sample YAML for a Cluster API infrastructure resource on {aws-full}] -* xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-gcp.adoc#capi-yaml-infrastructure-gcp_cluster-api-config-options-gcp[Sample YAML for a Cluster API infrastructure resource on {gcp-full}] -* xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-vsphere.adoc#capi-yaml-infrastructure-vsphere_cluster-api-config-options-vsphere[Sample YAML for a Cluster API infrastructure resource on {vmw-full}] - //Creating a Cluster API machine template include::modules/capi-creating-machine-template.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources * xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-aws.adoc#capi-yaml-machine-template-aws_cluster-api-config-options-aws[Sample YAML for a Cluster API machine template resource on {aws-full}] * xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-gcp.adoc#capi-yaml-machine-template-gcp_cluster-api-config-options-gcp[Sample YAML for a Cluster API machine template resource on {gcp-full}] +* xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-azure.adoc#capi-yaml-machine-template-azure_cluster-api-config-options-azure[Sample YAML for a Cluster API machine template resource on {azure-full}] * xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-vsphere.adoc#capi-yaml-machine-template-vsphere_cluster-api-config-options-vsphere[Sample YAML for a Cluster API machine template resource on {vmw-full}] //Creating a Cluster API compute machine set @@ -49,4 +42,5 @@ include::modules/capi-creating-machine-set.adoc[leveloffset=+2] .Additional resources * xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-aws.adoc#capi-yaml-machine-set-aws_cluster-api-config-options-aws[Sample YAML for a Cluster API compute machine set resource on {aws-full}] * xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-gcp.adoc#capi-yaml-machine-set-gcp_cluster-api-config-options-gcp[Sample YAML for a Cluster API compute machine set resource on {gcp-full}] +* xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-azure.adoc#capi-yaml-machine-set-azure_cluster-api-config-options-azure[Sample YAML for a Cluster API compute machine set resource on {azure-full}] * 
xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-vsphere.adoc#capi-yaml-machine-set-vsphere_cluster-api-config-options-vsphere[Sample YAML for a Cluster API compute machine set resource on {vmw-full}] \ No newline at end of file diff --git a/machine_management/cluster_api_machine_management/cluster-api-managing-machines.adoc b/machine_management/cluster_api_machine_management/cluster-api-managing-machines.adoc index 606fcad55482..9f0e6f59b884 100644 --- a/machine_management/cluster_api_machine_management/cluster-api-managing-machines.adoc +++ b/machine_management/cluster_api_machine_management/cluster-api-managing-machines.adoc @@ -15,6 +15,7 @@ include::modules/capi-modifying-machine-template.adoc[leveloffset=+1] .Additional resources * xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-aws.adoc#capi-yaml-machine-template-aws_cluster-api-config-options-aws[Sample YAML for a Cluster API machine template resource on {aws-full}] * xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-gcp.adoc#capi-yaml-machine-template-gcp_cluster-api-config-options-gcp[Sample YAML for a Cluster API machine template resource on {gcp-full}] +* xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-azure.adoc#capi-yaml-machine-template-azure_cluster-api-config-options-azure[Sample YAML for a Cluster API machine template resource on {azure-full}] * xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-vsphere.adoc#capi-yaml-machine-template-vsphere_cluster-api-config-options-vsphere[Sample YAML for a Cluster API machine template resource on {vmw-full}] * xref:../../machine_management/cluster_api_machine_management/cluster-api-managing-machines.adoc#machineset-modifying_cluster-api-managing-machines[Modifying a compute machine set by using the CLI] @@ -25,4 +26,5 @@ include::modules/machineset-modifying.adoc[leveloffset=+1,tag=!MAPI] .Additional resources * xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-aws.adoc#capi-yaml-machine-set-aws_cluster-api-config-options-aws[Sample YAML for a Cluster API compute machine set resource on {aws-full}] * xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-gcp.adoc#capi-yaml-machine-set-gcp_cluster-api-config-options-gcp[Sample YAML for a Cluster API compute machine set resource on {gcp-full}] +* xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-azure.adoc#capi-yaml-machine-set-azure_cluster-api-config-options-azure[Sample YAML for a Cluster API compute machine set resource on {azure-full}] * xref:../../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-vsphere.adoc#capi-yaml-machine-set-vsphere_cluster-api-config-options-vsphere[Sample YAML for a Cluster API compute machine set resource on {vmw-full}] \ No newline at end of file diff --git a/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-aws.adoc 
b/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-aws.adoc index b2da6c6ca4eb..df66bd621976 100644 --- a/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-aws.adoc +++ b/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-aws.adoc @@ -16,9 +16,6 @@ You can change the configuration of your {aws-first} Cluster API machines by upd The following example YAML files show configurations for an {aws-full} cluster. -//Sample YAML for a CAPI AWS infrastructure resource -include::modules/capi-yaml-infrastructure-aws.adoc[leveloffset=+2] - //Sample YAML for CAPI AWS machine template resource include::modules/capi-yaml-machine-template-aws.adoc[leveloffset=+2] diff --git a/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-azure.adoc b/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-azure.adoc new file mode 100644 index 000000000000..ffe0cd25028b --- /dev/null +++ b/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-azure.adoc @@ -0,0 +1,30 @@ +:_mod-docs-content-type: ASSEMBLY +[id="cluster-api-config-options-azure"] += Cluster API configuration options for {azure-full} +include::_attributes/common-attributes.adoc[] +:context: cluster-api-config-options-azure + +toc::[] + +:FeatureName: Managing machines with the Cluster API +include::snippets/technology-preview.adoc[] + +You can change the configuration of your {azure-first} Cluster API machines by updating values in the Cluster API custom resource manifests. + +[id="cluster-api-sample-yaml-azure_{context}"] +== Sample YAML for configuring {azure-full} clusters + +The following example YAML files show configurations for an {azure-short} cluster. + +//Sample YAML for CAPI Azure machine template resource +include::modules/capi-yaml-machine-template-azure.adoc[leveloffset=+2] + +//Sample YAML for a CAPI Azure compute machine set resource +include::modules/capi-yaml-machine-set-azure.adoc[leveloffset=+2] + +// [id="cluster-api-supported-features-azure_{context}"] +// == Enabling {azure-full} features with the Cluster API + +// You can enable the following features by updating values in the Cluster API custom resource manifests. + +//Not sure what, if anything, we can add here at this time. \ No newline at end of file diff --git a/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-gcp.adoc b/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-gcp.adoc index b9e5c23ecb0c..7b3ea376a30a 100644 --- a/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-gcp.adoc +++ b/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-gcp.adoc @@ -16,9 +16,6 @@ You can change the configuration of your {gcp-first} Cluster API machines by upd The following example YAML files show configurations for a {gcp-full} cluster. 
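Because the infrastructure cluster resource is now generated by the {cluster-capi-operator} rather than created manually, a live cluster can be checked with a read-only query. This command is a sketch that assumes a Technology Preview cluster on {gcp-short}:

[source,terminal]
----
$ oc get gcpclusters.infrastructure.cluster.x-k8s.io -n openshift-cluster-api
----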
-//Sample YAML for a CAPI GCP infrastructure resource -include::modules/capi-yaml-infrastructure-gcp.adoc[leveloffset=+2] - //Sample YAML for CAPI GCP machine template resource include::modules/capi-yaml-machine-template-gcp.adoc[leveloffset=+2] diff --git a/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-vsphere.adoc b/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-vsphere.adoc index 98fa4cfcb6a5..43fe1ed67e9c 100644 --- a/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-vsphere.adoc +++ b/machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-vsphere.adoc @@ -16,9 +16,6 @@ You can change the configuration of your {vmw-first} Cluster API machines by upd The following example YAML files show configurations for a {vmw-full} cluster. -//Sample YAML for a CAPI vSphere infrastructure resource -include::modules/capi-yaml-infrastructure-vsphere.adoc[leveloffset=+2] - //Sample YAML for CAPI vSphere machine template resource include::modules/capi-yaml-machine-template-vsphere.adoc[leveloffset=+2] diff --git a/modules/capi-arch-resources.adoc b/modules/capi-arch-resources.adoc index 88b2221349b5..5fe588308f97 100644 --- a/modules/capi-arch-resources.adoc +++ b/modules/capi-arch-resources.adoc @@ -6,11 +6,11 @@ [id="capi-arch-resources_{context}"] = Cluster API primary resources -The Cluster API consists of the following primary resources. For the Technology Preview of this feature, you must create these resources manually in the `openshift-cluster-api` namespace. +The Cluster API consists of the following primary resources. For the Technology Preview of this feature, you must create some of these resources manually in the `openshift-cluster-api` namespace. Cluster:: A fundamental unit that represents a cluster that is managed by the Cluster API. -Infrastructure:: A provider-specific resource that defines properties that are shared by all the compute machine sets in the cluster, such as the region and subnets. +Infrastructure cluster:: A provider-specific resource that defines properties that all of the compute machine sets in the cluster share, such as the region and subnets. Machine template:: A provider-specific template that defines the properties of the machines that a compute machine set creates. diff --git a/modules/capi-creating-cluster-resource.adoc b/modules/capi-creating-cluster-resource.adoc index 793d18157045..83a208255e2c 100644 --- a/modules/capi-creating-cluster-resource.adoc +++ b/modules/capi-creating-cluster-resource.adoc @@ -45,6 +45,7 @@ The following values are valid: + * `AWSCluster`: The cluster is running on {aws-first}. * `GCPCluster`: The cluster is running on {gcp-first}. +* `AzureCluster`: The cluster is running on {azure-first}. * `VSphereCluster`: The cluster is running on {vmw-first}. -- diff --git a/modules/capi-creating-machine-set.adoc b/modules/capi-creating-machine-set.adoc index 7b46a53b8a87..dce60e98e310 100644 --- a/modules/capi-creating-machine-set.adoc +++ b/modules/capi-creating-machine-set.adoc @@ -18,7 +18,7 @@ You can create compute machine sets that use the Cluster API to dynamically mana * You have installed the {oc-first}. -* You have created the cluster, infrastructure, and machine template resources. +* You have created the cluster and machine template resources. 
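As a sketch to complement the procedure that follows (not part of this module): after the compute machine set is applied, the Cluster API resources can be listed to confirm that machines are being created:

[source,terminal]
----
$ oc get machinesets.cluster.x-k8s.io -n openshift-cluster-api
$ oc get machines.cluster.x-k8s.io -n openshift-cluster-api
----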
.Procedure @@ -46,6 +46,7 @@ spec: # ... ---- <1> Specify a name for the compute machine set. +The cluster ID, machine role, and region form a typical pattern for this value in the following format: `--`. <2> Specify the name of the cluster. <3> Specify the details for your environment. These parameters are provider specific. For more information, see the sample Cluster API compute machine set YAML for your provider. -- diff --git a/modules/capi-creating-machine-template.adoc b/modules/capi-creating-machine-template.adoc index 0591043ccd2d..d1bb435fcd93 100644 --- a/modules/capi-creating-machine-template.adoc +++ b/modules/capi-creating-machine-template.adoc @@ -18,7 +18,7 @@ You can create a provider-specific machine template resource by creating a YAML * You have installed the {oc-first}. -* You have created and applied the cluster and infrastructure resources. +* You have created and applied the cluster resource. .Procedure @@ -39,6 +39,7 @@ spec: <1> Specify the machine template kind. This value must match the value for your platform. The following values are valid: * `AWSMachineTemplate`: The cluster is running on {aws-first}. * `GCPMachineTemplate`: The cluster is running on {gcp-first}. +* `AzureMachineTemplate`: The cluster is running on {azure-first}. * `VSphereMachineTemplate`: The cluster is running on {vmw-first}. <2> Specify a name for the machine template. <3> Specify the details for your environment. These parameters are provider specific. For more information, see the sample Cluster API machine template YAML for your provider. diff --git a/modules/capi-limitations.adoc b/modules/capi-limitations.adoc index d15bbbf6a5d4..5ff0ad950e0b 100644 --- a/modules/capi-limitations.adoc +++ b/modules/capi-limitations.adoc @@ -15,9 +15,9 @@ Using the Cluster API to manage machines is a Technology Preview feature and has Enabling this feature set cannot be undone and prevents minor version updates. ==== -* Only {aws-first}, {gcp-first}, and {vmw-first} clusters can use the Cluster API. +* Only {aws-first}, {gcp-first}, {azure-first}, and {vmw-first} clusters can use the Cluster API. -* You must manually create the primary resources that the Cluster API requires. +* You must manually create some of the primary resources that the Cluster API requires. For more information, see "Getting started with the Cluster API". * You cannot use the Cluster API to manage control plane machines. diff --git a/modules/capi-modifying-machine-template.adoc b/modules/capi-modifying-machine-template.adoc index 8e6a8afda7a5..076dc76760d0 100644 --- a/modules/capi-modifying-machine-template.adoc +++ b/modules/capi-modifying-machine-template.adoc @@ -28,6 +28,7 @@ $ oc get <1> <1> Specify the value that corresponds to your platform. The following values are valid: * `AWSMachineTemplate`: The cluster is running on {aws-first}. * `GCPMachineTemplate`: The cluster is running on {gcp-first}. +* `AzureMachineTemplate`: The cluster is running on {azure-first}. * `VSphereMachineTemplate`: The cluster is running on {vmw-first}. -- + diff --git a/modules/capi-yaml-cluster.adoc b/modules/capi-yaml-cluster.adoc index 7f26d8515c42..3308adc2f4eb 100644 --- a/modules/capi-yaml-cluster.adoc +++ b/modules/capi-yaml-cluster.adoc @@ -34,5 +34,6 @@ Valid values are: -- * `AWSCluster`: The cluster is running on {aws-full}. * `GCPCluster`: The cluster is running on {gcp-full}. +* `AzureCluster`: The cluster is running on {azure-full}. * `VSphereCluster`: The cluster is running on {vmw-full}. 
-- \ No newline at end of file diff --git a/modules/capi-yaml-infrastructure-aws.adoc b/modules/capi-yaml-infrastructure-aws.adoc deleted file mode 100644 index cfb20233b206..000000000000 --- a/modules/capi-yaml-infrastructure-aws.adoc +++ /dev/null @@ -1,29 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-aws.adoc - -:_mod-docs-content-type: REFERENCE -[id="capi-yaml-infrastructure-aws_{context}"] -= Sample YAML for a Cluster API infrastructure resource on {aws-full} - -The infrastructure resource is provider-specific and defines properties that are shared by all the compute machine sets in the cluster, such as the region and subnets. -The compute machine set references this resource when creating machines. - -[source,yaml] ----- -apiVersion: infrastructure.cluster.x-k8s.io/v1beta2 -kind: AWSCluster # <1> -metadata: - name: # <2> - namespace: openshift-cluster-api -spec: - controlPlaneEndpoint: # <3> - host: - port: 6443 - region: # <4> ----- -<1> Specify the infrastructure kind for the cluster. -This value must match the value for your platform. -<2> Specify the cluster ID as the name of the cluster. -<3> Specify the address of the control plane endpoint and the port to use to access it. -<4> Specify the {aws-short} region. \ No newline at end of file diff --git a/modules/capi-yaml-infrastructure-gcp.adoc b/modules/capi-yaml-infrastructure-gcp.adoc deleted file mode 100644 index 96d0a3733118..000000000000 --- a/modules/capi-yaml-infrastructure-gcp.adoc +++ /dev/null @@ -1,32 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-gcp.adoc - -:_mod-docs-content-type: REFERENCE -[id="capi-yaml-infrastructure-gcp_{context}"] -= Sample YAML for a Cluster API infrastructure resource on {gcp-full} - -The infrastructure resource is provider-specific and defines properties that are shared by all the compute machine sets in the cluster, such as the region and subnets. -The compute machine set references this resource when creating machines. - -[source,yaml] ----- -apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 -kind: GCPCluster # <1> -metadata: - name: # <2> -spec: - controlPlaneEndpoint: # <3> - host: - port: 6443 - network: - name: -network - project: # <4> - region: # <5> ----- -<1> Specify the infrastructure kind for the cluster. -This value must match the value for your platform. -<2> Specify the cluster ID as the name of the cluster. -<3> Specify the IP address of the control plane endpoint and the port used to access it. -<4> Specify the {gcp-short} project name. -<5> Specify the {gcp-short} region. \ No newline at end of file diff --git a/modules/capi-yaml-machine-set-aws.adoc b/modules/capi-yaml-machine-set-aws.adoc index 468aea4cbea0..adb5604cd6e1 100644 --- a/modules/capi-yaml-machine-set-aws.adoc +++ b/modules/capi-yaml-machine-set-aws.adoc @@ -7,7 +7,7 @@ = Sample YAML for a Cluster API compute machine set resource on {aws-full} The compute machine set resource defines additional properties of the machines that it creates. -The compute machine set also references the infrastructure resource and machine template when creating machines. +The compute machine set also references the cluster resource and machine template when creating machines. 
[source,yaml] ---- @@ -16,28 +16,35 @@ kind: MachineSet metadata: name: # <1> namespace: openshift-cluster-api + labels: + cluster.x-k8s.io/cluster-name: # <2> spec: clusterName: # <2> replicas: 1 selector: matchLabels: test: example + cluster.x-k8s.io/cluster-name: + cluster.x-k8s.io/set-name: template: metadata: labels: test: example + cluster.x-k8s.io/cluster-name: + cluster.x-k8s.io/set-name: + node-role.kubernetes.io/: "" spec: bootstrap: - dataSecretName: worker-user-data # <3> + dataSecretName: worker-user-data clusterName: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 - kind: AWSMachineTemplate # <4> - name: # <5> + kind: AWSMachineTemplate # <3> + name: # <4> ---- <1> Specify a name for the compute machine set. +The cluster ID, machine role, and region form a typical pattern for this value in the following format: `--`. <2> Specify the cluster ID as the name of the cluster. -<3> For the Cluster API Technology Preview, the Operator can use the worker user data secret from the `openshift-machine-api` namespace. -<4> Specify the machine template kind. +<3> Specify the machine template kind. This value must match the value for your platform. -<5> Specify the machine template name. +<4> Specify the machine template name. diff --git a/modules/capi-yaml-machine-set-azure.adoc b/modules/capi-yaml-machine-set-azure.adoc new file mode 100644 index 000000000000..566ac534d919 --- /dev/null +++ b/modules/capi-yaml-machine-set-azure.adoc @@ -0,0 +1,50 @@ +// Module included in the following assemblies: +// +// * machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-azure.adoc + +:_mod-docs-content-type: REFERENCE +[id="capi-yaml-machine-set-azure_{context}"] += Sample YAML for a Cluster API compute machine set resource on {azure-full} + +The compute machine set resource defines additional properties of the machines that it creates. +The compute machine set also references the cluster resource and machine template when creating machines. + +[source,yaml] +---- +apiVersion: cluster.x-k8s.io/v1beta1 +kind: MachineSet +metadata: + name: # <1> + namespace: openshift-cluster-api + labels: + cluster.x-k8s.io/cluster-name: # <2> +spec: + clusterName: + replicas: 1 + selector: + matchLabels: + test: example + cluster.x-k8s.io/cluster-name: + cluster.x-k8s.io/set-name: + template: + metadata: + labels: + test: example + cluster.x-k8s.io/cluster-name: + cluster.x-k8s.io/set-name: + node-role.kubernetes.io/: "" + spec: + bootstrap: + dataSecretName: worker-user-data + clusterName: + infrastructureRef: + apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 + kind: AzureMachineTemplate # <3> + name: # <4> +---- +<1> Specify a name for the compute machine set. +The cluster ID, machine role, and region form a typical pattern for this value in the following format: `--`. +<2> Specify the cluster ID as the name of the cluster. +<3> Specify the machine template kind. +This value must match the value for your platform. +<4> Specify the machine template name. \ No newline at end of file diff --git a/modules/capi-yaml-machine-set-gcp.adoc b/modules/capi-yaml-machine-set-gcp.adoc index 784629644233..400a83ee2bd5 100644 --- a/modules/capi-yaml-machine-set-gcp.adoc +++ b/modules/capi-yaml-machine-set-gcp.adoc @@ -7,7 +7,7 @@ = Sample YAML for a Cluster API compute machine set resource on {gcp-full} The compute machine set resource defines additional properties of the machines that it creates. 
-The compute machine set also references the infrastructure resource and machine template when creating machines. +The compute machine set also references the cluster resource and machine template when creating machines. [source,yaml] ---- @@ -16,30 +16,37 @@ kind: MachineSet metadata: name: # <1> namespace: openshift-cluster-api + labels: + cluster.x-k8s.io/cluster-name: # <2> spec: clusterName: # <2> replicas: 1 selector: matchLabels: test: example + cluster.x-k8s.io/cluster-name: + cluster.x-k8s.io/set-name: template: metadata: labels: test: example + cluster.x-k8s.io/cluster-name: + cluster.x-k8s.io/set-name: + node-role.kubernetes.io/: "" spec: bootstrap: - dataSecretName: worker-user-data # <3> + dataSecretName: worker-user-data clusterName: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 - kind: GCPMachineTemplate # <4> - name: # <5> - failureDomain: # <6> + kind: GCPMachineTemplate # <3> + name: # <4> + failureDomain: # <5> ---- <1> Specify a name for the compute machine set. +The cluster ID, machine role, and region form a typical pattern for this value in the following format: `--`. <2> Specify the cluster ID as the name of the cluster. -<3> For the Cluster API Technology Preview, the Operator can use the worker user data secret from the `openshift-machine-api` namespace. -<4> Specify the machine template kind. +<3> Specify the machine template kind. This value must match the value for your platform. -<5> Specify the machine template name. -<6> Specify the failure domain within the {gcp-short} region. \ No newline at end of file +<4> Specify the machine template name. +<5> Specify the failure domain within the {gcp-short} region. \ No newline at end of file diff --git a/modules/capi-yaml-machine-set-vsphere.adoc b/modules/capi-yaml-machine-set-vsphere.adoc index afdc2ad61f78..58aac856d60c 100644 --- a/modules/capi-yaml-machine-set-vsphere.adoc +++ b/modules/capi-yaml-machine-set-vsphere.adoc @@ -7,7 +7,7 @@ = Sample YAML for a Cluster API compute machine set resource on {vmw-full} The compute machine set resource defines additional properties of the machines that it creates. -The compute machine set also references the infrastructure resource and machine template when creating machines. +The compute machine set also references the cluster resource and machine template when creating machines. [source,yaml] ---- @@ -16,25 +16,32 @@ kind: MachineSet metadata: name: # <1> namespace: openshift-cluster-api + labels: + cluster.x-k8s.io/cluster-name: # <2> spec: clusterName: # <2> replicas: 1 selector: matchLabels: test: example + cluster.x-k8s.io/cluster-name: + cluster.x-k8s.io/set-name: template: metadata: labels: test: example + cluster.x-k8s.io/cluster-name: + cluster.x-k8s.io/set-name: + node-role.kubernetes.io/: "" spec: bootstrap: - dataSecretName: worker-user-data # <3> + dataSecretName: worker-user-data clusterName: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 - kind: VSphereMachineTemplate # <4> - name: # <5> - failureDomain: # <6> + kind: VSphereMachineTemplate # <3> + name: # <4> + failureDomain: # <5> - name: region: zone: @@ -48,17 +55,17 @@ spec: - port-group ---- <1> Specify a name for the compute machine set. +The cluster ID, machine role, and region form a typical pattern for this value in the following format: `--`. <2> Specify the cluster ID as the name of the cluster. -<3> For the Cluster API Technology Preview, the Operator can use the worker user data secret from the `openshift-machine-api` namespace. 
-<4> Specify the machine template kind. +<3> Specify the machine template kind. This value must match the value for your platform. -<5> Specify the machine template name. -<6> Specify the failure domain configuration details. +<4> Specify the machine template name. +<5> Specify the failure domain configuration details. + [NOTE] ==== Using multiple regions and zones on a {vmw-short} cluster that uses the Cluster API is not a validated configuration. ==== // This callout section can be updated if this configuration is validated. (see also: additional resources in cluster-api-config-options-vsphere.adoc) -// <6> Specify one or more failure domains. +// <5> Specify one or more failure domains. // For more information about specifying multiple regions and zones on a {vmw-short} cluster, see "Multiple regions and zones configuration for a cluster on {vmw-full}." \ No newline at end of file diff --git a/modules/capi-yaml-machine-template-azure.adoc b/modules/capi-yaml-machine-template-azure.adoc new file mode 100644 index 000000000000..9121bd4766b2 --- /dev/null +++ b/modules/capi-yaml-machine-template-azure.adoc @@ -0,0 +1,57 @@ +// Module included in the following assemblies: +// +// * machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-azure.adoc + +:_mod-docs-content-type: REFERENCE +[id="capi-yaml-machine-template-azure_{context}"] += Sample YAML for a Cluster API machine template resource on {azure-full} + +The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. +The compute machine set references this template when creating machines. + +[source,yaml] +---- +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: AzureMachineTemplate # <1> +metadata: + name: # <2> + namespace: openshift-cluster-api +spec: + template: + spec: # <3> + disableExtensionOperations: true + identity: UserAssigned + image: + id: /subscriptions//resourceGroups/-rg/providers/Microsoft.Compute/galleries/gallery_/images/-gen2/versions/latest # <4> + networkInterfaces: + - acceleratedNetworking: true + privateIPConfigs: 1 + subnetName: -worker-subnet + osDisk: + diskSizeGB: 128 + managedDisk: + storageAccountType: Premium_LRS + osType: Linux + sshPublicKey: + userAssignedIdentities: + - providerID: 'azure:///subscriptions//resourcegroups/-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/-identity' + vmSize: Standard_D4s_v3 +---- +<1> Specify the machine template kind. +This value must match the value for your platform. +<2> Specify a name for the machine template. +<3> Specify the details for your environment. +The values here are examples. +<4> Specify an image that is compatible with your instance type. +The Hyper-V generation V2 images created by the installation program have a `-gen2` suffix, while V1 images have the same name without the suffix. ++ +[NOTE] +==== +Default {product-title} cluster names contain hyphens (`-`), which are not compatible with {azure-short} gallery name requirements. +The value of `` in this configuration must use underscores (`_`) instead of hyphens to comply with these requirements. +Other instances of `` do not change. + +For example, a cluster name of `jdoe-test-2m2np` transforms to `jdoe_test_2m2np`. +The full string for `gallery_` in this example is `gallery_jdoe_test_2m2np`, not `gallery_jdoe-test-2m2np`. 
+The complete value of `spec.template.spec.image.id` for this example value is `/subscriptions//resourceGroups/jdoe-test-2m2np-rg/providers/Microsoft.Compute/galleries/gallery_jdoe_test_2m2np/images/jdoe-test-2m2np-gen2/versions/latest`. +==== \ No newline at end of file diff --git a/modules/cluster-capi-operator.adoc b/modules/cluster-capi-operator.adoc index f99d0703bc90..c0bb6f46f53a 100644 --- a/modules/cluster-capi-operator.adoc +++ b/modules/cluster-capi-operator.adoc @@ -7,7 +7,7 @@ [NOTE] ==== -This Operator is available as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] for {aws-first}, {gcp-first}, and {vmw-first} clusters. +This Operator is available as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] for {aws-first}, {gcp-first}, {azure-first}, and {vmw-first} clusters. ==== [discrete] From 20db6a0269e4a7f9a907896f11846d5d47c37fe0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E2=80=9CShauna=20Diaz=E2=80=9D?= Date: Thu, 6 Feb 2025 08:39:31 -0500 Subject: [PATCH 201/669] OSDOCS-11312: adds enhancements and nit fixes to auto recovery MicroShift --- ...microshift-auto-recover-manual-backup.adoc | 23 ++--- ...t-auto-recovery-example-bootc-systems.adoc | 93 +++++++++++++++++++ ...auto-recovery-example-ostree-systems.adoc} | 35 ++++--- ...ft-auto-recovery-example-rpm-systems.adoc} | 19 ++-- ...croshift-auto-recovery-manual-backups.adoc | 22 +++++ ...hift-automation-example-bootc-systems.adoc | 32 ------- ...shift-creating-backups-auto-recovery.adoc} | 19 ++-- ...hift-restoring-backups-auto-recovery.adoc} | 37 +++++--- 8 files changed, 186 insertions(+), 94 deletions(-) create mode 100644 modules/microshift-auto-recovery-example-bootc-systems.adoc rename modules/{microshift-automation-example-ostree-systems.adoc => microshift-auto-recovery-example-ostree-systems.adoc} (65%) rename modules/{microshift-automation-example-rpm-systems.adoc => microshift-auto-recovery-example-rpm-systems.adoc} (75%) create mode 100644 modules/microshift-auto-recovery-manual-backups.adoc delete mode 100644 modules/microshift-automation-example-bootc-systems.adoc rename modules/{microshift-creating-backups.adoc => microshift-creating-backups-auto-recovery.adoc} (86%) rename modules/{microshift-restoring-backups.adoc => microshift-restoring-backups-auto-recovery.adoc} (82%) diff --git a/microshift_backup_and_restore/microshift-auto-recover-manual-backup.adoc b/microshift_backup_and_restore/microshift-auto-recover-manual-backup.adoc index e2aab42ce764..87048afbbe0a 100644 --- a/microshift_backup_and_restore/microshift-auto-recover-manual-backup.adoc +++ b/microshift_backup_and_restore/microshift-auto-recover-manual-backup.adoc @@ -6,25 +6,16 @@ include::_attributes/attributes-microshift.adoc[] toc::[] -You can automatically restore data from manual backups when {microshift-short} fails to start by using the `auto-recovery` feature. +You can automatically restore data from manual backups when {microshift-short} fails to start by configuring automatic recovery. -You can use the following options with the existing `backup` and `restore` commands in this feature: +include::modules/microshift-auto-recovery-manual-backups.adoc[leveloffset=+1] -* `--auto-recovery`: Selects the most recent version of the backup, and then restores it. This option treats the `PATH` argument as a path to a directory that holds all the backups for automated recovery, and not just as a path to a particular backup file. 
-* `--dont-save-failed`: Disables the backup of failed {microshift-short} data.
+include::modules/microshift-creating-backups-auto-recovery.adoc[leveloffset=+1]
 
-[NOTE]
-====
-* You can use the `--auto-recovery` option with both the `backup` and `restore` commands.
-* You can use the `--dont-save-failed` option only with the `restore` command.
-====
+include::modules/microshift-restoring-backups-auto-recovery.adoc[leveloffset=+1]
 
-include::modules/microshift-creating-backups.adoc[leveloffset=+1]
+include::modules/microshift-auto-recovery-example-rpm-systems.adoc[leveloffset=+2]
 
-include::modules/microshift-restoring-backups.adoc[leveloffset=+1]
+include::modules/microshift-auto-recovery-example-ostree-systems.adoc[leveloffset=+2]
 
-include::modules/microshift-automation-example-rpm-systems.adoc[leveloffset=+1]
 
-include::modules/microshift-automation-example-ostree-systems.adoc[leveloffset=+1]
 
+include::modules/microshift-auto-recovery-example-bootc-systems.adoc[leveloffset=+2]
-include::modules/microshift-automation-example-bootc-systems.adoc[leveloffset=+1]
diff --git a/modules/microshift-auto-recovery-example-bootc-systems.adoc b/modules/microshift-auto-recovery-example-bootc-systems.adoc
new file mode 100644
index 000000000000..e89da20976c1
--- /dev/null
+++ b/modules/microshift-auto-recovery-example-bootc-systems.adoc
@@ -0,0 +1,93 @@
+// Module included in the following assemblies:
+//
+// * microshift/microshift_backup_and_restore/microshift-auto-recover-manual-backup.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="microshift-auto-recovery-example-bootc-systems_{context}"]
+= Using automatic recovery in image mode for {op-system-base} systems
+
+:FeatureName: Image mode for {op-system-base}
+
+include::snippets/technology-preview.adoc[]
+
+As a use case, consider the following example situation in which you want to automate the `auto-recovery` process for image mode for {op-system-base-full} systems that use the systemd service.
+
+[IMPORTANT]
+====
+You must include the entire `auto-recovery` process for {op-system-image} systems that use `systemd` in the Containerfile.
+====
+
+.Prerequisites
+
+* You created a Containerfile as instructed in link:https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/installing_with_rhel_image_mode/installing-with-rhel-image-mode#microshift-rhel-image-mode-build-image_microshift-install-rhel-image-mode[Building the bootc image].
+
+* You created the `10-auto-recovery.conf` and `microshift-auto-recovery.service` files as explained in the "Using automatic recovery in RPM systems" section.
++
+[IMPORTANT]
+====
+The location of the `10-auto-recovery.conf` and `microshift-auto-recovery.service` files must be relative to the Containerfile.
+
+For example, if the path to the Containerfile is `/home/microshift/my-build/Containerfile`, the systemd files need to be adjacent for proper embedding. The following paths are correct for this example:
+
+* `/home/microshift/my-build/auto-rec/10-auto-recovery.conf`
+* `/home/microshift/my-build/auto-rec/microshift-auto-recovery.service`
+* `/home/microshift/my-build/auto-rec/microshift-auto-recovery`
+====
+
+* You created the `microshift-auto-recovery` script as explained in the "Using automatic recovery in RPM systems" section.
+
+.Procedure
+
+. Use the following example snippet to update the Containerfile that you use to prepare the {op-system-image} image.
+
[source,text]
----
RUN mkdir -p /usr/lib/systemd/system/microshift.service.d
COPY ./auto-rec/10-auto-recovery.conf /usr/lib/systemd/system/microshift.service.d/10-auto-recovery.conf
COPY ./auto-rec/microshift-auto-recovery.service /usr/lib/systemd/system/
COPY ./auto-rec/microshift-auto-recovery /usr/bin/
RUN chmod +x /usr/bin/microshift-auto-recovery
----
+
[IMPORTANT]
====
Podman uses the host subscription information and repositories inside the container when building the container image. If the `rhocp` and `fast-datapath` repositories are not available on the host, the build fails.
====

. Rebuild your local bootc image by running the following image build command:
+
[source,terminal]
----
PULL_SECRET=~/.pull-secret.json
USER_PASSWD=
IMAGE_NAME=microshift-4.18-bootc

sudo podman build --authfile "${PULL_SECRET}" -t "${IMAGE_NAME}" \
 --build-arg USER_PASSWD="${USER_PASSWD}" \
 -f Containerfile
----
+
[NOTE]
====
Secrets are used during the image build in the following ways:

* The podman `--authfile` argument is required to pull the base `rhel-bootc:9.4` image from the `registry.redhat.io` registry.

* The build `USER_PASSWD` argument is used to set a password for the `redhat` user.
====

.Verification

* Verify that the local bootc image was created by running the following command:
+
[source,terminal]
----
$ sudo podman images "${IMAGE_NAME}"
----
+
.Example output
[source,text]
----
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/microshift-4.18-bootc latest 193425283c00 2 minutes ago 2.31 GB
----
\ No newline at end of file
diff --git a/modules/microshift-automation-example-ostree-systems.adoc b/modules/microshift-auto-recovery-example-ostree-systems.adoc
similarity index 65%
rename from modules/microshift-automation-example-ostree-systems.adoc
rename to modules/microshift-auto-recovery-example-ostree-systems.adoc
index ee1e6bcae380..ef6e8416bbc8 100644
--- a/modules/microshift-automation-example-ostree-systems.adoc
+++ b/modules/microshift-auto-recovery-example-ostree-systems.adoc
@@ -3,50 +3,60 @@
 // * microshift/microshift_backup_and_restore/microshift-auto-recover-manual-backup.adoc
 
 :_mod-docs-content-type: PROCEDURE
-[id="microshift-automation-example-ostree-systems_{context}"]
-= Automating the integration process with systemd for OSTree systems
+[id="microshift-auto-recovery-ostree-systems_{context}"]
+= Using automatic recovery with {op-system-ostree}
+
+As a use case, consider the following example situation in which you want to use the blueprint file to automate the `auto-recovery` process for {op-system-ostree-first} systems that use the systemd service.
 
 [IMPORTANT]
 ====
-You must include the entire `auto-recovery` process for OSTree systems that use `systemd` in the blueprint file.
+You must include the entire `auto-recovery` process for {op-system-ostree} systems that use `systemd` in the blueprint file.
 ====
 
-As a use case, consider the following example situation in which you want to automate the `auto-recovery` process for OSTree systems that use systemd.
+.Prerequisites
+
+* You installed Podman.
+* You installed the command-line `composer-cli` tool.
 
 .Procedure
 
+. Optional: Because the `composer-cli` can only create files in the `/etc` directory, package your files into an RPM that you include in the blueprint.
+
. 
Use the following example to create your blueprint file: + [source,terminal] ---- +[[customizations.directories]] +path = "/etc/systemd/system/microshift.service.d" + +[[customizations.directories]] +path = "/etc/bin" + [[customizations.files]] -path = "/usr/lib/systemd/system/microshift.service.d/10-auto-recovery.conf" +path = "/etc/systemd/system/microshift.service.d/10-auto-recovery.conf" data = """ [Unit] OnFailure=microshift-auto-recovery.service """ [[customizations.files]] -path = "/usr/lib/systemd/system/microshift-auto-recovery.service" +path = "/etc/systemd/system/microshift-auto-recovery.service" data = """ [Unit] Description=MicroShift auto-recovery - [Service] Type=oneshot -ExecStart=/usr/bin/microshift-auto-recovery - +ExecStart=/etc/bin/microshift-auto-recovery [Install] WantedBy=multi-user.target """ [[customizations.files]] -path = "/usr/bin/microshift-auto-recovery" +path = "/etc/bin/microshift-auto-recovery" mode = "0755" data = """ #!/usr/bin/env bash set -xeuo pipefail - # If greenboot uses a non-default file for clearing boot_counter, use boot_success instead. if grep -q "/boot/grubenv" /usr/libexec/greenboot/greenboot-grub2-set-success; then if grub2-editenv - list | grep -q ^boot_success=0; then @@ -61,12 +71,11 @@ else exit 0 fi fi - /usr/bin/microshift restore --auto-recovery /var/lib/microshift-auto-recovery /usr/bin/systemctl reset-failed microshift /usr/bin/systemctl start microshift - echo "DONE" """ ---- + . For the next steps, see link:https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/{ocp-version}/html/embedding_in_a_rhel_for_edge_image/microshift-embed-in-rpm-ostree#preparing-for-image-building_microshift-embed-in-rpm-ostree[Preparing for image building]. \ No newline at end of file diff --git a/modules/microshift-automation-example-rpm-systems.adoc b/modules/microshift-auto-recovery-example-rpm-systems.adoc similarity index 75% rename from modules/microshift-automation-example-rpm-systems.adoc rename to modules/microshift-auto-recovery-example-rpm-systems.adoc index e6703592607e..81f8a77ab6db 100644 --- a/modules/microshift-automation-example-rpm-systems.adoc +++ b/modules/microshift-auto-recovery-example-rpm-systems.adoc @@ -3,19 +3,16 @@ // * microshift/microshift_backup_and_restore/microshift-auto-recover-manual-backup.adoc :_mod-docs-content-type: PROCEDURE -[id="microshift-automation-example-rpm-systems_{context}"] -= Automating the integration process with systemd for RPM systems +[id="microshift-auto-recovery-rpm-systems_{context}"] += Using automatic recovery in RPM systems -[NOTE] -==== -When the `microshift.service` enters a failed state, `systemd` starts the `microshift-auto-recovery.service` unit. This unit executes the `auto-recovery` restore process and restarts {microshift-short}. -==== +When {microshift-short} enters a failed state, the systemd service starts the `microshift-auto-recovery.service` unit. This unit executes the `auto-recovery` restore process. -As a use case, consider the following example situation in which you want to automate the `auto-recovery` process for RPM systems that use systemd. +As a use case, consider the following example situation in which you want to automate the automatic recovery process for RPM systems that use the systemd service. .Procedure -. Create a directory for the `microshift.service` by running the following command: +. 
Create a directory for the `microshift` systemd service by running the following command:
+
[source,terminal]
----
$ sudo mkdir -p /usr/lib/systemd/system/microshift.service.d
@@ -28,8 +25,11 @@ $ sudo tee /usr/lib/systemd/system/microshift.service.d/10-auto-recovery.conf > /dev/null <<'EOF'
[Unit]
OnFailure=microshift-auto-recovery.service
+StartLimitIntervalSec=25s # <1>
EOF
----
<1> Increase the `StartLimitIntervalSec` value from the default `10s` to a larger value for slower systems. A value that is too low can result in systemd never marking the `microshift` systemd service as failed, which means that the `OnFailure=` service does not get triggered.

. Create the `microshift-auto-recovery.service` file by running the following command:
+
[source,terminal]
@@ -46,6 +46,7 @@ ExecStart=/usr/bin/microshift-auto-recovery
WantedBy=multi-user.target
EOF
----
+
. Create the `microshift-auto-recovery` script by running the following command:
+
[source,terminal]
@@ -76,12 +77,14 @@ fi
echo "DONE"
EOF
----
+
. Make the script executable by running the following command:
+
[source,terminal]
----
$ sudo chmod +x /usr/bin/microshift-auto-recovery
----
+
. Reload the system configuration by running the following command:
+
[source,terminal]
diff --git a/modules/microshift-auto-recovery-manual-backups.adoc b/modules/microshift-auto-recovery-manual-backups.adoc
new file mode 100644
index 000000000000..aee6a8db6580
--- /dev/null
+++ b/modules/microshift-auto-recovery-manual-backups.adoc
@@ -0,0 +1,22 @@
+// Module included in the following assemblies:
+//
+// * microshift/microshift_backup_and_restore/microshift-auto-recover-manual-backup.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="microshift-auto-recovery-manual-backups_{context}"]
+= Modifying backup and restore commands to automate data recovery
+
+Use automatic recovery options to store all of your backups in a single directory, then automatically select the latest one to restore. Modifying existing `backup` and `restore` commands enables you to set up automatic recovery.
+
+The `--auto-recovery` option treats the `PATH` argument as a path to a directory that holds all the backups for automated recovery, and not just as a path to a particular backup file. You can use the `--auto-recovery` option with both `backup` and `restore` commands.
+
+* For example, if you use the automatic recovery option with `restore`, such as in `microshift restore --auto-recovery PATH`, running the modified command automatically selects and restores the most recent backup.
+
+* If you use the same option in the `microshift backup` command, such as in `microshift backup --auto-recovery PATH`, a new backup is created in `PATH`.
+
+* By default, `microshift restore --auto-recovery PATH` creates a backup of the failed {microshift-short} data in `PATH/failed`. You can add the `--dont-save-failed` option to disable the creation of failed backup data.
+
+[IMPORTANT]
+====
+You can only use the `--dont-save-failed` option with the `restore` command.
+====
\ No newline at end of file
diff --git a/modules/microshift-automation-example-bootc-systems.adoc b/modules/microshift-automation-example-bootc-systems.adoc
deleted file mode 100644
index 302cf4e09958..000000000000
--- a/modules/microshift-automation-example-bootc-systems.adoc
+++ /dev/null
@@ -1,32 +0,0 @@
-// Module included in the following assemblies:
-//
-// * microshift/microshift_backup_and_restore/microshift-auto-recover-manual-backup.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="microshift-automation-example-bootc-systems_{context}"]
-= Automating the integration process with systemd for bootc systems
-
-[IMPORTANT]
-====
-You must include the entire `auto-recovery` process for bootc systems that use `systemd` in the container file.
-====
-
-As a use case, consider the following example situation in which you want to automate the `auto-recovery` process for bootc systems that use systemd.
-
-.Prerequisites
-
-* You have created the `10-auto-recovery.conf` and `microshift-auto-recovery.service` files as explained in the "Automating the integration process with systemd for RPM systems" section.
-* You have created the `microshift-auto-recovery` script as explained in the "Automating the integration process with systemd for RPM systems" section.
-
-.Procedure
-
-* Use the following example to update your Containerfile that you use to prepare the bootc image.
-+
-[source,text]
-----
-RUN mkdir -p /usr/lib/systemd/system/microshift.service.d
-COPY ./auto-rec/10-auto-recovery.conf /usr/lib/systemd/system/microshift.service.d/10-auto-recovery.conf
-COPY ./auto-rec/microshift-auto-recovery.service /usr/lib/systemd/system/
-COPY ./auto-rec/microshift-auto-recovery /usr/bin/
-RUN chmod +x /usr/bin/microshift-auto-recovery && systemctl daemon-reload
-----
diff --git a/modules/microshift-creating-backups.adoc b/modules/microshift-creating-backups-auto-recovery.adoc
similarity index 86%
rename from modules/microshift-creating-backups.adoc
rename to modules/microshift-creating-backups-auto-recovery.adoc
index b683a9c1e19f..8e70d579d6fb 100644
--- a/modules/microshift-creating-backups.adoc
+++ b/modules/microshift-creating-backups-auto-recovery.adoc
@@ -3,10 +3,10 @@
 // * microshift/microshift_backup_and_restore/microshift-auto-recover-manual-backup.adoc
 
 :_mod-docs-content-type: PROCEDURE
-[id="microshift-creating-backups_{context}"]
+[id="microshift-creating-backups-auto-recovery_{context}"]
 = Creating backups using the auto-recovery feature
 
-Use the following procedure to create backups.
+Use the following procedure to create backups using automatic recovery options.
 
 [NOTE]
 ====
@@ -15,14 +15,13 @@ Creating backups requires stopping {microshift-short}, so you must determine the
 
 .Prerequisites
 
-* You have stopped {microshift-short}.
+* You stopped {microshift-short}.
 
 .Procedure
 
 * Create and store backups in the directory you choose by running the following command:
 +
-[source,terminal]
-[subs="+quotes"]
+[source,terminal,subs="+quotes"]
 ----
 $ sudo microshift backup --auto-recovery __ <1>
 ----
@@ -30,11 +29,10 @@ $ sudo microshift backup --auto-recovery __ <1>
 +
 [NOTE]
 ====
-The `--auto-recovery` option modifies the interpretation of the `PATH` argument from the final backup path to a directory that holds all the backups for automated recovery.
+The `--auto-recovery` option modifies the interpretation of the `PATH` argument from the final backup path to a directory that holds all of the backups for automated recovery.
 ====
 +
 .Example output
-+
[source,terminal]
----
???
I1104 09:18:52.100725 8906 system.go:58] "OSTree deployments" deployments=[{"id":"default-b3442053c9ce69310cd54140d8d592234c5306e4c5132de6efe615f79c84300a.1","booted":true,"staged":false,"pinned":false},{"id":"default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0","booted":false,"staged":false,"pinned":false}] @@ -49,11 +47,10 @@ The `--auto-recovery` option modifies the interpretation of the `PATH` argument .Verification -* To verify that the backup has been created, view the directory you chose to store backups by running the following command: +* Verify that the backup you created exists in your customized storage directory by running the following command: + -[source,terminal] -[subs="+quotes"] +[source,terminal,subs="+quotes"] ---- -$ ls -la __ <1> +$ sudo ls -la __ <1> ---- <1> Replace `__` with the path of the directory that stores backups. For example, `/var/lib/microshift-auto-recovery`. \ No newline at end of file diff --git a/modules/microshift-restoring-backups.adoc b/modules/microshift-restoring-backups-auto-recovery.adoc similarity index 82% rename from modules/microshift-restoring-backups.adoc rename to modules/microshift-restoring-backups-auto-recovery.adoc index ab20009bae51..4d4a8247fbaa 100644 --- a/modules/microshift-restoring-backups.adoc +++ b/modules/microshift-restoring-backups-auto-recovery.adoc @@ -3,12 +3,10 @@ // * microshift/microshift_backup_and_restore/microshift-auto-recover-manual-backup.adoc :_mod-docs-content-type: PROCEDURE -[id="microshift-restoring-backups_{context}"] +[id="microshift-restoring-backups-auto-recovery_{context}"] = Restoring backups using the auto-recovery feature -You can restore backups after system events that remove or damage required data. - -Use the following procedure to restore backups. +You can restore backups after system events that remove or damage required data. Use the following procedure to restore backups using automatic recovery. Automatic recovery selects the most recent backup and restores it. Previously restored backups that used automatic recovery are moved to your `PATH/restored` directory. .Prerequisites @@ -16,10 +14,9 @@ Use the following procedure to restore backups. .Procedure -* Restore the backup from the directory in which you have stored the backups by running the following command: +. Restore the latest backup from your backups directory by running the following command: + -[source,terminal] -[subs="+quotes"] +[source,terminal,subs="+quotes"] ---- $ sudo microshift restore --auto-recovery __ <1> ---- @@ -27,12 +24,12 @@ $ sudo microshift restore --auto-recovery __ <1> + [NOTE] ==== -The `--auto-recovery` option copies the {microshift-short} data to `/var/lib/microshift-auto-recovery/failed/` for later investigation, selects the most recent backup, and restores it. -The `--dont-save-failed` option disables the backing up of failed {microshift-short} data. +* The `--auto-recovery` option copies the {microshift-short} data to `/var/lib/microshift-auto-recovery/failed/` for later investigation, selects the most recent backup, and restores it. + +* The `--dont-save-failed` option disables the backing up of failed {microshift-short} data. ==== + .Example output -+ [source,terminal] ---- ??? I1104 09:19:28.617225 8950 state.go:80] "Read state from the disk" state={"LastBackup":"20241022101528_default-a129624b9233fa54fe3574f1aa211bc2d85e1052b52245fe7d83f10c2f6d28e3.0"} @@ -57,12 +54,24 @@ The `--dont-save-failed` option disables the backing up of failed {microshift-sh ??? 
I1104 09:19:28.662983 8950 restore.go:141] "Auto-recovery restore completed". ---- + -[NOTE] +[IMPORTANT] ==== -* The `restore` command does not start {microshift-short} after restoration. When you execute this command, {microshift-short} service has already failed or you need to stop it. -* {microshift-short} does not monitor the disk space of any filesystem. You need to ensure your automation handles old backup removal. +* The `restore` command does not restart {microshift-short} after restoration. When you execute this command, {microshift-short} service has already failed or you stopped it. + +* {microshift-short} does not monitor the disk space of any filesystem. You must ensure that your automation handles old backup removal. For example, you can add this process to the auto-recovery service or add another service that runs periodically. ==== +. Restart {microshift-short} by running the following command: ++ +[source,terminal] +---- +$ sudo systemctl restart microshift +---- + .Verification -* Verify that {microshift-short} has started successfully. \ No newline at end of file +* Verify that {microshift-short} has started successfully by running the following command: ++ +-- +include::snippets/microshift-healthy-pods-snip.adoc[leveloffset=+1] +-- \ No newline at end of file From 20a09594a2e8a3f490900a2ad96682a92bc1eed4 Mon Sep 17 00:00:00 2001 From: mletalie Date: Wed, 15 Jan 2025 13:21:51 -0500 Subject: [PATCH 202/669] sdnovnosd --- _topic_maps/_topic_map_osd.yml | 2 + modules/migrate-sdn-ovn-osd.adoc | 111 ++++++++++++++++++ networking/about-managed-networking.adoc | 8 +- .../migrate-from-openshift-sdn-osd.adoc | 24 ++++ osd_whats_new/osd-whats-new.adoc | 8 ++ 5 files changed, 151 insertions(+), 2 deletions(-) create mode 100644 modules/migrate-sdn-ovn-osd.adoc create mode 100644 networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn-osd.adoc diff --git a/_topic_maps/_topic_map_osd.yml b/_topic_maps/_topic_map_osd.yml index b6d8332ee5b8..d4656914b99e 100644 --- a/_topic_maps/_topic_map_osd.yml +++ b/_topic_maps/_topic_map_osd.yml @@ -867,6 +867,8 @@ Topics: Topics: - Name: About the OVN-Kubernetes network plugin File: about-ovn-kubernetes + - Name: Migrating from the OpenShift SDN network plugin + File: migrate-from-openshift-sdn-osd - Name: OpenShift SDN network plugin Dir: ovn_kubernetes_network_provider Topics: diff --git a/modules/migrate-sdn-ovn-osd.adoc b/modules/migrate-sdn-ovn-osd.adoc new file mode 100644 index 000000000000..5a4320c6559c --- /dev/null +++ b/modules/migrate-sdn-ovn-osd.adoc @@ -0,0 +1,111 @@ +// Module included in the following assemblies: +//networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc + +:_mod-docs-content-type: PROCEDURE +[id="migrate-sdn-ovn-ocm-cli_{context}"] += Initiate migration using the OpenShift Cluster Manager API command-line interface (ocm) CLI + +[WARNING] +==== +You can only initiate migration on clusters that are version 4.16.24 and above. +==== + +.Prerequisites + +* You installed the link:https://console.redhat.com/openshift/downloads[OpenShift Cluster Manager API command-line interface (`ocm`)]. + +[IMPORTANT] +==== +[subs="attributes+"] +OpenShift Cluster Manager API command-line interface (`ocm`) is a Technology Preview feature only. +For more information about the support scope of Red Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview Features Support Scope]. +==== + +.Procedure + +. 
Create a JSON file with the following content: + ++ +[source,json] +---- +{ + "type": "sdnToOvn" +} +---- ++ + +** Optional: Within the JSON file, you can configure internal subnets using any or all of the options `join`, `masquerade`, and `transit`, along with a single CIDR per option, as shown in the following example: ++ +[source,json] +---- +{ + "type": "sdnToOvn", + "sdn_to_ovn": { + "transit_ipv4": "192.168.255.0/24", + "join_ipv4": "192.168.255.0/24", + "masquerade_ipv4": "192.168.255.0/24" + } +} +---- ++ +[NOTE] +==== +OVN-Kubernetes reserves the following IP address ranges: + +`100.64.0.0/16`. This IP address range is used for the `internalJoinSubnet` parameter of OVN-Kubernetes by default. + +`100.88.0.0/16`. This IP address range is used for the `internalTransSwitchSubnet` parameter of OVN-Kubernetes by default. + +If these IP addresses have been used by OpenShift SDN or any external networks that might communicate with this cluster, you must patch them to use a different IP address range before initiating the limited live migration. For more information, see _Patching OVN-Kubernetes address ranges_ in the _Additional resources_ section. +==== ++ + +. To initiate the migration, run the following post request in a terminal window: + ++ +[source,terminal] +---- +$ ocm post /api/clusters_mgmt/v1/clusters/{cluster_id}/migrations <1> + --body=myjsonfile.json <2> +---- +<1> Replace `{cluster_id}` with the ID of the cluster you want to migrate to the OVN-Kubernetes network plugin. +<2> Replace `myjsonfile.json` with the name of the JSON file you created in the previous step. ++ +.Example output ++ +[source,json] +---- +{ + "kind": "ClusterMigration", + "href": "/api/clusters_mgmt/v1/clusters/2gnts65ra30sclb114p8qdc26g5c8o3e/migrations/2gois8j244rs0qrfu9ti2o790jssgh9i", + "id": "7sois8j244rs0qrhu9ti2o790jssgh9i", + "cluster_id": "2gnts65ra30sclb114p8qdc26g5c8o3e", + "type": "sdnToOvn", + "state": { + "value": "scheduled", + "description": "" + }, + "sdn_to_ovn": { + "transit_ipv4": "100.65.0.0/16", + "join_ipv4": "100.66.0.0/16" + }, + "creation_timestamp": "2025-02-05T14:56:34.878467542Z", + "updated_timestamp": "2025-02-05T14:56:34.878467542Z" +} +---- + +// :_mod-docs-content-type: PROCEDURE +// [id="verify-sdn-ovn-ocm_{context}"] +// = Verify migration status using the OCM CLI + +.Verification + +* To check the status of the migration, run the following command: + ++ + +[source,terminal] +---- +$ ocm get cluster $cluster_id/migration <1> +---- +<1> Replace `$cluster_id` with the ID of the cluster that the migration was applied to. \ No newline at end of file diff --git a/networking/about-managed-networking.adoc b/networking/about-managed-networking.adoc index 403ada3341f0..ce16248dfd05 100644 --- a/networking/about-managed-networking.adoc +++ b/networking/about-managed-networking.adoc @@ -19,7 +19,7 @@ The following are some of the most commonly used {openshift-networking} features + [IMPORTANT] ==== -You cannot migrate an {OCP-short} 4.16 cluster that uses the SDN network plugin to {OCP-short} 4.17 because no migration path currently exists. +Before upgrading {product-title} clusters that are configured with the OpenShift SDN network plugin to version 4.17, you must migrate to the OVN-Kubernetes network plugin. For more information, see _Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin_ in the _Additional resources_ section. 
==== [discrete] @@ -27,4 +27,8 @@ You cannot migrate an {OCP-short} 4.16 cluster that uses the SDN network plugin [id="additional-resources_{context}"] == Additional resources -* link:https://access.redhat.com/articles/7065170[{OCP-short} SDN CNI removal in OCP 4.17] \ No newline at end of file +* link:https://access.redhat.com/articles/7065170[{OCP-short} SDN CNI removal in OCP 4.17] + +ifdef::openshift-dedicated[] +* xref:../networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn-osd.adoc#migrate-from-openshift-sdn-osd[Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin] +endif::openshift-dedicated[] \ No newline at end of file diff --git a/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn-osd.adoc b/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn-osd.adoc new file mode 100644 index 000000000000..190a8b23b273 --- /dev/null +++ b/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn-osd.adoc @@ -0,0 +1,24 @@ +:_mod-docs-content-type: ASSEMBLY +[id="migrate-from-openshift-sdn-osd"] += Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin +include::_attributes/common-attributes.adoc[] +include::_attributes/attributes-openshift-dedicated.adoc[] +:context: migrate-from-openshift-sdn + +toc::[] + +As an {product-title} cluster administrator, you can initiate the migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin and verify the migration status using the OCM CLI. + +Some considerations before starting migration initiation are: + +* The cluster version must be 4.16.24 and above. +* The migration process cannot be interrupted. +* Migrating back to the SDN network plugin is not possible. +* Cluster nodes will be rebooted during migration. +* There will be no impact to workloads that are resilient to node disruptions. +* Migration time can vary between several minutes and hours, depending on the cluster size and workload configurations. + +include::modules/migrate-sdn-ovn-osd.adoc[leveloffset=+1] + +.Additional resources +link:https://docs.openshift.com/container-platform/4.16/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.html#patching-ovnk-address-ranges_migrate-from-openshift-sdn[Patching OVN-Kubernetes address ranges] \ No newline at end of file diff --git a/osd_whats_new/osd-whats-new.adoc b/osd_whats_new/osd-whats-new.adoc index 83a74730a10f..6e6d826e8a78 100644 --- a/osd_whats_new/osd-whats-new.adoc +++ b/osd_whats_new/osd-whats-new.adoc @@ -19,6 +19,14 @@ With its foundation in Kubernetes, {product-title} is a complete {OCP} cluster p === Q1 2025 * **Cluster node limit update.** {product-title} clusters versions 4.14.14 and greater can now scale to 249 worker nodes. This is an increase from the previous limit of 180 nodes. For more information, see xref:../osd_planning/osd-limits-scalability.adoc#osd-limits-scalability[limits and scalability]. +// * **{product-title} SDN network plugin blocks future major upgrades** +* **Initiate live migration from OpenShift SDN to OVN-Kubernetes.** +As part of the {product-title} move to OVN-Kubernetes as the only supported network plugin starting with {product-title} version 4.17, users can now initiate live migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin. ++ +If your cluster uses the OpenShift SDN network plugin, you cannot upgrade to future major versions of {product-title} without migrating to OVN-Kubernetes. 
++ +For more information about migrating to OVN-Kubernetes, see xref:../networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn-osd.adoc#migrate-from-openshift-sdn[Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin]. + * **Red{nbsp}Hat SRE log-based alerting endpoints have been updated.** {product-title} customers who are using a firewall to control egress traffic can now remove all references to `*.osdsecuritylogs.splunkcloud.com:9997` from your firewall allowlist. {product-title} clusters still require the `http-inputs-osdsecuritylogs.splunkcloud.com:443` log-based alerting endpoint to be accessible from the cluster. From e28efcc252ec78b55a688b64eab7a4bbafa4c974 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Wed, 12 Feb 2025 10:46:05 +0000 Subject: [PATCH 203/669] TELCODOCS-2168 MetalLB Annotation used by service need to be revised from metallb.universe.tf to metallb.io --- modules/hcp-bm-ingress.adoc | 2 +- modules/nw-egress-service-ovn.adoc | 2 +- modules/nw-metallb-addresspool-cr.adoc | 4 ++-- modules/nw-metallb-configure-svc.adoc | 2 +- .../metallb/metallb-configure-services.adoc | 18 +++++++++--------- 5 files changed, 14 insertions(+), 14 deletions(-) diff --git a/modules/hcp-bm-ingress.adoc b/modules/hcp-bm-ingress.adoc index 0d5fa38ad25b..53550963e677 100644 --- a/modules/hcp-bm-ingress.adoc +++ b/modules/hcp-bm-ingress.adoc @@ -146,7 +146,7 @@ kind: Service apiVersion: v1 metadata: annotations: - metallb.universe.tf/address-pool: ingress-public-ip + metallb.io/address-pool: ingress-public-ip name: metallb-ingress namespace: openshift-ingress spec: diff --git a/modules/nw-egress-service-ovn.adoc b/modules/nw-egress-service-ovn.adoc index fa004f5d5615..131491fe50b0 100644 --- a/modules/nw-egress-service-ovn.adoc +++ b/modules/nw-egress-service-ovn.adoc @@ -53,7 +53,7 @@ metadata: name: example-service namespace: example-namespace annotations: - metallb.universe.tf/address-pool: example-pool <1> + metallb.io/address-pool: example-pool <1> spec: selector: app: example diff --git a/modules/nw-metallb-addresspool-cr.adoc b/modules/nw-metallb-addresspool-cr.adoc index b931ad7d0977..c6d91d2b495a 100644 --- a/modules/nw-metallb-addresspool-cr.adoc +++ b/modules/nw-metallb-addresspool-cr.adoc @@ -19,7 +19,7 @@ The fields for the `IPAddressPool` custom resource are described in the followin |`metadata.name` |`string` |Specifies the name for the address pool. -When you add a service, you can specify this pool name in the `metallb.universe.tf/address-pool` annotation to select an IP address from a specific pool. +When you add a service, you can specify this pool name in the `metallb.io/address-pool` annotation to select an IP address from a specific pool. The names `doc-example`, `silver`, and `gold` are used throughout the documentation. |`metadata.namespace` @@ -40,7 +40,7 @@ Specify each range in CIDR notation or as starting and ending IP addresses separ |`spec.autoAssign` |`boolean` |Optional: Specifies whether MetalLB automatically assigns IP addresses from this pool. -Specify `false` if you want explicitly request an IP address from this pool with the `metallb.universe.tf/address-pool` annotation. +Specify `false` if you want explicitly request an IP address from this pool with the `metallb.io/address-pool` annotation. The default value is `true`. 
|`spec.avoidBuggyIPs` diff --git a/modules/nw-metallb-configure-svc.adoc b/modules/nw-metallb-configure-svc.adoc index ccc3b38b573b..8820d3d89b74 100644 --- a/modules/nw-metallb-configure-svc.adoc +++ b/modules/nw-metallb-configure-svc.adoc @@ -51,7 +51,7 @@ $ oc describe service Name: Namespace: default Labels: -Annotations: metallb.universe.tf/address-pool: doc-example <1> +Annotations: metallb.io/address-pool: doc-example <1> Selector: app=service_name Type: LoadBalancer <2> IP Family Policy: SingleStack diff --git a/networking/metallb/metallb-configure-services.adoc b/networking/metallb/metallb-configure-services.adoc index 6883a74ddf5e..11bd6e261624 100644 --- a/networking/metallb/metallb-configure-services.adoc +++ b/networking/metallb/metallb-configure-services.adoc @@ -25,7 +25,7 @@ kind: Service metadata: name: annotations: - metallb.universe.tf/address-pool: + metallb.io/address-pool: spec: selector: : @@ -52,7 +52,7 @@ Events: [id="request-ip-address-from-pool_{context}"] == Request an IP address from a specific pool -To assign an IP address from a specific range, but you are not concerned with the specific IP address, then you can use the `metallb.universe.tf/address-pool` annotation to request an IP address from the specified address pool. +To assign an IP address from a specific range, but you are not concerned with the specific IP address, then you can use the `metallb.io/address-pool` annotation to request an IP address from the specified address pool. .Example service YAML for an IP address from a specific pool [source,yaml] @@ -62,7 +62,7 @@ kind: Service metadata: name: annotations: - metallb.universe.tf/address-pool: + metallb.io/address-pool: spec: selector: : @@ -104,7 +104,7 @@ spec: == Share a specific IP address By default, services do not share IP addresses. -However, if you need to colocate services on a single IP address, you can enable selective IP sharing by adding the `metallb.universe.tf/allow-shared-ip` annotation to the services. +However, if you need to colocate services on a single IP address, you can enable selective IP sharing by adding the `metallb.io/allow-shared-ip` annotation to the services. [source,yaml] ---- @@ -113,8 +113,8 @@ kind: Service metadata: name: service-http annotations: - metallb.universe.tf/address-pool: doc-example - metallb.universe.tf/allow-shared-ip: "web-server-svc" <1> + metallb.io/address-pool: doc-example + metallb.io/allow-shared-ip: "web-server-svc" <1> spec: ports: - name: http @@ -131,8 +131,8 @@ kind: Service metadata: name: service-https annotations: - metallb.universe.tf/address-pool: doc-example - metallb.universe.tf/allow-shared-ip: "web-server-svc" + metallb.io/address-pool: doc-example + metallb.io/allow-shared-ip: "web-server-svc" spec: ports: - name: https @@ -144,7 +144,7 @@ spec: type: LoadBalancer loadBalancerIP: 172.31.249.7 ---- -<1> Specify the same value for the `metallb.universe.tf/allow-shared-ip` annotation. This value is referred to as the _sharing key_. +<1> Specify the same value for the `metallb.io/allow-shared-ip` annotation. This value is referred to as the _sharing key_. <2> Specify different port numbers for the services. <3> Specify identical pod selectors if you must specify `externalTrafficPolicy: local` so the services send traffic to the same set of pods. If you use the `cluster` external traffic policy, then the pod selectors do not need to be identical. <4> Optional: If you specify the three preceding items, MetalLB might colocate the services on the same IP address. 
To ensure that services share an IP address, specify the IP address to share.

From bbb5f4441972a6468884e616d509df651d5e4163 Mon Sep 17 00:00:00 2001
From: mletalie
Date: Tue, 7 Jan 2025 17:35:38 -0500
Subject: [PATCH 204/669] SDN Migration
 rosa_release_notes/rosa-release-notes.adoc

---
 _topic_maps/_topic_map_osd.yml             |  2 +
 _topic_maps/_topic_map_rosa.yml            |  2 +
 modules/migrate-sdn-ovn.adoc               | 37 +++++++++++++++++++
 networking/about-managed-networking.adoc   | 14 ++++---
 .../migrate-from-openshift-sdn.adoc        | 30 +++++++++++++++
 rosa_release_notes/rosa-release-notes.adoc |  5 +++
 6 files changed, 84 insertions(+), 6 deletions(-)
 create mode 100644 modules/migrate-sdn-ovn.adoc
 create mode 100644 networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc

diff --git a/_topic_maps/_topic_map_osd.yml b/_topic_maps/_topic_map_osd.yml
index d4656914b99e..d610d4f33a1f 100644
--- a/_topic_maps/_topic_map_osd.yml
+++ b/_topic_maps/_topic_map_osd.yml
@@ -869,6 +869,8 @@ Topics:
   - Name: About the OVN-Kubernetes network plugin
     File: about-ovn-kubernetes
   - Name: Migrating from the OpenShift SDN network plugin
     File: migrate-from-openshift-sdn-osd
+  # - Name: Migrating from the OpenShift SDN network plugin
+  #   File: migrate-from-openshift-sdn
   - Name: OpenShift SDN network plugin
     Dir: ovn_kubernetes_network_provider
     Topics:
diff --git a/_topic_maps/_topic_map_rosa.yml b/_topic_maps/_topic_map_rosa.yml
index 4563aff95dea..9b9596b1c1f7 100644
--- a/_topic_maps/_topic_map_rosa.yml
+++ b/_topic_maps/_topic_map_rosa.yml
@@ -1129,6 +1129,8 @@ Topics:
     File: about-ovn-kubernetes
   - Name: Configuring an egress IP address
     File: configuring-egress-ips-ovn
+  - Name: Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin
+    File: migrate-from-openshift-sdn
   - Name: OpenShift SDN network plugin
     Dir: ovn_kubernetes_network_provider
     Topics:
diff --git a/modules/migrate-sdn-ovn.adoc b/modules/migrate-sdn-ovn.adoc
new file mode 100644
index 000000000000..5b037dd7dd12
--- /dev/null
+++ b/modules/migrate-sdn-ovn.adoc
@@ -0,0 +1,37 @@
+// Module included in the following assemblies:
+//networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="migrate-sdn-ovn-cli_{context}"]
+= Initiating migration by using the ROSA CLI
+
+[WARNING]
+====
+You can only initiate migration on clusters that are version 4.16.24 or later.
+====
+
+To initiate the migration, run the following command:
+[source,terminal]
+----
+$ rosa edit cluster -c <cluster_id> \ <1>
+  --network-type OVNKubernetes \
+  --ovn-internal-subnets <2>
+----
+<1> Replace `<cluster_id>` with the ID of the cluster that you want to migrate to the OVN-Kubernetes network plugin.
+<2> Optional: You can specify key-value pairs to configure internal subnets by using any or all of the options `join`, `masquerade`, and `transit`, along with a single CIDR per option. For example, `--ovn-internal-subnets="join=0.0.0.0/24,transit=0.0.0.0/24,masquerade=0.0.0.0/24"`.
+
+[IMPORTANT]
+====
+You cannot include the optional flag `--ovn-internal-subnets` in the command unless you define a value for the flag `--network-type`.
+====
+
+:_mod-docs-content-type: PROCEDURE
+[id="verify-sdn-ovn_{context}"]
+= Verifying migration status by using the ROSA CLI
+
+To check the status of the migration, run the following command:
+[source,terminal]
+----
+rosa describe cluster -c <1>
+----
+<1> Replace `` with the ID of the cluster to check the migration status.
\ No newline at end of file diff --git a/networking/about-managed-networking.adoc b/networking/about-managed-networking.adoc index ce16248dfd05..059075201f05 100644 --- a/networking/about-managed-networking.adoc +++ b/networking/about-managed-networking.adoc @@ -16,19 +16,21 @@ The following are some of the most commonly used {openshift-networking} features + ** xref:../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc#about-ovn-kubernetes[OVN-Kubernetes network plugin], which is the default CNI plugin. ** {OCP-short} SDN network plugin, which was deprecated in {OCP-short} 4.16 and removed in {OCP-short} 4.17. -+ + +ifdef::openshift-rosa[] + [IMPORTANT] ==== -Before upgrading {product-title} clusters that are configured with the OpenShift SDN network plugin to version 4.17, you must migrate to the OVN-Kubernetes network plugin. For more information, see _Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin_ in the _Additional resources_ section. +Before upgrading {rosa-classic} clusters that are configured with the OpenShift SDN network plugin to version 4.17, you must migrate to the OVN-Kubernetes network plugin. For more information, see _Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin_ in the _Additional resources_ section. ==== +endif::openshift-rosa[] [discrete] [role="_additional-resources"] [id="additional-resources_{context}"] == Additional resources * link:https://access.redhat.com/articles/7065170[{OCP-short} SDN CNI removal in OCP 4.17] - -ifdef::openshift-dedicated[] -* xref:../networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn-osd.adoc#migrate-from-openshift-sdn-osd[Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin] -endif::openshift-dedicated[] \ No newline at end of file +ifdef::openshift-rosa[] +* xref:../networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc#migrate-from-openshift-sdn[Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin] +endif::openshift-rosa[] diff --git a/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc b/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc new file mode 100644 index 000000000000..7a319d49d9e5 --- /dev/null +++ b/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc @@ -0,0 +1,30 @@ +:_mod-docs-content-type: ASSEMBLY +[id="migrate-from-openshift-sdn"] += Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin +include::_attributes/common-attributes.adoc[] +include::_attributes/attributes-openshift-dedicated.adoc[] +:context: migrate-from-openshift-sdn + +toc::[] + +As a {rosa-classic-first} cluster administrator, you can initiate the migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin and verify the migration status using the ROSA CLI. + +Some considerations before starting migration initiation are: + +* The cluster version must be 4.16.24 and above. + +* The migration process cannot be interrupted. + +* Migrating back to the SDN network plugin is not possible. + +* Cluster nodes will be rebooted during migration. + +* There will be no impact to workloads that are resilient to node disruptions. + +* Migration time can vary between several minutes and hours, depending on the cluster size and workload configurations. 
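+
+For example, before you start, you can confirm that the cluster meets the version requirement by running the following command, where `<cluster_id>` is a placeholder for your cluster ID:
+
+[source,terminal]
+----
+$ rosa describe cluster -c <cluster_id> | grep -i version
+----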
+ +include::modules/migrate-sdn-ovn.adoc[leveloffset=+1] + + + + diff --git a/rosa_release_notes/rosa-release-notes.adoc b/rosa_release_notes/rosa-release-notes.adoc index ef84ee6fb09e..db021b8d5259 100644 --- a/rosa_release_notes/rosa-release-notes.adoc +++ b/rosa_release_notes/rosa-release-notes.adoc @@ -24,6 +24,11 @@ endif::openshift-rosa-hcp[] ifdef::openshift-rosa[] * **{rosa-classic} cluster node limit update.** {rosa-classic} clusters versions 4.14.14 and greater can now scale to 249 worker nodes. This is an increase from the previous limit of 180 nodes. For more information, see xref:../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[Limits and scalability]. + +// * **{product-title} SDN network plugin blocks future major upgrades** +* **Initiate live migration from OpenShift SDN to OVN-Kubernetes.** +As part of the {product-title} move to OVN-Kubernetes as the only supported network plugin starting with {product-title} 4.17, users can now initiate live migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin. +If your cluster uses the OpenShift SDN network plugin, you cannot upgrade to future major versions of {product-title} without migrating to OVN-Kubernetes. For more information about migrating to OVN-Kubernetes, see xref:../networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc#migrate-from-openshift-sdn[Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin]. + [IMPORTANT] ==== Egress lockdown is a Technology Preview feature. From 3e5dd976294c10aff34e159426d82c52116cc710 Mon Sep 17 00:00:00 2001 From: mletalie Date: Wed, 12 Feb 2025 15:16:20 -0500 Subject: [PATCH 205/669] SDN OVN --- modules/migrate-sdn-ovn.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/migrate-sdn-ovn.adoc b/modules/migrate-sdn-ovn.adoc index 5b037dd7dd12..904bfd953b31 100644 --- a/modules/migrate-sdn-ovn.adoc +++ b/modules/migrate-sdn-ovn.adoc @@ -32,6 +32,6 @@ You cannot include the optional flag `--ovn-internal-subnets` in the command unl To check the status of the migration, run the following command: [source,terminal] ---- -rosa describe cluster -c <1> +$ rosa describe cluster -c <1> ---- <1> Replace `` with the ID of the cluster to check the migration status. 
\ No newline at end of file From e4d5d25cf99b4d02f492c52da7c54169149cfd37 Mon Sep 17 00:00:00 2001 From: Janelle Neczypor Date: Mon, 20 Jan 2025 11:53:04 -0800 Subject: [PATCH 206/669] OSDOCS-13081 --- _topic_maps/_topic_map_rosa_hcp.yml | 18 ++++++++++++++++++ ...ud-experts-aws-load-balancer-operator.adoc | 13 +++++++++++++ .../cloud-experts-aws-secret-manager.adoc | 8 +++++++- .../cloud-experts-consistent-egress-ip.adoc | 12 ++++++++++++ .../cloud-experts-custom-dns-resolver.adoc | 19 ++++++++----------- ...ud-experts-deploy-api-data-protection.adoc | 5 +++++ .../cloud-experts-entra-id-idp.adoc | 16 +--------------- cloud_experts_tutorials/index.adoc | 7 +++++-- ...obb-verify-permissions-sts-deployment.adoc | 15 +++++++++++++-- 9 files changed, 82 insertions(+), 31 deletions(-) diff --git a/_topic_maps/_topic_map_rosa_hcp.yml b/_topic_maps/_topic_map_rosa_hcp.yml index 8b7e62d22911..5a6e9e9bb89f 100644 --- a/_topic_maps/_topic_map_rosa_hcp.yml +++ b/_topic_maps/_topic_map_rosa_hcp.yml @@ -125,8 +125,26 @@ Topics: File: cloud-experts-rosa-hcp-activation-and-account-linking-tutorial - Name: ROSA with HCP private offer acceptance and sharing File: cloud-experts-rosa-with-hcp-private-offer-acceptance-and-sharing +- Name: Deploying ROSA with a Custom DNS Resolver + File: cloud-experts-custom-dns-resolver +- Name: Using AWS WAF and Amazon CloudFront to protect ROSA workloads + File: cloud-experts-using-cloudfront-and-waf +- Name: Using AWS WAF and AWS ALBs to protect ROSA workloads + File: cloud-experts-using-alb-and-waf +- Name: Deploying OpenShift API for Data Protection on a ROSA cluster + File: cloud-experts-deploy-api-data-protection +- Name: AWS Load Balancer Operator on ROSA + File: cloud-experts-aws-load-balancer-operator - Name: Configuring Microsoft Entra ID (formerly Azure Active Directory) as an identity provider File: cloud-experts-entra-id-idp +- Name: Using AWS Secrets Manager CSI on ROSA with STS + File: cloud-experts-aws-secret-manager +- Name: Using AWS Controllers for Kubernetes on ROSA + File: cloud-experts-using-aws-ack +- Name: Dynamically issuing certificates using the cert-manager Operator on ROSA + File: cloud-experts-dynamic-certificate-custom-domain +- Name: Assigning consistent egress IP for external traffic + File: cloud-experts-consistent-egress-ip # --- # Name: Getting started # Dir: rosa_getting_started diff --git a/cloud_experts_tutorials/cloud-experts-aws-load-balancer-operator.adoc b/cloud_experts_tutorials/cloud-experts-aws-load-balancer-operator.adoc index 965411acfdd5..e0416c26f0e2 100644 --- a/cloud_experts_tutorials/cloud-experts-aws-load-balancer-operator.adoc +++ b/cloud_experts_tutorials/cloud-experts-aws-load-balancer-operator.adoc @@ -20,10 +20,18 @@ toc::[] include::snippets/mobb-support-statement.adoc[leveloffset=+1] +ifndef::openshift-rosa-hcp[] [TIP] ==== Load Balancers created by the AWS Load Balancer Operator cannot be used for xref:../networking/routes/route-configuration.adoc#route-configuration[OpenShift Routes], and should only be used for individual services or ingress resources that do not need the full layer 7 capabilities of an OpenShift Route. ==== +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +[TIP] +==== +Load Balancers created by the AWS Load Balancer Operator cannot be used for link:https://docs.openshift.com/rosa/networking/routes/route-configuration.html[OpenShift Routes], and should only be used for individual services or ingress resources that do not need the full layer 7 capabilities of an OpenShift Route. 
+==== +endif::openshift-rosa-hcp[] The link:https://kubernetes-sigs.github.io/aws-load-balancer-controller/[AWS Load Balancer Controller] manages AWS Elastic Load Balancers for a {product-title} (ROSA) cluster. The controller provisions link:https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html[AWS Application Load Balancers (ALB)] when you create Kubernetes Ingress resources and link:https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html[AWS Network Load Balancers (NLB)] when implementing Kubernetes Service resources with a type of LoadBalancer. @@ -44,7 +52,12 @@ The link:https://github.com/openshift/aws-load-balancer-operator[AWS Load Balanc AWS ALBs require a multi-AZ cluster, as well as three public subnets split across three AZs in the same VPC as the cluster. This makes ALBs unsuitable for many PrivateLink clusters. AWS NLBs do not have this restriction. ==== +ifndef::openshift-rosa-hcp[] * xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[A multi-AZ ROSA classic cluster] +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +* link:https://docs.openshift.com/rosa-hcp/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.html[A multi-AZ ROSA cluster] +endif::openshift-rosa-hcp[] * BYO VPC cluster * AWS CLI * OC CLI diff --git a/cloud_experts_tutorials/cloud-experts-aws-secret-manager.adoc b/cloud_experts_tutorials/cloud-experts-aws-secret-manager.adoc index 1c2b8ea88c96..d69e8831b938 100644 --- a/cloud_experts_tutorials/cloud-experts-aws-secret-manager.adoc +++ b/cloud_experts_tutorials/cloud-experts-aws-secret-manager.adoc @@ -58,7 +58,13 @@ $ oc get authentication.config.openshift.io cluster -o json \ "https://xxxxx.cloudfront.net/xxxxx" ---- + -If your output is different, do not proceed. See xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Red{nbsp}Hat documentation on creating an STS cluster] before continuing this process. +If your output is different, do not proceed. +ifndef::openshift-rosa-hcp[] +See xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Red{nbsp}Hat documentation on creating an STS cluster] before continuing this process. +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +See link:https://docs.openshift.com/rosa-hcp/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.html[Creating ROSA with HCP clusters using the default options] before continuing this process. +endif::openshift-rosa-hcp[] . Set the `SecurityContextConstraints` permission to allow the CSI driver to run by running the following command: + diff --git a/cloud_experts_tutorials/cloud-experts-consistent-egress-ip.adoc b/cloud_experts_tutorials/cloud-experts-consistent-egress-ip.adoc index 990184fdd26c..191cec92169b 100644 --- a/cloud_experts_tutorials/cloud-experts-consistent-egress-ip.adoc +++ b/cloud_experts_tutorials/cloud-experts-consistent-egress-ip.adoc @@ -21,7 +21,12 @@ You can assign a consistent IP address for traffic that leaves your cluster such By default, {product-title} (ROSA) uses the OVN-Kubernetes container network interface (CNI) to assign random IP addresses from a pool. This can make configuring security lockdowns unpredictable or open. 
+ifndef::openshift-rosa-hcp[] See xref:../networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.adoc#configuring-egress-ips-ovn[Configuring an egress IP address] for more information. +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +See link:https://docs.openshift.com/rosa/networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.html[Configuring an egress IP address] for more information. +endif::openshift-rosa-hcp[] .Objectives @@ -30,10 +35,17 @@ See xref:../networking/ovn_kubernetes_network_provider/configuring-egress-ips-ov .Prerequisites * A ROSA cluster deployed with OVN-Kubernetes +ifndef::openshift-rosa-hcp[] * The xref:../cli_reference/openshift_cli/getting-started-cli.adoc#cli-getting-started[OpenShift CLI] (`oc`) * The xref:../cli_reference/rosa_cli/rosa-get-started-cli.adoc#rosa-get-started-cli[ROSA CLI] (`rosa`) +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +* The link:https://docs.openshift.com/rosa/cli_reference/openshift_cli/getting-started-cli.html[OpenShift CLI] (`oc`) +* The link:https://docs.openshift.com/rosa/cli_reference/rosa_cli/rosa-get-started-cli.html[ROSA CLI] (`rosa`) +endif::openshift-rosa-hcp[] * link:https://stedolan.github.io/jq/[`jq`] + == Setting your environment variables * Set your environment variables by running the following command: diff --git a/cloud_experts_tutorials/cloud-experts-custom-dns-resolver.adoc b/cloud_experts_tutorials/cloud-experts-custom-dns-resolver.adoc index 74b5903457a3..3260ed64c306 100644 --- a/cloud_experts_tutorials/cloud-experts-custom-dns-resolver.adoc +++ b/cloud_experts_tutorials/cloud-experts-custom-dns-resolver.adoc @@ -124,8 +124,7 @@ $ aws route53resolver list-resolver-endpoint-ip-addresses \ Use the following procedure to configure your DNS server to forward the necessary private hosted zones to your Amazon Route 53 Inbound Resolver. -=== ROSA with HCP - +ifdef::openshift-rosa-hcp[] ROSA with HCP clusters require you to configure DNS forwarding for two private hosted zones: * `.hypershift.local` @@ -151,7 +150,7 @@ zone ".hypershift.local" { <1> <1> Replace `` with your ROSA HCP cluster name. <2> Replace with the IP addresses of your inbound resolver endpoints collected above, ensuring that following each IP address there is a `;`. + -. xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-account-wide-sts-roles-and-policies_rosa-hcp-sts-creating-a-cluster-quickly[Create your cluster]. +. link:https://docs.openshift.com/rosa/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.html[Create your cluster]. + . Once your cluster has begun the creation process, locate the newly created private hosted zone: + @@ -198,21 +197,18 @@ zone "rosa...p3.openshiftapps.com" { <1> ---- <1> Replace `` with your cluster domain prefix and `` with your unique ID collected above. <2> Replace with the IP addresses of your inbound resolver endpoints collected above, ensuring that following each IP address there is a `;`. +endif::openshift-rosa-hcp[] -=== ROSA Classic - +ifdef::openshift-rosa[] ROSA Classic clusters require you to configure DNS forwarding for one private hosted zones: * `..p1.openshiftapps.com` This Amazon Route 53 private hosted zones is created during cluster creation. The `domain-prefix` is a customer-specified value, but the `unique-ID` is randomly generated during cluster creation and cannot be preselected. 
As such, you must wait for the cluster creation process to begin before configuring forwarding for the `p1.openshiftapps.com` private hosted zone. -ifdef::temp-ifdef[] +ifndef::openshift-rosa-hcp[] . xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-account-wide-sts-roles-and-policies_rosa-sts-creating-a-cluster-quickly[Create your cluster]. -endif::[] -ifdef::temp-ifdef[] -* xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Create your cluster]. -endif::[] +endif::openshift-rosa-hcp[] + . Once your cluster has begun the creation process, locate the newly created private hosted zone: + @@ -257,4 +253,5 @@ zone "..p1.openshiftapps.com" { <1> }; ---- <1> Replace `` with your cluster domain prefix and `` with your unique ID collected above. -<2> Replace with the IP addresses of your inbound resolver endpoints collected above, ensuring that following each IP address there is a `;`. \ No newline at end of file +<2> Replace with the IP addresses of your inbound resolver endpoints collected above, ensuring that following each IP address there is a `;`. +endif::openshift-rosa[] \ No newline at end of file diff --git a/cloud_experts_tutorials/cloud-experts-deploy-api-data-protection.adoc b/cloud_experts_tutorials/cloud-experts-deploy-api-data-protection.adoc index f14a1c8f8aec..1b3a9fb7b60d 100644 --- a/cloud_experts_tutorials/cloud-experts-deploy-api-data-protection.adoc +++ b/cloud_experts_tutorials/cloud-experts-deploy-api-data-protection.adoc @@ -21,7 +21,12 @@ include::snippets/mobb-support-statement.adoc[leveloffset=+1] .Prerequisites +ifndef::openshift-rosa-hcp[] * A xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[ROSA classic cluster] +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +* A link:https://docs.openshift.com/rosa-hcp/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.html[ROSA cluster] +endif::openshift-rosa-hcp[] .Environment diff --git a/cloud_experts_tutorials/cloud-experts-entra-id-idp.adoc b/cloud_experts_tutorials/cloud-experts-entra-id-idp.adoc index ed250313f800..bbb5f609ab35 100644 --- a/cloud_experts_tutorials/cloud-experts-entra-id-idp.adoc +++ b/cloud_experts_tutorials/cloud-experts-entra-id-idp.adoc @@ -28,12 +28,10 @@ This tutorial guides you to complete the following tasks: . Configure the {product-title} cluster to use Entra ID as the identity provider. . Grant additional permissions to individual groups. -[id="cloud-experts-entra-id-idp-prerequisites"] == Prerequisites * You created a set of security groups and assigned users by following link:https://learn.microsoft.com/en-us/azure/active-directory/fundamentals/how-to-manage-groups[the Microsoft documentation]. -[id="cloud-experts-entra-id-idp-register-application"] == Registering a new application in Entra ID for authentication To register your application in Entra ID, first create the OAuth callback URL, then register your application. @@ -50,7 +48,7 @@ Remember to save this callback URL; it will be required later in the process. 
[source,terminal] ---- $ domain=$(rosa describe cluster -c | grep "DNS" | grep -oE '\S+.openshiftapps.com') -$ echo "OAuth callback URL: https://oauth-openshift.apps.$domain/oauth2callback/AAD" +echo "OAuth callback URL: https://oauth.${domain}/oauth2callback/AAD" ---- + The "AAD" directory at the end of the OAuth callback URL must match the OAuth identity provider name that you will set up later in this process. @@ -82,15 +80,12 @@ image:azure-portal_add-a-client-secret-page.png[Azure Portal - Add a Client Secr + image:azure-portal_copy-client-secret-page.png[Azure Portal - Copy Client Secret page] -[id="rosa-mobb-entra-id-configure-claims"] == Configuring the application registration in Entra ID to include optional and group claims So that {product-title} has enough information to create the user's account, you must configure Entra ID to give two optional claims: `email` and `preferred_username`. For more information about optional claims in Entra ID, see link:https://learn.microsoft.com/en-us/azure/active-directory/develop/optional-claims[the Microsoft documentation]. In addition to individual user authentication, {product-title} provides group claim functionality. This functionality allows an OpenID Connect (OIDC) identity provider, such as Entra ID, to offer a user's group membership for use within {product-title}. -[discrete] -[id="rosa-mobb-entra-id-configure-optional-claims"] === Configuring optional claims You can configure the optional claims in Entra ID. @@ -115,8 +110,6 @@ image:azure-portal_add-optional-preferred_username-claims-page.png[Azure Portal + image:azure-portal_add-optional-claims-graph-permissions-prompt.png[Azure Portal - Add Optional Claims - Graph Permissions Prompt] -[discrete] -[id="rosa-mobb-entra-id-configure-group-claims"] === Configuring group claims (optional) Configure Entra ID to offer a groups claim. @@ -135,7 +128,6 @@ In this example, the group claim includes all of the security groups that a user + image:azure-portal_edit-group-claims-page.png[Azure Portal - Edit Groups Claim Page] -[id="cloud-experts-entra-id-idp-configure-cluster"] == Configuring the {product-title} cluster to use Entra ID as the identity provider You must configure {product-title} to use Entra ID as its identity provider. @@ -201,15 +193,12 @@ $ rosa create idp \ After a few minutes, the cluster authentication Operator reconciles your changes, and you can log in to the cluster by using Entra ID. -[id="rosa-mobb-azure-oidc-grant-permissions"] == Granting additional permissions to individual users and groups When your first log in, you might notice that you have very limited permissions. By default, {product-title} only grants you the ability to create new projects, or namespaces, in the cluster. Other projects are restricted from view. You must grant these additional abilities to individual users and groups. -[discrete] -[id="rosa-mobb-azure-oidc-grant-permissions-users"] === Granting additional permissions to individual users {product-title} includes a significant number of preconfigured roles, including the `cluster-admin` role that grants full access and control over the cluster. @@ -228,8 +217,6 @@ $ rosa grant user cluster-admin \ <1> Provide the Entra ID username that you want to have cluster admin permissions. 
--
 
-[discrete]
-[id="cloud-experts-entra-id-idp-additional-permissions-groups"]
 === Granting additional permissions to individual groups
 
 If you opted to enable group claims, the cluster OAuth provider automatically creates or updates the user's group memberships by using the group ID. The cluster OAuth provider does not automatically create `RoleBindings` and `ClusterRoleBindings` for the groups that are created; you are responsible for creating those bindings by using your own processes.
@@ -252,7 +239,6 @@ $ oc create clusterrolebinding cluster-admin-group \
 +
 Now, any user in the specified group automatically receives `cluster-admin` access.
 
-[id="cloud-experts-entra-id-idp-additional-resources"]
 [role="_additional-resources"]
 == Additional resources
 
diff --git a/cloud_experts_tutorials/index.adoc b/cloud_experts_tutorials/index.adoc
index 0e79a967853b..b46bb742de27 100644
--- a/cloud_experts_tutorials/index.adoc
+++ b/cloud_experts_tutorials/index.adoc
@@ -4,6 +4,9 @@ include::_attributes/attributes-openshift-dedicated.adoc[]
 :context: tutorials-overview
 
-Step-by-step tutorials from Red{nbsp}Hat experts to help you get the most out of your Managed OpenShift cluster.
+Use the step-by-step tutorials from Red{nbsp}Hat experts to get the most out of your Managed OpenShift cluster.
 
-In an effort to make this Cloud Expert tutorial content available quickly, it may not yet be tested on every supported configuration.
+[IMPORTANT]
+====
+This content is authored by Red{nbsp}Hat experts but has not yet been tested on every supported configuration.
+====
diff --git a/cloud_experts_tutorials/rosa-mobb-verify-permissions-sts-deployment.adoc b/cloud_experts_tutorials/rosa-mobb-verify-permissions-sts-deployment.adoc
index f1f7f774b049..adb1292853c6 100644
--- a/cloud_experts_tutorials/rosa-mobb-verify-permissions-sts-deployment.adoc
+++ b/cloud_experts_tutorials/rosa-mobb-verify-permissions-sts-deployment.adoc
@@ -1,6 +1,6 @@
 :_mod-docs-content-type: ASSEMBLY
 [id="rosa-mobb-verify-permissions-sts-deployment"]
-= Tutorial: Verifying Permissions for a ROSA STS Deployment
+= Tutorial: Verifying permissions for a ROSA STS deployment
 include::_attributes/attributes-openshift-dedicated.adoc[]
 :context: rosa-mobb-verify-permissions-sts-deployment
 
@@ -16,16 +16,26 @@ toc::[]
// ---
 To proceed with the deployment of a ROSA cluster, an account must support the required roles and permissions.
-AWS Service Control Policies (SCPs) cannot block the API calls made by the installer or operator roles.
+AWS Service Control Policies (SCPs) cannot block the API calls made by the installer or Operator roles.
 
-Details about the IAM resources required for an STS-enabled installation of ROSA can be found here: xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources]
+ifndef::openshift-rosa-hcp[]
+Details about the IAM resources required for an STS-enabled installation of ROSA can be found here: xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources for ROSA clusters that use STS]
+endif::openshift-rosa-hcp[]
+ifdef::openshift-rosa-hcp[]
+Details about the IAM resources required for an STS-enabled installation of ROSA can be found here: link:https://docs.openshift.com/rosa/rosa_architecture/rosa-sts-about-iam-resources.html[About IAM resources for ROSA clusters]
+endif::openshift-rosa-hcp[]
 
 This guide is validated for ROSA v4.11.X.
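+
+As an illustration of the kind of check this guide covers, you can spot-check whether a role's attached policies allow a specific API action by simulating it with the AWS CLI. The account ID, role name, and action in this sketch are placeholders:
+
+[source,terminal]
+----
+$ aws iam simulate-principal-policy \
+  --policy-source-arn arn:aws:iam::<account_id>:role/ManagedOpenShift-Installer-Role \
+  --action-names iam:GetRole
+----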
== Prerequisites * link:https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html[AWS CLI] +ifndef::openshift-rosa-hcp[] * xref:../cli_reference/rosa_cli/rosa-get-started-cli.adoc#rosa-get-started-cli[ROSA CLI] v1.2.6 +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +* link:https://docs.openshift.com/rosa/cli_reference/rosa_cli/rosa-get-started-cli.html[ROSA CLI] v1.2.6 +endif::openshift-rosa-hcp[] * link:https://stedolan.github.io/jq/[jq CLI] * link:https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html[AWS role with required permissions] From cb3cd2c04952b9d5bf73d0ef69cd3b7151c64387 Mon Sep 17 00:00:00 2001 From: Steven Smith Date: Tue, 11 Feb 2025 14:20:37 -0500 Subject: [PATCH 207/669] Adds an extraction command to exposing the registry --- ...stry-exposing-default-registry-manually.adoc | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) diff --git a/modules/registry-exposing-default-registry-manually.adoc b/modules/registry-exposing-default-registry-manually.adoc index 284c330deddf..d229198a4a36 100644 --- a/modules/registry-exposing-default-registry-manually.adoc +++ b/modules/registry-exposing-default-registry-manually.adoc @@ -17,35 +17,42 @@ You can expose the route by using the `defaultRoute` parameter in the `configs.i To expose the registry using the `defaultRoute`: -. Set `defaultRoute` to `true`: +. Set `defaultRoute` to `true` by running the following command: + [source,terminal] ---- $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge ---- + -. Get the default registry route: +. Get the default registry route by running the following command: + [source,terminal] ---- $ HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') ---- -. Get the certificate of the Ingress Operator: +. Get the certificate of the Ingress Operator by running the following command: + [source,terminal] ---- $ oc extract secret/$(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm ---- -. Enable the cluster's default certificate to trust the route using the following commands: +. Move the extracted certificate to the system's trusted CA directory by running the following command: ++ +[source,terminal] +---- +$ sudo mv tls.crt /etc/pki/ca-trust/source/anchors/ +---- + +. Enable the cluster's default certificate to trust the route by running the following command: + [source,terminal] ---- $ sudo update-ca-trust enable ---- -. Log in with podman using the default route: +. Log in with podman using the default route by running the following command: + [source,terminal] ---- From 541d01a4f8169f72be48d207fe650df06fc7839d Mon Sep 17 00:00:00 2001 From: Prithviraj Patil <116709298+prithvipatil97@users.noreply.github.com> Date: Wed, 12 Feb 2025 15:30:23 +0530 Subject: [PATCH 208/669] Prerequisites are missing and editing steps from Update log6x-about.adoc - Here is the link: https://docs.openshift.com/container-platform/4.16/observability/logging/logging-6.0/log6x-about.html#quick-start - Problems: - Quick Start indentation is wrong. - Prerequisites are missing. - Need to remove Step 6 and add it in Step 1 - Need to remove step 9 - We are performing the following changes through this PR: - Corrected Quick Start indentation, it should be in line with Validation. - Added required Prerequisites. 
- Removed Step 6 (Install the Cluster Observability Operator.) and added it under Step 1. - Removed Step 9, as it is a verification, not a Step. So, added it a Verification point. Prerequisites (1) are missing and editing steps from Update log6x-about.adoc Committing the suggested changes. - * Verify that logs are visible in the *Log* section of the *Observe* tab in the OpenShift web console. + * Verify that logs are visible in the *Log* section of the *Observe* tab in the {product-title} web console. Co-authored-by: Servesha Dudhgaonkar <49194531+xenolinux@users.noreply.github.com> (2) Prerequisites are missing and editing steps from Update log6x-about.adoc Committing the suggested changes in Peer Review - * You have administrator permissions. + * You have access to an {product-title} cluster with `cluster-admin` permissions. Co-authored-by: Servesha Dudhgaonkar <49194531+xenolinux@users.noreply.github.com> --- observability/logging/logging-6.0/log6x-about.adoc | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/observability/logging/logging-6.0/log6x-about.adoc b/observability/logging/logging-6.0/log6x-about.adoc index c6514c40d8bb..1f9b8acaa0af 100644 --- a/observability/logging/logging-6.0/log6x-about.adoc +++ b/observability/logging/logging-6.0/log6x-about.adoc @@ -34,14 +34,16 @@ The Cluster Logging Operator manages the deployment and configuration of the col == Validation Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The `ClusterLogForwarder` resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. -=== Quick Start +== Quick Start .Prerequisites -* Cluster administrator permissions +* You have access to an {product-title} cluster with `cluster-admin` permissions. +* You installed the {oc-first}. +* You have access to a supported object store. For example, AWS S3, Google Cloud Storage, {azure-short}, Swift, Minio, or {rh-storage}. .Procedure -. Install the `OpenShift Logging` and `Loki` Operators from OperatorHub. +. Install the `{clo}`, `{loki-op}`, and `{coo-first}` from OperatorHub. . Create a secret to access an existing object storage bucket: + @@ -95,8 +97,6 @@ $ oc create sa collector -n openshift-logging $ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging ---- -. Install the Cluster Observability Operator. - . Create a `UIPlugin` to enable the Log section in the Observe tab: + [source,yaml] @@ -156,4 +156,5 @@ spec: - default-lokistack ---- -. Verify that logs are visible in the Log section of the Observe tab in the OpenShift web console. +.Verification +* Verify that logs are visible in the *Log* section of the *Observe* tab in the {product-title} web console. 
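+
+As a quick CLI check, you can also confirm that the pods created by this configuration are running, for example:
+
+[source,terminal]
+----
+$ oc get pods -n openshift-logging
+----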
From 209d0d01631c89b1c3d71e9c24e2de0ab69c4f44 Mon Sep 17 00:00:00 2001 From: Apurva Bhide Date: Wed, 12 Feb 2025 23:36:48 +0530 Subject: [PATCH 209/669] OADP-5637: Update recommendation for velero loglevel values --- modules/oadp-debugging-oc-cli.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/oadp-debugging-oc-cli.adoc b/modules/oadp-debugging-oc-cli.adoc index e4d15c9d93b6..3c470d2ad1d0 100644 --- a/modules/oadp-debugging-oc-cli.adoc +++ b/modules/oadp-debugging-oc-cli.adoc @@ -63,4 +63,4 @@ The following `logLevel` values are available: * `fatal` * `panic` -It is recommended to use `debug` for most logs. +It is recommended to use the `info` `logLevel` value for most logs. From 794318852bf853eb1544c29d26d3115199382f5c Mon Sep 17 00:00:00 2001 From: Eliska Romanova Date: Wed, 12 Feb 2025 09:37:22 +0100 Subject: [PATCH 210/669] OBSDOCS-1492: [DOC] Early validation for monitoring configmaps --- ...onfig-map-reference-for-the-cluster-monitoring-operator.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc b/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc index 4ee35e301ffd..b10eac377c1c 100644 --- a/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc +++ b/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc @@ -32,7 +32,7 @@ Only the parameters and fields listed in this reference are supported for config For more information about supported configurations, see xref:../monitoring/configuring-the-monitoring-stack.adoc#maintenance-and-support_configuring-the-monitoring-stack[Maintenance and support for monitoring]. * Configuring cluster monitoring is optional. * If a configuration does not exist or is empty, default values are used. -* If the configuration is invalid YAML data, the Cluster Monitoring Operator stops reconciling the resources and reports `Degraded=True` in the status conditions of the Operator. +* If the configuration has invalid YAML data, or if it contains unsupported or duplicated fields that bypassed early validation, the Cluster Monitoring Operator stops reconciling the resources and reports the `Degraded=True` status in the status conditions of the Operator. ==== == AdditionalAlertmanagerConfig From 5799412f5c30c3bfcaf7ab0ed206d35739841e6e Mon Sep 17 00:00:00 2001 From: amrin101 Date: Mon, 30 Dec 2024 15:50:30 +0530 Subject: [PATCH 211/669] Update builds-restricting-build-strategy-globally.adoc Removed the pre-requisite as that was not required. --- modules/builds-restricting-build-strategy-globally.adoc | 4 ---- 1 file changed, 4 deletions(-) diff --git a/modules/builds-restricting-build-strategy-globally.adoc b/modules/builds-restricting-build-strategy-globally.adoc index 8937671956ce..b8580ee097c2 100644 --- a/modules/builds-restricting-build-strategy-globally.adoc +++ b/modules/builds-restricting-build-strategy-globally.adoc @@ -9,10 +9,6 @@ You can allow a set of specific users to create builds with a particular strategy. -.Prerequisites - -* Disable global access to the build strategy. - .Procedure * Assign the role that corresponds to the build strategy to a specific user. For From 03c48982e81b9a65d15189a77da98ba9a20cf335 Mon Sep 17 00:00:00 2001 From: Padraig O'Grady Date: Tue, 26 Nov 2024 09:59:59 +0000 Subject: [PATCH 212/669] TELCODOCS-2036: Procedure added for MLX secure boot TELCODOCS-2036: Include Step 1. 
Configure virtual functions
TELCODOCS-2036: Include Step 2. Configure the sriov operator with the Mellanox plugin disabled
TELCODOCS-2036: Include Step 3. Check virtual functions after rebooting
TELCODOCS-2036: Include Step 4. Enable secure boot
TELCODOCS-2036: Dev feedback applied
TELCODOCS-2036: '_mod-docs-content-type' commented out
TELCODOCS-2036: Mellanox topic commented out
TELCODOCS-2036: Mellanox topic reinstated
TELCODOCS-2036: Some full stops added
TELCODOCS-2036: Dev feedback #2 applied
TELCODOCS-2036: Dev feedback #3 applied
TELCODOCS-2036: Dev feedback #4 applied
TELCODOCS-2036: Dev feedback #4 applied
TELCODOCS-2036: Dev feedback #4 applied
TELCODOCS-2036: Dev feedback #4 applied
TELCODOCS-2036: Dev feedback #4 applied
TELCODOCS-2036: Dev feedback #5 applied
TELCODOCS-2036: Peer review feedback applied
TELCODOCS-2036: Repeating text removed
TELCODOCS-2036: Peer review feedback #2 applied
---
 modules/nw-sriov-nic-mlx-secure-boot.adoc | 84 +++++++++++++++++++
 .../configuring-sriov-device.adoc         |  3 +
 2 files changed, 87 insertions(+)
 create mode 100644 modules/nw-sriov-nic-mlx-secure-boot.adoc

diff --git a/modules/nw-sriov-nic-mlx-secure-boot.adoc b/modules/nw-sriov-nic-mlx-secure-boot.adoc
new file mode 100644
index 000000000000..68cfd04a14ef
--- /dev/null
+++ b/modules/nw-sriov-nic-mlx-secure-boot.adoc
@@ -0,0 +1,84 @@
+// Module included in the following assemblies:
+//
+// * networking/hardware_networks/configuring-sriov-device.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="nw-sriov-nic-mlx-secure-boot_{context}"]
+= Configuring the SR-IOV Network Operator on Mellanox cards when Secure Boot is enabled
+
+The SR-IOV Network Operator supports an option to skip the firmware configuration for Mellanox devices. This option allows you to create virtual functions by using the SR-IOV Network Operator when the system has secure boot enabled. You must manually configure and allocate the number of virtual functions in the firmware before switching the system to secure boot.
+
+[NOTE]
+====
+The number of virtual functions in the firmware is the maximum number of virtual functions that you can request in the policy.
+====
+
+.Procedure
+
+. While secure boot is disabled and the sriov-config daemon is in use, configure the virtual functions (VFs) by running the following command:
++
+[source,terminal]
+----
+$ mstconfig -d 0000:b1:00.1 set SRIOV_EN=1 NUM_OF_VFS=16 <1> <2>
+----
+<1> The `SRIOV_EN` firmware parameter enables SR-IOV support on the Mellanox card.
+<2> The `NUM_OF_VFS` firmware parameter specifies the number of virtual functions to enable in the firmware.
+
+. Configure the SR-IOV Network Operator by disabling the Mellanox plugin. See the following `SriovOperatorConfig` example configuration:
++
+[source,yaml]
+----
+apiVersion: sriovnetwork.openshift.io/v1
+kind: SriovOperatorConfig
+metadata:
+  name: default
+  namespace: openshift-sriov-network-operator
+spec:
+  configDaemonNodeSelector: {}
+  configurationMode: daemon
+  disableDrain: false
+  disablePlugins:
+  - mellanox
+  enableInjector: true
+  enableOperatorWebhook: true
+  logLevel: 2
+----
+
+. Reboot the system to enable the virtual functions and the configuration settings.
+
+. 
Check the virtual functions (VFs) after rebooting the system by running the following command: ++ +[source,terminal] +---- +$ oc -n openshift-sriov-network-operator get sriovnetworknodestate.sriovnetwork.openshift.io worker-0 -oyaml +---- ++ +.Example output +[source,yaml] +---- +- deviceID: 101d + driver: mlx5_core + eSwitchMode: legacy + linkSpeed: -1 Mb/s + linkType: ETH + mac: 08:c0:eb:96:31:25 + mtu: 1500 + name: ens3f1np1 + pciAddress: 0000:b1:00.1 <1> + totalvfs: 16 + vendor: 15b3 +---- +<1> The `totalvfs` value is the same number used in the `mstconfig` command earlier in the procedure. + +. Enable secure boot to prevent unauthorized operating systems and malicious software from loading during the device's boot process. + +.. Enable secure boot using the BIOS (Basic Input/Output System). ++ +[source,terminal] +---- +Secure Boot: Enabled +Secure Boot Policy: Standard +Secure Boot Mode: Mode Deployed +---- + +.. Reboot the system. diff --git a/networking/hardware_networks/configuring-sriov-device.adoc b/networking/hardware_networks/configuring-sriov-device.adoc index d0c63b1147d9..372da6f2ccbf 100644 --- a/networking/hardware_networks/configuring-sriov-device.adoc +++ b/networking/hardware_networks/configuring-sriov-device.adoc @@ -15,6 +15,9 @@ include::modules/nw-sriov-networknodepolicy-object.adoc[leveloffset=+1] // A direct companion to nw-sriov-networknodepolicy-object // Virtual function (VF) partitioning for SR-IOV devices + +include::modules/nw-sriov-nic-mlx-secure-boot.adoc[leveloffset=+2] + include::modules/nw-sriov-nic-partitioning.adoc[leveloffset=+2] // Configuring SR-IOV network devices From a1f6e16aae0a60c53212dd1fc08cbf447bea171c Mon Sep 17 00:00:00 2001 From: Olivia Brown Date: Mon, 10 Feb 2025 15:21:57 -0500 Subject: [PATCH 213/669] Updating oc adm upgrade status for 4.18+ --- ...pdate-upgrading-oc-adm-upgrade-status.adoc | 43 ++++++++----------- 1 file changed, 18 insertions(+), 25 deletions(-) diff --git a/modules/update-upgrading-oc-adm-upgrade-status.adoc b/modules/update-upgrading-oc-adm-upgrade-status.adoc index 9b4bb08b4d0a..936936984afe 100644 --- a/modules/update-upgrading-oc-adm-upgrade-status.adoc +++ b/modules/update-upgrading-oc-adm-upgrade-status.adoc @@ -41,42 +41,35 @@ $ oc adm upgrade status ---- = Control Plane = Assessment: Progressing -Target Version: 4.14.1 (from 4.14.0) -Completion: 97% -Duration: 54m +Target Version: 4.17.1 (from 4.17.0) +Updating: machine-config +Completion: 97% (32 operators updated, 1 updating, 0 waiting) +Duration: 54m (Est. Time Remaining: <10m) Operator Status: 32 Healthy, 1 Unavailable Control Plane Nodes NAME ASSESSMENT PHASE VERSION EST MESSAGE -ip-10-0-53-40.us-east-2.compute.internal Progressing Draining 4.14.0 +10m -ip-10-0-30-217.us-east-2.compute.internal Outdated Pending 4.14.0 ? -ip-10-0-92-180.us-east-2.compute.internal Outdated Pending 4.14.0 ? +ip-10-0-53-40.us-east-2.compute.internal Progressing Draining 4.17.0 +10m +ip-10-0-30-217.us-east-2.compute.internal Outdated Pending 4.17.0 ? +ip-10-0-92-180.us-east-2.compute.internal Outdated Pending 4.17.0 ? = Worker Upgrade = -= Worker Pool = -Worker Pool: worker -Assessment: Progressing -Completion: 0% -Worker Status: 3 Total, 2 Available, 1 Progressing, 3 Outdated, 1 Draining, 0 Excluded, 0 Degraded - -Worker Pool Nodes -NAME ASSESSMENT PHASE VERSION EST MESSAGE -ip-10-0-4-159.us-east-2.compute.internal Progressing Draining 4.14.0 +10m -ip-10-0-20-162.us-east-2.compute.internal Outdated Pending 4.14.0 ? 
-ip-10-0-99-40.us-east-2.compute.internal Outdated Pending 4.14.0 ?
+WORKER POOL ASSESSMENT COMPLETION STATUS
+worker Progressing 0% (0/2) 1 Available, 1 Progressing, 1 Draining
+infra Progressing 50% (1/2) 1 Available, 1 Progressing, 1 Draining
 
-= Worker Pool =
-Worker Pool: infra
-Assessment: Progressing
-Completion: 0%
-Worker Status: 1 Total, 0 Available, 1 Progressing, 1 Outdated, 1 Draining, 0 Excluded, 0 Degraded
+Worker Pool Nodes: Worker
+NAME ASSESSMENT PHASE VERSION EST MESSAGE
+ip-10-0-4-159.us-east-2.compute.internal Progressing Draining 4.17.0 +10m
+ip-10-0-99-40.us-east-2.compute.internal Outdated Pending 4.17.0 ?
 
-Worker Pool Node
+Worker Pool Nodes: infra
 NAME ASSESSMENT PHASE VERSION EST MESSAGE
-ip-10-0-4-159-infra.us-east-2.compute.internal Progressing Draining 4.14.0 +10m
+ip-10-0-4-159-infra.us-east-2.compute.internal Progressing Draining 4.17.0 +10m
+ip-10-0-20-162.us-east-2.compute.internal Completed Updated 4.17.1 -
 
 = Update Health =
 SINCE LEVEL IMPACT MESSAGE
-14m4s Info None Update is proceeding well
+54m4s Info None Update is proceeding well
 ----
\ No newline at end of file

From 8a94774c9e57eba59a9092b0fb296dd5c6a05b5a Mon Sep 17 00:00:00 2001
From: Laura Hinson
Date: Tue, 11 Feb 2025 11:31:27 -0500
Subject: [PATCH 214/669] [OSDOCS-13345]: Fix command in HCP backup docs

---
 modules/hcp-dr-oadp-backup-cp-workload.adoc | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/modules/hcp-dr-oadp-backup-cp-workload.adoc b/modules/hcp-dr-oadp-backup-cp-workload.adoc
index 0df86a90ba14..08768c3779c7 100644
--- a/modules/hcp-dr-oadp-backup-cp-workload.adoc
+++ b/modules/hcp-dr-oadp-backup-cp-workload.adoc
@@ -34,7 +34,8 @@ Note the infrastructure ID to use in the next step.
 +
 [source,terminal]
 ----
-$ oc patch cluster.cluster.x-k8s.io \
+$ oc --kubeconfig <management_cluster_kubeconfig_file> \
+  patch cluster.cluster.x-k8s.io \
   -n local-cluster-<hosted_cluster_name> \
   --type json -p '[{"op": "add", "path": "/spec/paused", "value": true}]'
----

From f4d9e976f927a0b66d2db404fb811d1e3f2158e8 Mon Sep 17 00:00:00 2001
From: Ben Hardesty
Date: Thu, 13 Feb 2025 11:21:38 -0500
Subject: [PATCH 215/669] Remove duplicate Support book in ROSA HCP topic map

---
 _topic_maps/_topic_map_rosa_hcp.yml | 49 -----------------------------
 1 file changed, 49 deletions(-)

diff --git a/_topic_maps/_topic_map_rosa_hcp.yml b/_topic_maps/_topic_map_rosa_hcp.yml
index 5a6e9e9bb89f..23495ff40e8a 100644
--- a/_topic_maps/_topic_map_rosa_hcp.yml
+++ b/_topic_maps/_topic_map_rosa_hcp.yml
@@ -251,55 +251,6 @@ Topics:
 # docs needed to ensure that xrefs in "Planning your environment" work;
 # omit as required by further HCP migration work.
--- -Name: Support -Dir: support -Distros: openshift-rosa-hcp -Topics: -# - Name: Support overview -# File: index -# - Name: Managing your cluster resources -# File: managing-cluster-resources -# - Name: Approved Access -# File: approved-access -# - Name: Getting support -# File: getting-support -# Distros: openshift-rosa-hcp -# - Name: Remote health monitoring with connected clusters -# Dir: remote_health_monitoring -# Distros: openshift-rosa-hcp -# Topics: -# - Name: About remote health monitoring -# File: about-remote-health-monitoring -# - Name: Showing data collected by remote health monitoring -# File: showing-data-collected-by-remote-health-monitoring -# - Name: Using Insights to identify issues with your cluster -# File: using-insights-to-identify-issues-with-your-cluster -# - Name: Using Insights Operator -# File: using-insights-operator -# - Name: Gathering data about your cluster -# File: gathering-cluster-data -# Distros: openshift-rosa-hcp -# - Name: Summarizing cluster specifications -# File: summarizing-cluster-specifications -# Distros: openshift-rosa-hcp -- Name: Troubleshooting - Dir: troubleshooting - Distros: openshift-rosa-hcp - Topics: - - Name: Troubleshooting ROSA installations - File: rosa-troubleshooting-installations - - Name: Troubleshooting networking - File: rosa-troubleshooting-networking - - Name: Troubleshooting IAM roles - File: rosa-troubleshooting-iam-resources - Distros: openshift-rosa-hcp - - Name: Troubleshooting cluster deployments - File: rosa-troubleshooting-deployments - Distros: openshift-rosa-hcp - - Name: Red Hat OpenShift Service on AWS managed resources - File: sd-managed-resources - Distros: openshift-rosa-hcp ---- # OSDOCS-11789: Adding the minimum chapters of CLI doc needed # to ensure that xrefs in "Planning your environment" work; # @BM feel free to alter as needed From ea73c24ebeb3eacbcae85dc08102ba74b476ab31 Mon Sep 17 00:00:00 2001 From: Jesse Dohmann Date: Tue, 28 Jan 2025 13:47:41 -0800 Subject: [PATCH 216/669] OSDOCS-12889: add openstack floatingip config --- modules/nw-osp-specify-floating-ip.adoc | 64 ++++++++++++++++++++++++ networking/load-balancing-openstack.adoc | 3 ++ 2 files changed, 67 insertions(+) create mode 100644 modules/nw-osp-specify-floating-ip.adoc diff --git a/modules/nw-osp-specify-floating-ip.adoc b/modules/nw-osp-specify-floating-ip.adoc new file mode 100644 index 000000000000..96e14de0749b --- /dev/null +++ b/modules/nw-osp-specify-floating-ip.adoc @@ -0,0 +1,64 @@ +// Modules included in the following assemblies: +// +// * networking/load-balancing-openstack.adoc + +:_mod-docs-content-type: PROCEDURE +[id="nw-osp-specify-floating-ip_{context}"] += Specifying a floating IP address in the Ingress Controller + +By default, a floating IP address gets randomly assigned to your {product-title} cluster on {rh-openstack-first} upon deployment. This floating IP address is associated with your Ingress port. + +You might want to pre-create a floating IP address before updating your DNS records and cluster deployment. In this situation, you can define a floating IP address to the Ingress Controller. You can do this regardless of whether you are using Octavia or a user-managed cluster. + +.Procedure + +. 
Create the Ingress Controller custom resource (CR) file with the floating IPs:
++
+.Example Ingress config `sample-ingress.yaml`
+[source,yaml]
+----
+apiVersion: operator.openshift.io/v1
+kind: IngressController
+metadata:
+  namespace: openshift-ingress-operator
+  name: <name> <1>
+spec:
+  domain: <domain> <2>
+  endpointPublishingStrategy:
+    type: LoadBalancerService
+    loadBalancer:
+      scope: External <3>
+      providerParameters:
+        type: OpenStack
+        openstack:
+          floatingIP: <ip_address> <4>
+----
+<1> The name of your Ingress Controller. If you are using the default Ingress Controller, the value for this field is `default`.
+<2> The DNS name serviced by the Ingress Controller.
+<3> You must set the scope to `External` to use a floating IP address.
+<4> The floating IP address associated with the port your Ingress Controller is listening on.
+
+. Apply the CR file by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f sample-ingress.yaml
+----
+
+. Update your DNS records with the Ingress Controller endpoint:
++
+[source,text]
+----
+*.apps.<name>.<domain>. IN A <ip_address>
+----
+
+. Continue with creating your {product-title} cluster.
+
+.Verification
+
+* Confirm that the load balancer was successfully provisioned by checking the `IngressController` conditions using the following command:
++
+[source,terminal]
+----
+$ oc get ingresscontroller -n openshift-ingress-operator -o jsonpath="{.status.conditions}" | yq -PC
+----
diff --git a/networking/load-balancing-openstack.adoc b/networking/load-balancing-openstack.adoc
index cecb2551828a..b642026720ce 100644
--- a/networking/load-balancing-openstack.adoc
+++ b/networking/load-balancing-openstack.adoc
@@ -16,3 +16,6 @@ include::modules/nw-osp-services-external-load-balancer.adoc[leveloffset=+1]
 
 // Configuring a user-managed load balancer
 include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+2]
+
+// Configuring an Ingress controller to use floating IPs
+include::modules/nw-osp-specify-floating-ip.adoc[leveloffset=+1]

From e8f71c81f823ad5708475d0a9e74585d069f0235 Mon Sep 17 00:00:00 2001
From: Jesse Dohmann
Date: Mon, 3 Feb 2025 14:56:15 -0800
Subject: [PATCH 217/669] OSDOCS-9474: add aws eip procedure

---
 ...ress-aws-static-eip-nlb-configuration.adoc | 74 +++++++++++++++++++
 ...nfiguring-ingress-cluster-traffic-aws.adoc |  2 +
 2 files changed, 76 insertions(+)
 create mode 100644 modules/nw-ingress-aws-static-eip-nlb-configuration.adoc

diff --git a/modules/nw-ingress-aws-static-eip-nlb-configuration.adoc b/modules/nw-ingress-aws-static-eip-nlb-configuration.adoc
new file mode 100644
index 000000000000..e68595ee3c50
--- /dev/null
+++ b/modules/nw-ingress-aws-static-eip-nlb-configuration.adoc
@@ -0,0 +1,74 @@
+// Modules included in the following assemblies:
+//
+// * networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-aws.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="nw-ingress-aws-static-eip-nlb-configuration_{context}"]
+= Configuring AWS Elastic IP (EIP) addresses for a Network Load Balancer (NLB)
+
+You can specify static IPs, otherwise known as elastic IPs, for your network load balancer (NLB) in the Ingress Controller. This is useful in situations where you want to configure appropriate firewall rules for your cluster network.
+
+.Prerequisites
+* You must have an installed AWS cluster.
+* You must know the names or IDs of the subnets to which you intend to map your `IngressController`.
+
+.Procedure
+
+. Create a YAML file that contains the following content:
++
+.`sample-ingress.yaml`
+[source,yaml]
+----
+apiVersion: operator.openshift.io/v1
+kind: IngressController
+metadata:
+  namespace: openshift-ingress-operator
+  name: <name> <1>
+spec:
+  domain: <domain> <2>
+  endpointPublishingStrategy:
+    loadBalancer:
+      scope: External <3>
+      type: LoadBalancerService
+      providerParameters:
+        type: AWS
+        aws:
+          type: NLB
+          networkLoadBalancer:
+            subnets: <4>
+              ids:
+              - <subnet_id>
+              names:
+              - <subnet_name_1>
+              - <subnet_name_2>
+            eipAllocations: <5>
+            - <eip_allocation_1>
+            - <eip_allocation_2>
+            - <eip_allocation_3>
+----
+<1> Replace the `<name>` placeholder with a name for the Ingress Controller.
+<2> Replace the `<domain>` placeholder with the DNS name serviced by the Ingress Controller.
+<3> The scope must be set to the value `External` and be Internet-facing in order to allocate EIPs.
+<4> Specify the IDs and names for your subnets. The total number of IDs and names must be equal to your allocated EIPs.
+<5> Specify the EIP addresses.
++
+[IMPORTANT]
+====
+You can specify a maximum of one subnet per availability zone. Only provide public subnets for external Ingress Controllers. You can associate one EIP address per subnet.
+====
+
+. Save and apply the CR file by entering the following command:
++
+[source,terminal]
+----
+$ oc apply -f sample-ingress.yaml
+----
+
+.Verification
+
+. Confirm the load balancer was provisioned successfully by checking the `IngressController` conditions by running the following command:
++
+[source,terminal]
+----
+$ oc get ingresscontroller -n openshift-ingress-operator -o jsonpath="{.status.conditions}" | yq -PC
+----
diff --git a/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-aws.adoc b/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-aws.adoc
index 958e1c4f6dfc..6f91305c7a83 100644
--- a/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-aws.adoc
+++ b/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-aws.adoc
@@ -44,6 +44,8 @@ include::modules/nw-ingress-setting-select-subnet-loadbalancerservice.adoc[level
 
 include::modules/nw-ingress-setting-update-subnet-loadbalancerservice.adoc[leveloffset=+2]
 
+include::modules/nw-ingress-aws-static-eip-nlb-configuration.adoc[leveloffset=+2]
+
 [role="_additional-resources"]
 [id="additional-resources_configuring-ingress-cluster-traffic-aws"]
 == Additional resources

From 28b5ab2353a31b27974e36303a0fb183fff46ff8 Mon Sep 17 00:00:00 2001
From: Gabriel McGoldrick
Date: Tue, 5 Nov 2024 17:05:27 +0000
Subject: [PATCH 218/669] OBSDOCS-1161 add content for validating that the monitoring stack is working

---
 ...ct-for-cluster-observability-operator.adoc | 61 ++++++++++++++
 ...ck-for-cluster-observability-operator.adoc | 81 +++++++++++++++++++
 ...ability-operator-to-monitor-a-service.adoc |  4 +
 3 files changed, 146 insertions(+)
 create mode 100644 modules/monitoring-validating-a-monitoringstack-for-cluster-observability-operator.adoc

diff --git a/modules/monitoring-creating-a-monitoringstack-object-for-cluster-observability-operator.adoc b/modules/monitoring-creating-a-monitoringstack-object-for-cluster-observability-operator.adoc
index 703d00388835..fc1d37ab08b6 100644
--- a/modules/monitoring-creating-a-monitoringstack-object-for-cluster-observability-operator.adoc
+++ b/modules/monitoring-creating-a-monitoringstack-object-for-cluster-observability-operator.adoc
@@ -59,3 +59,64 @@ $ oc -n ns1-coo get monitoringstack
 NAME AGE
 example-coo-monitoring-stack 81m
 ----
+
+. Run the following command to retrieve information about the active targets from Prometheus and filter the output to list only targets labeled with `app=prometheus-coo-example-app`. This verifies which targets are discovered and actively monitored by Prometheus with this specific label.
++
+[source,terminal]
+----
+$ oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/targets' | jq '.data.activeTargets[].discoveredLabels | select(.__meta_kubernetes_endpoints_label_app=="prometheus-coo-example-app")'
+----
++
+.Example output
+[source,json]
+----
+{
+  "__address__": "10.129.2.25:8080",
+  "__meta_kubernetes_endpoint_address_target_kind": "Pod",
+  "__meta_kubernetes_endpoint_address_target_name": "prometheus-coo-example-app-5d8cd498c7-9j2gj",
+  "__meta_kubernetes_endpoint_node_name": "ci-ln-8tt8vxb-72292-6cxjr-worker-a-wdfnz",
+  "__meta_kubernetes_endpoint_port_name": "web",
+  "__meta_kubernetes_endpoint_port_protocol": "TCP",
+  "__meta_kubernetes_endpoint_ready": "true",
+  "__meta_kubernetes_endpoints_annotation_endpoints_kubernetes_io_last_change_trigger_time": "2024-11-05T11:24:09Z",
+  "__meta_kubernetes_endpoints_annotationpresent_endpoints_kubernetes_io_last_change_trigger_time": "true",
+  "__meta_kubernetes_endpoints_label_app": "prometheus-coo-example-app",
+  "__meta_kubernetes_endpoints_labelpresent_app": "true",
+  "__meta_kubernetes_endpoints_name": "prometheus-coo-example-app",
+  "__meta_kubernetes_namespace": "ns1-coo",
+  "__meta_kubernetes_pod_annotation_k8s_ovn_org_pod_networks": "{\"default\":{\"ip_addresses\":[\"10.129.2.25/23\"],\"mac_address\":\"0a:58:0a:81:02:19\",\"gateway_ips\":[\"10.129.2.1\"],\"routes\":[{\"dest\":\"10.128.0.0/14\",\"nextHop\":\"10.129.2.1\"},{\"dest\":\"172.30.0.0/16\",\"nextHop\":\"10.129.2.1\"},{\"dest\":\"100.64.0.0/16\",\"nextHop\":\"10.129.2.1\"}],\"ip_address\":\"10.129.2.25/23\",\"gateway_ip\":\"10.129.2.1\",\"role\":\"primary\"}}",
+  "__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status": "[{\n \"name\": \"ovn-kubernetes\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.129.2.25\"\n ],\n \"mac\": \"0a:58:0a:81:02:19\",\n \"default\": true,\n \"dns\": {}\n}]",
+  "__meta_kubernetes_pod_annotation_openshift_io_scc": "restricted-v2",
+  "__meta_kubernetes_pod_annotation_seccomp_security_alpha_kubernetes_io_pod": "runtime/default",
+  "__meta_kubernetes_pod_annotationpresent_k8s_ovn_org_pod_networks": "true",
+  "__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status": "true",
+  "__meta_kubernetes_pod_annotationpresent_openshift_io_scc": "true",
+  "__meta_kubernetes_pod_annotationpresent_seccomp_security_alpha_kubernetes_io_pod": "true",
+  "__meta_kubernetes_pod_controller_kind": "ReplicaSet",
+  "__meta_kubernetes_pod_controller_name": "prometheus-coo-example-app-5d8cd498c7",
+  "__meta_kubernetes_pod_host_ip": "10.0.128.2",
+  "__meta_kubernetes_pod_ip": "10.129.2.25",
+  "__meta_kubernetes_pod_label_app": "prometheus-coo-example-app",
+  "__meta_kubernetes_pod_label_pod_template_hash": "5d8cd498c7",
+  "__meta_kubernetes_pod_labelpresent_app": "true",
+  "__meta_kubernetes_pod_labelpresent_pod_template_hash": "true",
+  "__meta_kubernetes_pod_name": "prometheus-coo-example-app-5d8cd498c7-9j2gj",
+  "__meta_kubernetes_pod_node_name": "ci-ln-8tt8vxb-72292-6cxjr-worker-a-wdfnz",
+  "__meta_kubernetes_pod_phase": "Running",
+  "__meta_kubernetes_pod_ready": "true",
+  "__meta_kubernetes_pod_uid": "054c11b6-9a76-4827-a860-47f3a4596871",
"__meta_kubernetes_service_label_app": "prometheus-coo-example-app", + "__meta_kubernetes_service_labelpresent_app": "true", + "__meta_kubernetes_service_name": "prometheus-coo-example-app", + "__metrics_path__": "/metrics", + "__scheme__": "http", + "__scrape_interval__": "30s", + "__scrape_timeout__": "10s", + "job": "serviceMonitor/ns1-coo/prometheus-coo-example-monitor/0" +} +---- ++ +[NOTE] +==== +The above example uses link:https://jqlang.github.io/jq/[`jq` command-line JSON processor] to format the output for convenience. +==== diff --git a/modules/monitoring-validating-a-monitoringstack-for-cluster-observability-operator.adoc b/modules/monitoring-validating-a-monitoringstack-for-cluster-observability-operator.adoc new file mode 100644 index 000000000000..74b8760eb39b --- /dev/null +++ b/modules/monitoring-validating-a-monitoringstack-for-cluster-observability-operator.adoc @@ -0,0 +1,81 @@ +// Module included in the following assemblies: +// +// * observability/cluster-observability-operator/configuring-the-cluster-observability-operator-to-monitor-a-service.adoc + +:_mod-docs-content-type: PROCEDURE +[id="monitoring-validating-a-monitoringstack-for-cluster-observability-operator_{context}"] += Validating the monitoring stack + +To validate that the monitoring stack is working correctly, access the example service and then view the gathered metrics. + +.Prerequisites + +* You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with administrative permissions for the namespace. +* You have installed the {coo-full}. +* You have deployed the `prometheus-coo-example-app` sample service in the `ns1-coo` namespace. +* You have created a `ServiceMonitor` object named `prometheus-coo-example-monitor` in the `ns1-coo` namespace. +* You have created a `MonitoringStack` object named `example-coo-monitoring-stack` in the `ns1-coo` namespace. + +.Procedure + +. Create a route to expose the example `prometheus-coo-example-app` service. From your terminal, run the command: ++ +[source,terminal] +---- +$ oc expose svc prometheus-coo-example-app +---- +. Access the route from your browser, or command line, to generate metrics. + +. 
Execute a query on the Prometheus pod to return the total HTTP requests metric: ++ +[source,terminal] +---- +$ oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/query?query=http_requests_total' +---- ++ +.Example output (formatted using `jq` for convenience) +[source,json] +---- +{ + "status": "success", + "data": { + "resultType": "vector", + "result": [ + { + "metric": { + "__name__": "http_requests_total", + "code": "200", + "endpoint": "web", + "instance": "10.129.2.25:8080", + "job": "prometheus-coo-example-app", + "method": "get", + "namespace": "ns1-coo", + "pod": "prometheus-coo-example-app-5d8cd498c7-9j2gj", + "service": "prometheus-coo-example-app" + }, + "value": [ + 1730807483.632, + "3" + ] + }, + { + "metric": { + "__name__": "http_requests_total", + "code": "404", + "endpoint": "web", + "instance": "10.129.2.25:8080", + "job": "prometheus-coo-example-app", + "method": "get", + "namespace": "ns1-coo", + "pod": "prometheus-coo-example-app-5d8cd498c7-9j2gj", + "service": "prometheus-coo-example-app" + }, + "value": [ + 1730807483.632, + "0" + ] + } + ] + } +} +---- diff --git a/observability/cluster_observability_operator/configuring-the-cluster-observability-operator-to-monitor-a-service.adoc b/observability/cluster_observability_operator/configuring-the-cluster-observability-operator-to-monitor-a-service.adoc index ad91404bee2f..a1683f75824b 100644 --- a/observability/cluster_observability_operator/configuring-the-cluster-observability-operator-to-monitor-a-service.adoc +++ b/observability/cluster_observability_operator/configuring-the-cluster-observability-operator-to-monitor-a-service.adoc @@ -25,3 +25,7 @@ include::modules/monitoring-specifying-how-a-service-is-monitored-by-cluster-obs // Create a MonitoringStack object to discover the service monitor include::modules/monitoring-creating-a-monitoringstack-object-for-cluster-observability-operator.adoc[leveloffset=+1] + +// Validate a MonitoringStack +include::modules/monitoring-validating-a-monitoringstack-for-cluster-observability-operator.adoc[leveloffset=+1] + From 8a4950b6a760a26291c50ef7d572ee06693c037c Mon Sep 17 00:00:00 2001 From: Gabriel McGoldrick Date: Wed, 12 Feb 2025 09:48:53 +0000 Subject: [PATCH 219/669] OBSDOCS-1669 add details on server-side apply --- modules/coo-server-side-apply.adoc | 302 ++++++++++++++++++ ...uster-observability-operator-overview.adoc | 2 + 2 files changed, 304 insertions(+) create mode 100644 modules/coo-server-side-apply.adoc diff --git a/modules/coo-server-side-apply.adoc b/modules/coo-server-side-apply.adoc new file mode 100644 index 000000000000..41ab94721d35 --- /dev/null +++ b/modules/coo-server-side-apply.adoc @@ -0,0 +1,302 @@ +//Module included in the following assemblies: +// +// * observability/cluster_observability_operator/cluster-observability-operator-overview.adoc + +:_mod-docs-content-type: PROCEDURE +[id="server-side-apply_{context}"] += Using Server-Side Apply to customize Prometheus resources + +Server-Side Apply is a feature that enables collaborative management of Kubernetes resources. The control plane tracks how different users and controllers manage fields within a Kubernetes object. It introduces the concept of field managers and tracks ownership of fields. This centralized control provides conflict detection and resolution, and reduces the risk of unintended overwrites. + +Compared to Client-Side Apply, it is more declarative, and tracks field management instead of last applied state. 
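As a minimal illustration of the mechanics described above (the manifest file `deployment.yaml` and the field manager name `team-a` are hypothetical), a Server-Side Apply sends the manifest to the API server, which records the caller as the manager of the fields it sets:

[source,terminal]
----
$ oc apply --server-side --field-manager=team-a -f deployment.yaml
----

If a field is already owned by another manager, the request fails with a conflict unless you pass `--force-conflicts`, which the procedure below demonstrates.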
+ +Server-Side Apply:: Declarative configuration management by updating a resource's state without needing to delete and recreate it. + +Field management:: Users can specify which fields of a resource they want to update, without affecting the other fields. + +Managed fields:: Kubernetes stores metadata about who manages each field of an object in the `managedFields` field within metadata. + +Conflicts:: If multiple managers try to modify the same field, a conflict occurs. The applier can choose to overwrite, relinquish control, or share management. + +Merge strategy:: Server-Side Apply merges fields based on the actor who manages them. + +.Procedure + +. Add a `MonitoringStack` resource using the following configuration: ++ +.Example `MonitoringStack` object ++ +[source,yaml] +---- +apiVersion: monitoring.rhobs/v1alpha1 +kind: MonitoringStack +metadata: + labels: + coo: example + name: sample-monitoring-stack + namespace: coo-demo +spec: + logLevel: debug + retention: 1d + resourceSelector: + matchLabels: + app: demo +---- + +. A Prometheus resource named `sample-monitoring-stack` is generated in the `coo-demo` namespace. Retrieve the managed fields of the generated Prometheus resource by running the following command: ++ +[source,terminal] +---- +$ oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields +---- ++ +.Example output +[source,yaml] +---- +managedFields: +- apiVersion: monitoring.rhobs/v1 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:labels: + f:app.kubernetes.io/managed-by: {} + f:app.kubernetes.io/name: {} + f:app.kubernetes.io/part-of: {} + f:ownerReferences: + k:{"uid":"81da0d9a-61aa-4df3-affc-71015bcbde5a"}: {} + f:spec: + f:additionalScrapeConfigs: {} + f:affinity: + f:podAntiAffinity: + f:requiredDuringSchedulingIgnoredDuringExecution: {} + f:alerting: + f:alertmanagers: {} + f:arbitraryFSAccessThroughSMs: {} + f:logLevel: {} + f:podMetadata: + f:labels: + f:app.kubernetes.io/component: {} + f:app.kubernetes.io/part-of: {} + f:podMonitorSelector: {} + f:replicas: {} + f:resources: + f:limits: + f:cpu: {} + f:memory: {} + f:requests: + f:cpu: {} + f:memory: {} + f:retention: {} + f:ruleSelector: {} + f:rules: + f:alert: {} + f:securityContext: + f:fsGroup: {} + f:runAsNonRoot: {} + f:runAsUser: {} + f:serviceAccountName: {} + f:serviceMonitorSelector: {} + f:thanos: + f:baseImage: {} + f:resources: {} + f:version: {} + f:tsdb: {} + manager: observability-operator + operation: Apply +- apiVersion: monitoring.rhobs/v1 + fieldsType: FieldsV1 + fieldsV1: + f:status: + .: {} + f:availableReplicas: {} + f:conditions: + .: {} + k:{"type":"Available"}: + .: {} + f:lastTransitionTime: {} + f:observedGeneration: {} + f:status: {} + f:type: {} + k:{"type":"Reconciled"}: + .: {} + f:lastTransitionTime: {} + f:observedGeneration: {} + f:status: {} + f:type: {} + f:paused: {} + f:replicas: {} + f:shardStatuses: + .: {} + k:{"shardID":"0"}: + .: {} + f:availableReplicas: {} + f:replicas: {} + f:shardID: {} + f:unavailableReplicas: {} + f:updatedReplicas: {} + f:unavailableReplicas: {} + f:updatedReplicas: {} + manager: PrometheusOperator + operation: Update + subresource: status +---- + +. Check the `metadata.managedFields` values, and observe that some fields in `metadata` and `spec` are managed by the `MonitoringStack` resource. + +. Modify a field that is not controlled by the `MonitoringStack` resource: + +.. Change `spec.enforcedSampleLimit`, which is a field not set by the `MonitoringStack` resource. 
Create the file `prom-spec-edited.yaml`:
++
+.`prom-spec-edited.yaml`
++
+[source,yaml]
+----
+apiVersion: monitoring.rhobs/v1
+kind: Prometheus
+metadata:
+  name: sample-monitoring-stack
+  namespace: coo-demo
+spec:
+  enforcedSampleLimit: 1000
+----
+
+.. Apply the YAML by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f ./prom-spec-edited.yaml --server-side
+----
++
+[NOTE]
+====
+You must use the `--server-side` flag.
+====
+
+.. Get the changed Prometheus object and note that there is one more section in `managedFields` which has `spec.enforcedSampleLimit`:
++
+[source,terminal]
+----
+$ oc get prometheus -n coo-demo
+----
++
+.Example output
+[source,yaml]
+----
+managedFields: <1>
+- apiVersion: monitoring.rhobs/v1
+  fieldsType: FieldsV1
+  fieldsV1:
+    f:metadata:
+      f:labels:
+        f:app.kubernetes.io/managed-by: {}
+        f:app.kubernetes.io/name: {}
+        f:app.kubernetes.io/part-of: {}
+    f:spec:
+      f:enforcedSampleLimit: {} <2>
+  manager: kubectl
+  operation: Apply
+----
+<1> `managedFields`
+<2> `spec.enforcedSampleLimit`
+
+. Modify a field that is managed by the `MonitoringStack` resource:
+.. Change `spec.logLevel`, which is a field managed by the `MonitoringStack` resource, using the following YAML configuration:
++
+[source,yaml]
+----
+# changing the logLevel from debug to info
+apiVersion: monitoring.rhobs/v1
+kind: Prometheus
+metadata:
+  name: sample-monitoring-stack
+  namespace: coo-demo
+spec:
+  logLevel: info <1>
+----
+<1> `spec.logLevel` has been added
+
+.. Apply the YAML by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f ./prom-spec-edited.yaml --server-side
+----
++
+.Example output
++
+[source,terminal]
+----
+error: Apply failed with 1 conflict: conflict with "observability-operator": .spec.logLevel
+Please review the fields above--they currently have other managers. Here
+are the ways you can resolve this warning:
+* If you intend to manage all of these fields, please re-run the apply
+  command with the `--force-conflicts` flag.
+* If you do not intend to manage all of the fields, please edit your
+  manifest to remove references to the fields that should keep their
+  current managers.
+* You may co-own fields by updating your manifest to match the existing
+  value; in this case, you'll become the manager if the other manager(s)
+  stop managing the field (remove it from their configuration).
+See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
+----
+
+.. Notice that the field `spec.logLevel` cannot be changed using Server-Side Apply, because it is already managed by `observability-operator`.
+
+.. Use the `--force-conflicts` flag to force the change.
++
+[source,terminal]
+----
+$ oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts
+----
++
+.Example output
++
+[source,terminal]
+----
+prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied
+----
++
+With the `--force-conflicts` flag, the field can be forced to change, but since the same field is also managed by the `MonitoringStack` resource, the Observability Operator detects the change, and reverts it back to the value set by the `MonitoringStack` resource.
++
+[NOTE]
+====
+Some Prometheus fields generated by the `MonitoringStack` resource are influenced by the fields in the `MonitoringStack` `spec` stanza, for example, `logLevel`. These can be changed by changing the `MonitoringStack` `spec`.
+====
+
+.. To change the `logLevel` in the Prometheus object, apply the following YAML to change the `MonitoringStack` resource:
++
+[source,yaml]
+----
+apiVersion: monitoring.rhobs/v1alpha1
+kind: MonitoringStack
+metadata:
+  name: sample-monitoring-stack
+  labels:
+    coo: example
+spec:
+  logLevel: info
+----
+
+.. To confirm that the change has taken place, query for the log level by running the following command:
++
+[source,terminal]
+----
+$ oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}'
+----
++
+.Example output
++
+[source,terminal]
+----
+info
+----
+
+
+[NOTE]
+====
+. If a new version of an Operator generates a field that was previously generated and controlled by an actor, the value set by the actor will be overridden.
++
+For example, you are managing a field `enforcedSampleLimit` which is not generated by the `MonitoringStack` resource. If the Observability Operator is upgraded, and the new version of the Operator generates a value for `enforcedSampleLimit`, this will override the value you have previously set.
+
+. The `Prometheus` object generated by the `MonitoringStack` resource may contain some fields which are not explicitly set by the monitoring stack. These fields appear because they have default values.
+====
diff --git a/observability/cluster_observability_operator/cluster-observability-operator-overview.adoc b/observability/cluster_observability_operator/cluster-observability-operator-overview.adoc
index 3d42e9836c31..3baf6d0db953 100644
--- a/observability/cluster_observability_operator/cluster-observability-operator-overview.adoc
+++ b/observability/cluster_observability_operator/cluster-observability-operator-overview.adoc
@@ -22,6 +22,8 @@ Monitoring stacks deployed by the two Operators do not conflict. You can use a {
 
 include::modules/monitoring-understanding-the-cluster-observability-operator.adoc[leveloffset=+1]
 
+include::modules/coo-server-side-apply.adoc[leveloffset=+1]
+
 [role="_additional-resources"]
 .Additional resources

From 6007f8332af16944573019cbf2f982ad82a9c0ba Mon Sep 17 00:00:00 2001
From: Gabriel McGoldrick
Date: Tue, 10 Dec 2024 16:40:24 +0000
Subject: [PATCH 220/669] OBSDOCS-1562 improve COO overview, including article content

---
 modules/coo-advantages.adoc                   | 36 +++++++++++++++
 modules/coo-target-users.adoc                 | 23 ++++++++++
 .../coo-versus-default-ocp-monitoring.adoc    | 36 +++++++++++++++
 ...uster-observability-operator-overview.adoc | 20 ++++++-----
 4 files changed, 107 insertions(+), 8 deletions(-)
 create mode 100644 modules/coo-advantages.adoc
 create mode 100644 modules/coo-target-users.adoc
 create mode 100644 modules/coo-versus-default-ocp-monitoring.adoc

diff --git a/modules/coo-advantages.adoc b/modules/coo-advantages.adoc
new file mode 100644
index 000000000000..603694e2eef5
--- /dev/null
+++ b/modules/coo-advantages.adoc
@@ -0,0 +1,36 @@
+// Module included in the following assemblies:
+// * observability/cluster_observability_operator/cluster-observability-operator-overview.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="coo-advantages_{context}"]
+= Key advantages of using {coo-short}
+
+Deploying {coo-short} helps you address monitoring requirements that are hard to achieve using the default monitoring stack.
+
+[id="coo-advantages-extensibility_{context}"]
+== Extensibility
+
+- You can add more metrics to a {coo-short}-deployed monitoring stack, which is not possible with core platform monitoring without losing support.
+- You can receive cluster-specific metrics from core platform monitoring through federation.
+- {coo-short} supports advanced monitoring scenarios like trend forecasting and anomaly detection.
+
+[id="coo-advantages-multi-tenancy_{context}"]
+== Multi-tenancy support
+
+- You can create monitoring stacks per user namespace.
+- You can deploy multiple stacks per namespace or a single stack for multiple namespaces.
+- {coo-short} enables independent configuration of alerts and receivers for different teams.
+
+[id="coo-advantages-scalability_{context}"]
+== Scalability
+
+- Supports multiple monitoring stacks on a single cluster.
+- Enables monitoring of large clusters through manual sharding.
+- Addresses cases where metrics exceed the capabilities of a single Prometheus instance.
+
+[id="coo-advantages-flexibility_{context}"]
+== Flexibility
+
+- Decoupled from {product-title} release cycles.
+- Faster release iterations and rapid response to changing requirements.
+- Independent management of alerting rules.
\ No newline at end of file

diff --git a/modules/coo-target-users.adoc b/modules/coo-target-users.adoc
new file mode 100644
index 000000000000..dbd27e5fdc94
--- /dev/null
+++ b/modules/coo-target-users.adoc
@@ -0,0 +1,23 @@
+// Module included in the following assemblies:
+// * observability/cluster_observability_operator/cluster-observability-operator-overview.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="coo-target-users_{context}"]
+= Target users for {coo-short}
+
+{coo-short} is ideal for users who need high customizability, scalability, and long-term data retention, especially in complex, multi-tenant enterprise environments.
+
+[id="coo-target-users-enterprise_{context}"]
+== Enterprise-level users and administrators
+
+Enterprise users require in-depth monitoring capabilities for {product-title} clusters, including advanced performance analysis, long-term data retention, trend forecasting, and historical analysis. These features help enterprises better understand resource usage, prevent performance issues, and optimize resource allocation.
+
+[id="coo-target-users-multi-tenant_{context}"]
+== Operations teams in multi-tenant environments
+
+With multi-tenancy support, {coo-short} allows different teams to configure monitoring views for their projects and applications, making it suitable for teams with flexible monitoring needs.
+
+[id="coo-target-users-devops_{context}"]
+== Development and operations teams
+
+{coo-short} provides fine-grained monitoring and customizable observability views for in-depth troubleshooting, anomaly detection, and performance tuning during development and operations.
\ No newline at end of file

diff --git a/modules/coo-versus-default-ocp-monitoring.adoc b/modules/coo-versus-default-ocp-monitoring.adoc
new file mode 100644
index 000000000000..618ff8413c19
--- /dev/null
+++ b/modules/coo-versus-default-ocp-monitoring.adoc
@@ -0,0 +1,36 @@
+// Module included in the following assemblies:
+
+// * observability/cluster_observability_operator/cluster-observability-operator-overview.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="coo-versus-default-ocp-monitoring_{context}"]
+= {coo-short} compared to default monitoring stack
+
+The {coo-short} components function independently of the default in-cluster monitoring stack, which is deployed and managed by the {cmo-first}.
+Monitoring stacks deployed by the two Operators do not conflict. 
You can use a {coo-short} monitoring stack in addition to the default platform monitoring components deployed by the {cmo-short}. + +The key differences between {coo-short} and the default in-cluster monitoring stack are shown in the following table: + +[cols="1,3,3", options="header"] +|=== +| Feature | {coo-short} | Default monitoring stack + +| **Scope and integration** +| Offers comprehensive monitoring and analytics for enterprise-level needs, covering cluster and workload performance. + +However, it lacks direct integration with {product-title} and typically requires an external Grafana instance for dashboards. +| Limited to core components within the cluster, for example, API server and etcd, and to OpenShift-specific namespaces. + +There is deep integration into {product-title} including console dashboards and alert management in the console. + +| **Configuration and customization** +| Broader configuration options including data retention periods, storage methods, and collected data types. + +The {coo-short} can delegate ownership of single configurable fields in custom resources to users by using Server-Side Apply (SSA), which enhances customization. +| Built-in configurations with limited customization options. + +| **Data retention and storage** +| Long-term data retention, supporting historical analysis and capacity planning +| Shorter data retention times, focusing on short-term monitoring and real-time detection. + +|=== diff --git a/observability/cluster_observability_operator/cluster-observability-operator-overview.adoc b/observability/cluster_observability_operator/cluster-observability-operator-overview.adoc index 3baf6d0db953..5fa4d3f062fa 100644 --- a/observability/cluster_observability_operator/cluster-observability-operator-overview.adoc +++ b/observability/cluster_observability_operator/cluster-observability-operator-overview.adoc @@ -9,18 +9,23 @@ toc::[] :FeatureName: The Cluster Observability Operator include::snippets/technology-preview.adoc[leveloffset=+2] -The {coo-first} is an optional component of the {product-title}. You can deploy it to create standalone monitoring stacks that are independently configurable for use by different services and users. +The {coo-first} is an optional component of the {product-title} designed for creating and managing highly customizable monitoring stacks. It enables cluster administrators to automate configuration and management of monitoring needs extensively, offering a more tailored and detailed view of each namespace compared to the default {product-title} monitoring system. The {coo-short} deploys the following monitoring components: -* Prometheus -* Thanos Querier (optional) -* Alertmanager (optional) +* **Prometheus** - A highly available Prometheus instance capable of sending metrics to an external endpoint by using remote write. +* **Thanos Querier** (optional) - Enables querying of Prometheus instances from a central location. +* **Alertmanager** (optional) - Provides alert configuration capabilities for different services. +* **UI plugins** (optional) - Enhances the observability capabilities with plugins for monitoring, logging, distributed tracing and troubleshooting. +* **Korrel8r** (optional) - Provides observability signal correlation, powered by the open source Korrel8r project. -The {coo-short} components function independently of the default in-cluster monitoring stack, which is deployed and managed by the {cmo-first}. -Monitoring stacks deployed by the two Operators do not conflict. 
You can use a {coo-short} monitoring stack in addition to the default platform monitoring components deployed by the {cmo-short}. +include::modules/coo-versus-default-ocp-monitoring.adoc[leveloffset=+1] -include::modules/monitoring-understanding-the-cluster-observability-operator.adoc[leveloffset=+1] +include::modules/coo-advantages.adoc[leveloffset=+1] + +include::modules/coo-target-users.adoc[leveloffset=+1] + +//include::modules/monitoring-understanding-the-cluster-observability-operator.adoc[leveloffset=+1] include::modules/coo-server-side-apply.adoc[leveloffset=+1] @@ -28,4 +33,3 @@ include::modules/coo-server-side-apply.adoc[leveloffset=+1] .Additional resources * link:https://kubernetes.io/docs/reference/using-api/server-side-apply/[Kubernetes documentation for Server-Side Apply (SSA)] - From 1d26817c89f2b92860d38b7c6c19a4fc1a7fbc2e Mon Sep 17 00:00:00 2001 From: opayne1 Date: Thu, 13 Feb 2025 14:06:46 -0500 Subject: [PATCH 221/669] OSDOCS#13054-2: Adds missing extension type to the dynamic plugin reference docs --- modules/dynamic-plugin-sdk-extensions.adoc | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/modules/dynamic-plugin-sdk-extensions.adoc b/modules/dynamic-plugin-sdk-extensions.adoc index 6bbe4c890181..12f42efd6f01 100644 --- a/modules/dynamic-plugin-sdk-extensions.adoc +++ b/modules/dynamic-plugin-sdk-extensions.adoc @@ -269,6 +269,17 @@ Adds a new React context provider to the web console application root. |`useValueHook` |`CodeRef<() => T>` |no |Hook for the Context value. |=== +[discrete] +== `console.create-project-modal` + +This extension can be used to pass a component that will be rendered in place of the standard create project modal. + +[cols=",,,",options="header",] +|=== +|Name |Value Type |Optional |Description +|`component` |`CodeRef>` |no |A component to render in place of the create project modal. 
+|=== + [discrete] == `console.dashboards/card` From 8f31caeb4bee9349f24aa9bb87e60e7f9dd90905 Mon Sep 17 00:00:00 2001 From: Max Bridges Date: Mon, 6 Jan 2025 16:27:42 -0500 Subject: [PATCH 222/669] Add observability metrics correlation for ShiftStack docs OSDOCS-12841 GH#86741 --- _attributes/common-attributes.adoc | 2 + _topic_maps/_topic_map.yml | 2 + ...ng-configuring-shiftstack-remotewrite.adoc | 162 ++++++++++++++++++ ...oring-configuring-shiftstack-scraping.adoc | 83 +++++++++ modules/monitoring-shiftstack-metrics.adoc | 42 +++++ .../shiftstack-prometheus-configuration.adoc | 34 ++++ 6 files changed, 325 insertions(+) create mode 100644 modules/monitoring-configuring-shiftstack-remotewrite.adoc create mode 100644 modules/monitoring-configuring-shiftstack-scraping.adoc create mode 100644 modules/monitoring-shiftstack-metrics.adoc create mode 100644 observability/monitoring/shiftstack-prometheus-configuration.adoc diff --git a/_attributes/common-attributes.adoc b/_attributes/common-attributes.adoc index 5ebb94508579..31d04438e57d 100644 --- a/_attributes/common-attributes.adoc +++ b/_attributes/common-attributes.adoc @@ -316,6 +316,8 @@ ifdef::openshift-origin[] :rh-openstack-first: OpenStack :rh-openstack: OpenStack endif::openshift-origin[] +:rhoso-first: Red Hat OpenStack Services on OpenShift (RHOSO) +:rhoso: RHOSO // VMware vSphere :vmw-first: VMware vSphere :vmw-full: VMware vSphere diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index ea2cf1fcccd6..6cc2909ead3a 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2917,6 +2917,8 @@ Topics: File: managing-alerts - Name: Reviewing monitoring dashboards File: reviewing-monitoring-dashboards + - Name: Monitoring clusters that run on RHOSO + File: shiftstack-prometheus-configuration - Name: Accessing monitoring APIs by using the CLI File: accessing-third-party-monitoring-apis - Name: Troubleshooting monitoring issues diff --git a/modules/monitoring-configuring-shiftstack-remotewrite.adoc b/modules/monitoring-configuring-shiftstack-remotewrite.adoc new file mode 100644 index 000000000000..85929f3be14e --- /dev/null +++ b/modules/monitoring-configuring-shiftstack-remotewrite.adoc @@ -0,0 +1,162 @@ +// Module included in the following assemblies: +// +// * observability/monitoring/shiftstack-prometheus-configuration.adoc + +:_mod-docs-content-type: PROCEDURE +[id="monitoring-configuring-shiftstack-remotewrite_{context}"] += Remote writing to an external Prometheus instance + +Use remote write with both {rhoso-first} and {product-title} to push their metrics to an external Prometheus instance. + +.Prerequisites + +- You have access to an external Prometheus instance. +- You have administrative access to {rhoso} and your cluster. +- You have certificates for secure communication with mTLS. +- Your Prometheus instance is configured for client TLS certificates and has been set up as a remote write receiver. +- The Cluster Observability Operator is installed on your {rhoso} cluster. +- The monitoring stack for your {rhoso} cluster is configured to collect the metrics that you are interested in. +- Telemetry is enabled in the {rhoso} environment. ++ +[NOTE] +==== +To verify that the telemetry service is operating normally, entering the following command: +[source,shell] +---- +$ oc -n openstack get monitoringstacks metric-storage -o yaml +---- +The `monitoringstacks` CRD indicates whether telemetry is enabled correctly. 
+==== + +.Procedure + +// Steps 1, 2, 3, and 4 run on the OpenShift cluster hosting the RHOSO control plane. This configure RHOSO to send their metrics to an external prometheus. +// +// Steps 5, 6, 7, and 8 run on the tenant's OpenShift cluster. This configures the tenant OpenShift cluster to send their metrics to the same Prometheus instance. +// Comment from before moving telemetry check to prereqs -- offset by 1. + +// on mgmt cluster + +. Configure your {rhoso} management cluster to send metrics to Prometheus: + +.. Create a secret that is named `mtls-bundle` in the `openstack` namespace that contains HTTPS client certificates for authentication to Prometheus by entering the following command: ++ +[source,shell] +---- +$ oc --namespace openstack \ + create secret generic mtls-bundle \ + --from-file=./ca.crt \ + --from-file=osp-client.crt \ + --from-file=osp-client.key +---- + +.. Open the `controlplane` configuration for editing by running the following command: ++ +[source,shell] +---- +$ oc -n openstack edit openstackcontrolplane/controlplane +---- + +.. With the configuration open, replace the `.spec.telemetry.template.metricStorage` section so that {rhoso} sends metrics to Prometheus. As an example: ++ +[source,yaml] +---- + metricStorage: + customMonitoringStack: + alertmanagerConfig: + disabled: false + logLevel: info + prometheusConfig: + scrapeInterval: 30s + remoteWrite: + - url: https://external-prometheus.example.com/api/v1/write # <1> + tlsConfig: + ca: + secret: + name: mtls-bundle + key: ca.crt + cert: + secret: + name: mtls-bundle + key: ocp-client.crt + keySecret: + name: mtls-bundle + key: ocp-client.key + replicas: 2 + resourceSelector: + matchLabels: + service: metricStorage + resources: + limits: + cpu: 500m + memory: 512Mi + requests: + cpu: 100m + memory: 256Mi + retention: 1d # <2> + dashboardsEnabled: false + dataplaneNetwork: ctlplane + enabled: true + prometheusTls: {} +---- +<1> Replace this URL with the URL of your Prometheus instance. +<2> Set a retention period. Optionally, you can reduce retention for local metrics because of external collection. +// run on tenant's openshift cluster +. Configure the tenant cluster on which your workloads run to send metrics to Prometheus: + +.. Create a cluster monitoring config map as a YAML file. The map must include a remote write configuration and cluster identifiers. As an example: ++ +[source,yaml] +---- +apiVersion: v1 +kind: ConfigMap +metadata: + name: cluster-monitoring-config + namespace: openshift-monitoring +data: + config.yaml: | + prometheusK8s: + retention: 1d # <1> + remoteWrite: + - url: "https://external-prometheus.example.com/api/v1/write" + writeRelabelConfigs: + - sourceLabels: + - __tmp_openshift_cluster_id__ + targetLabel: cluster_id + action: replace + tlsConfig: + ca: + secret: + name: mtls-bundle + key: ca.crt + cert: + secret: + name: mtls-bundle + key: ocp-client.crt + keySecret: + name: mtls-bundle + key: ocp-client.key +---- +<1> Set a retention period. Optionally, you can reduce retention for local metrics because of external collection. + +.. Save the config map as a file called `cluster-monitoring-config.yaml`. + +.. 
+.. Create a secret that is named `mtls-bundle` in the `openshift-monitoring` namespace that contains HTTPS client certificates for authentication to Prometheus by entering the following command:
++
+[source,terminal]
+----
+$ oc --namespace openshift-monitoring \
+    create secret generic mtls-bundle \
+      --from-file=./ca.crt \
+      --from-file=ocp-client.crt \
+      --from-file=ocp-client.key
+----
+
+.. Apply the cluster monitoring configuration by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f cluster-monitoring-config.yaml
+----
+
+After the changes propagate, you can see aggregated metrics in your external Prometheus instance.
\ No newline at end of file

diff --git a/modules/monitoring-configuring-shiftstack-scraping.adoc b/modules/monitoring-configuring-shiftstack-scraping.adoc
new file mode 100644
index 000000000000..c2e6354fb11c
--- /dev/null
+++ b/modules/monitoring-configuring-shiftstack-scraping.adoc
@@ -0,0 +1,83 @@
+// Module included in the following assemblies:
+//
+// * observability/monitoring/shiftstack-prometheus-configuration.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="monitoring-configuring-shiftstack-scraping_{context}"]
+= Collecting cluster metrics from the federation endpoint
+
+You can employ the federation endpoint of your {product-title} cluster to make metrics available to a {rhoso-first} cluster for pull-based monitoring.
+
+.Prerequisites
+
+- You have administrative access to {rhoso} and the tenant cluster that is running on it.
+- Telemetry is enabled in the {rhoso} environment.
+- The Cluster Observability Operator is installed on your cluster.
+- The monitoring stack for your cluster is configured.
+- Your cluster has its federation endpoint exposed.
+
+.Procedure
+
+. Connect to your cluster by using a username and password; do not log in by using a `kubeconfig` file that was generated by the installation program.
+
+. To retrieve a token from the {product-title} cluster, run the following command on it:
++
+[source,terminal]
+----
+$ oc whoami -t
+----
+
+. Make the token available as a secret in the `openstack` namespace in the {rhoso} management cluster by running the following command:
++
+[source,terminal]
+----
+$ oc -n openstack create secret generic ocp-federated --from-literal=token=<token>
+----
+
+. To get the Prometheus federation route URL from your {product-title} cluster, run the following command:
++
+[source,terminal]
+----
+$ oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath={'.status.ingress[].host'}
+----
+
+. Write a manifest for a scrape configuration and save it as a file called `cluster-scrape-config.yaml`. As an example:
++
+[source,yaml]
+----
+apiVersion: monitoring.rhobs/v1alpha1
+kind: ScrapeConfig
+metadata:
+  labels:
+    service: metricStorage
+  name: sos1-federated
+  namespace: openstack
+spec:
+  params:
+    'match[]':
+    - '{__name__=~"kube_node_info|kube_persistentvolume_info|cluster:master_nodes"}' # <1>
+  metricsPath: '/federate'
+  authorization:
+    type: Bearer
+    credentials:
+      name: ocp-federated # <2>
+      key: token
+  scheme: HTTPS # or HTTP
+  scrapeInterval: 30s # <3>
+  staticConfigs:
+  - targets:
+    - prometheus-k8s-federate-openshift-monitoring.apps.openshift.example # <4>
+----
+<1> Add metrics here. In this example, only the metrics `kube_node_info`, `kube_persistentvolume_info`, and `cluster:master_nodes` are requested.
+<2> Insert the previously generated secret name here. 
+
+. While connected to the {rhoso} management cluster, apply the manifest by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f cluster-scrape-config.yaml
+----
+
+After the configuration propagates, the cluster metrics are accessible for querying in the {product-title} UI in RHOSO.
diff --git a/modules/monitoring-shiftstack-metrics.adoc b/modules/monitoring-shiftstack-metrics.adoc
new file mode 100644
index 000000000000..c9abf138aa13
--- /dev/null
+++ b/modules/monitoring-shiftstack-metrics.adoc
@@ -0,0 +1,42 @@
+// Module included in the following assemblies:
+//
+// * observability/monitoring/shiftstack-prometheus-configuration.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="monitoring-shiftstack-metrics_{context}"]
+= Available metrics for clusters that run on RHOSO
+
+To query metrics and identify resources across the stack, you can use helper metrics that establish a correlation between {rhoso-first} infrastructure resources and their representations in the tenant {product-title} cluster.
+
+To map nodes with {rhoso} compute instances, in the metric `kube_node_info`:
+
+* `node` is the Kubernetes node name.
+
+* `provider_id` contains the identifier of the corresponding compute service instance.
+
+To map persistent volumes with {rhoso} block storage or shared file systems shares, in the metric `kube_persistentvolume_info`:
+
+* `persistentvolume` is the volume name.
+
+* `csi_volume_handle` is the block storage volume or share identifier.
+
+By default, the compute machines that back the cluster control plane nodes are created in a server group with a soft anti-affinity policy. As a result, the compute service creates them on separate hypervisors on a best-effort basis. However, if the state of the {rhoso} cluster is not appropriate for this distribution, the machines are created anyway.
+
+In combination with the default soft anti-affinity policy, you can configure an alert that activates when a hypervisor hosts more than one control plane node of a given cluster to highlight the degraded level of high availability.
+
+As an example, this PromQL query returns the number of {product-title} control plane nodes per {rh-openstack} host:
+
+[source,promql]
+----
+sum by (vm_instance) (
+  group by (vm_instance, resource) (ceilometer_cpu)
+  / on (resource) group_right(vm_instance) (
+    group by (node, resource) (
+      label_replace(kube_node_info, "resource", "$1", "system_uuid", "(.+)")
+    )
+    / on (node) group_left group by (node) (
+      cluster:master_nodes
+    )
+  )
+)
+----
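+
+The following `PrometheusRule` object is a minimal sketch of such an alert that reuses the preceding query. It assumes that the `monitoring.rhobs/v1` `PrometheusRule` API of the Cluster Observability Operator is available and that your monitoring stack selects rules with the `service: metricStorage` label; the rule name, duration, and severity are illustrative only:
+
+[source,yaml]
+----
+apiVersion: monitoring.rhobs/v1
+kind: PrometheusRule
+metadata:
+  name: control-plane-colocation # illustrative name
+  namespace: openstack
+  labels:
+    service: metricStorage
+spec:
+  groups:
+  - name: shiftstack.rules
+    rules:
+    - alert: ControlPlaneNodesColocated
+      # Fires when one RHOSO hypervisor hosts more than one control
+      # plane node of the same tenant cluster.
+      expr: |
+        sum by (vm_instance) (
+          group by (vm_instance, resource) (ceilometer_cpu)
+          / on (resource) group_right(vm_instance) (
+            group by (node, resource) (
+              label_replace(kube_node_info, "resource", "$1", "system_uuid", "(.+)")
+            )
+            / on (node) group_left group by (node) (
+              cluster:master_nodes
+            )
+          )
+        ) > 1
+      for: 30m
+      labels:
+        severity: warning
+      annotations:
+        summary: A hypervisor hosts more than one control plane node
+----
\ No newline at end of file
diff --git a/observability/monitoring/shiftstack-prometheus-configuration.adoc b/observability/monitoring/shiftstack-prometheus-configuration.adoc
new file mode 100644
index 000000000000..8a2e9f120d5d
--- /dev/null
+++ b/observability/monitoring/shiftstack-prometheus-configuration.adoc
@@ -0,0 +1,34 @@
+:_mod-docs-content-type: ASSEMBLY
+[id="shiftstack-prometheus-configuration"]
+= Monitoring clusters that run on RHOSO
+include::_attributes/common-attributes.adoc[]
+:context: shiftstack-prometheus-configuration
+
+toc::[]
+
+You can correlate observability metrics for clusters that run on {rhoso-first}. 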
By collecting metrics from both environments, you can monitor and troubleshoot issues across the infrastructure and application layers. + +There are two supported methods for metric correlation for clusters that run on {rhoso}: + +- https://prometheus.io/docs/practices/remote_write/#remote-write-tuning[Remote writing] to an external Prometheus instance. +- Collecting data from the {product-title} federation endpoint to the {rhoso} observability stack. + +include::modules/monitoring-configuring-shiftstack-remotewrite.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources +* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#configuring-remote-write-storage_configuring-the-monitoring-stack[Configuring remote write storage] +* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#adding-cluster-id-labels-to-metrics_configuring-the-monitoring-stack[Adding cluster ID labels to metrics] + +include::modules/monitoring-configuring-shiftstack-scraping.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources +* xref:../../observability/monitoring/accessing-third-party-monitoring-apis.adoc#monitoring-querying-metrics-by-using-the-federation-endpoint-for-prometheus_accessing-monitoring-apis-by-using-the-cli[Querying metrics by using the federation endpoint for Prometheus] + +include::modules/monitoring-shiftstack-metrics.adoc[leveloffset=+1] + +[role="_additional-resources"] +[id="additional-resources_{context}"] +== Additional resources +* xref:../../observability/cluster_observability_operator/cluster-observability-operator-overview.adoc#understanding-the-cluster-observability-operator_cluster_observability_operator_overview[Cluster Observability Operator overview] \ No newline at end of file From 32997a41fa45cb4cbd3b9aa110d66563e50d09ac Mon Sep 17 00:00:00 2001 From: Steven Smith Date: Thu, 9 Jan 2025 08:09:42 -0500 Subject: [PATCH 223/669] Updates UDN docs to include required NS label Addresses ocpbugs48423--commit two --- modules/nw-udn-benefits.adoc | 6 +++++- modules/nw-udn-best-practices.adoc | 17 ++++++++++++++--- modules/nw-udn-cr.adoc | 16 +++++++++++++++- 3 files changed, 34 insertions(+), 5 deletions(-) diff --git a/modules/nw-udn-benefits.adoc b/modules/nw-udn-benefits.adoc index b0117fe1b063..2a47f35878af 100644 --- a/modules/nw-udn-benefits.adoc +++ b/modules/nw-udn-benefits.adoc @@ -29,4 +29,8 @@ User-defined networks provide the following benefits: + * **Network parity**: With user-defined networking, the migration of applications from OpenStack to {product-title} is simplified by providing similar network isolation and configuration options. -Developers and administrators can create a user-defined network that is namespace scoped using the custom resource. An overview of the process is: create a namespace, create and configure the custom resource, create pods in the namespace. \ No newline at end of file +Developers and administrators can create a user-defined network that is namespace scoped using the custom resource. An overview of the process is as follows: + +. An administrator creates a namespace for a user-defined network with the `k8s.ovn.org/primary-user-defined-network` label. +. The `UserDefinedNetwork` CR is created by either the cluster administrator or the user. +. The user creates pods in the namespace. 
\ No newline at end of file
diff --git a/modules/nw-udn-best-practices.adoc b/modules/nw-udn-best-practices.adoc
index afff32b3551f..024f9e7890aa 100644
--- a/modules/nw-udn-best-practices.adoc
+++ b/modules/nw-udn-best-practices.adoc
@@ -6,7 +6,7 @@
 [id="considerations-for-udn_{context}"]
 = Best practices for UserDefinedNetwork
 
-Before setting up a `UserDefinedNetwork` (UDN) resource, users should consider the following information:
+Before setting up a `UserDefinedNetwork` (UDN) resource, you should consider the following information:
 
 //These will not go live till 4.18 GA
 //* To eliminate errors and ensure connectivity, you should create a namespace scoped UDN CR before creating any workload in the namespace.
@@ -15,6 +15,18 @@
 
 * `openshift-*` namespaces should not be used to set up a UDN.
 
+* `UserDefinedNetwork` CRs should not be created in the default namespace. This can result in no isolation and, as a result, could introduce security risks to the cluster.
+
+* For primary networks, the namespace used for the `UserDefinedNetwork` CR must include the `k8s.ovn.org/primary-user-defined-network` label. This label cannot be updated, and can only be added when the namespace is created. The following conditions apply with the `k8s.ovn.org/primary-user-defined-network` namespace label:
+
+** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a pod is created, the pod attaches itself to the default network.
+
+** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary UDN CR is created that matches the namespace, the UDN reports an error status and the network is not created.
+
+** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary UDN already exists, a pod in the namespace is created and attached to the default network.
+
+** If the namespace _has_ the label, and a primary UDN does not exist, a pod in the namespace is not created until the UDN is created.
+
 * 2 masquerade IP addresses are required for user defined networks. You must reconfigure your masquerade subnet to be large enough to hold the required number of networks.
 +
 [IMPORTANT]
 ====
@@ -29,5 +41,4 @@ Before setting up a `UserDefinedNetwork` (UDN) resource, you should consider the
 
 * When creating network segmentation, you should only use the NAD resource if user-defined network segmentation cannot be completed using the UDN resource.
 
-* The cluster subnet and services CIDR for a UDN cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default network's join subnet, you must not use that value to configure a UDN `joinSubnets` field. If the default address values are used anywhere in the cluster's network you must override it by setting the `joinSubnets` field. For more information, see "Additional configuration details for a UserDefinedNetworks CR".
-
+* The cluster subnet and services CIDR for a UDN cannot overlap with the default cluster subnet CIDR. The OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default network's join subnet; you must not use that value to configure the `joinSubnets` field of a UDN. If the default address values are used anywhere in the cluster's network, you must override them by setting the `joinSubnets` field. For more information, see "Additional configuration details for a UserDefinedNetworks CR". 
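+
+A minimal sketch of overriding the join subnet in a `UserDefinedNetwork` CR follows. The object name, namespace, and CIDR values are illustrative only; choose a `joinSubnets` value that does not collide with `100.64.0.0/16` or with any range that is in use in your cluster:
+
+[source,yaml]
+----
+apiVersion: k8s.ovn.org/v1
+kind: UserDefinedNetwork
+metadata:
+  name: udn-join-subnet-example # illustrative name
+  namespace: udn-example-namespace
+spec:
+  topology: Layer3
+  layer3:
+    role: Primary
+    subnets:
+    - cidr: 10.150.0.0/16
+    joinSubnets:
+    - 100.65.0.0/16 # overrides the default join subnet
+----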
\ No newline at end of file
diff --git a/modules/nw-udn-cr.adoc b/modules/nw-udn-cr.adoc
index 39112fa10f92..d11d7366a8b8 100644
--- a/modules/nw-udn-cr.adoc
+++ b/modules/nw-udn-cr.adoc
@@ -16,6 +16,20 @@
 The following procedure creates a user-defined network that is namespace scoped.
 
 .Procedure
 
+. Optional: For a `UserDefinedNetwork` CR that uses a primary network, create a namespace with the `k8s.ovn.org/primary-user-defined-network` label by entering the following command:
++
+[source,terminal]
+----
+$ cat << EOF | oc apply -f -
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: <udn_namespace_name>
+  labels:
+    k8s.ovn.org/primary-user-defined-network: ""
+EOF
+----
+
 . Create a request for either a `Layer2` or `Layer3` topology type user-defined network:
 
 .. Create a YAML file, such as `my-layer-two-udn.yaml`, to define your request for a `Layer2` topology as in the following example:
@@ -123,5 +137,5 @@ status:
   message: NetworkAttachmentDefinition has been created
   reason: NetworkAttachmentDefinitionReady
   status: "True"
-  type: NetworkReady
+  type: NetworkCreated
 ----
\ No newline at end of file
diff --git a/modules/running-compliance-scans.adoc b/modules/running-compliance-scans.adoc
index bf7d8156a1ea..54b84f0bf110 100644
--- a/modules/running-compliance-scans.adoc
+++ b/modules/running-compliance-scans.adoc
@@ -13,6 +13,8 @@ You can run a scan using the Center for Internet Security (CIS) profiles. For co
 For all-in-one control plane and worker nodes, the compliance scan runs twice on the worker and control plane nodes. The compliance scan might generate inconsistent scan results. You can avoid inconsistent results by defining only a single role in the `ScanSetting` object.
 ====
 
+For more information about inconsistent scan results, see link:https://access.redhat.com/solutions/6970861[Compliance Operator shows INCONSISTENT scan result with worker node].
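+
+A minimal sketch of a `ScanSetting` object that defines only a single role follows. The object name, the chosen role, and the schedule are illustrative only:
+
+[source,yaml]
+----
+apiVersion: compliance.openshift.io/v1alpha1
+kind: ScanSetting
+metadata:
+  name: single-role-scansetting # illustrative name
+  namespace: openshift-compliance
+roles:
+- master # defining a single role avoids duplicate scans on all-in-one nodes
+schedule: "0 1 * * *"
+----
 
 .Procedure
 
 . 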
Inspect the `ScanSetting` object by running the following command: From d0da13e275b44405efe79312ab369d10e7f5ed0e Mon Sep 17 00:00:00 2001 From: SNiemann15 Date: Wed, 5 Feb 2025 14:31:28 +0100 Subject: [PATCH 225/669] LPAR day 2 procedure --- .../prepare-pxe-assets-agent.adoc | 9 +- modules/adding-ibm-z-lpar-agent-day-2.adoc | 124 ++++++++++++++++++ 2 files changed, 132 insertions(+), 1 deletion(-) create mode 100644 modules/adding-ibm-z-lpar-agent-day-2.adoc diff --git a/installing/installing_with_agent_based_installer/prepare-pxe-assets-agent.adoc b/installing/installing_with_agent_based_installer/prepare-pxe-assets-agent.adoc index d64342f7a0ba..035266315aa8 100644 --- a/installing/installing_with_agent_based_installer/prepare-pxe-assets-agent.adoc +++ b/installing/installing_with_agent_based_installer/prepare-pxe-assets-agent.adoc @@ -52,12 +52,19 @@ include::modules/installing-ocp-agent-ibm-z-zvm.adoc[leveloffset=+2] // Adding {ibm-z-name} agents with {op-system-base} KVM include::modules/installing-ocp-agent-ibm-z-kvm.adoc[leveloffset=+2] +[role="_additional-resources"] +.Additional resources + +* xref:../../installing/installing_ibm_z/upi/installing-ibm-z-kvm.adoc#installing-ibm-z-kvm[Installing a cluster with {op-system-base} KVM on {ibm-z-title} and {ibm-linuxone-title}] + // Adding {ibm-z-title} Logical Partition (LPAR) as agents include::modules/adding-ibm-z-lpar-agent.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources -* xref:../../installing/installing_ibm_z/upi/installing-ibm-z.adoc#installing-ibm-z[Installing a cluster with z/VM on {ibm-z-title} and {ibm-linuxone-title}] +* xref:../../installing/installing_ibm_z/upi/installing-ibm-z-lpar.adoc#installing-ibm-z-lpar[Installing a cluster in an LPAR on {ibm-z-title} and {ibm-linuxone-title}] +// Adding {ibm-z-title} agents in a Logical Partition (LPAR) as a day 2 operation +include::modules/adding-ibm-z-lpar-agent-day-2.adoc[leveloffset=+2] diff --git a/modules/adding-ibm-z-lpar-agent-day-2.adoc b/modules/adding-ibm-z-lpar-agent-day-2.adoc new file mode 100644 index 000000000000..f37206b9d175 --- /dev/null +++ b/modules/adding-ibm-z-lpar-agent-day-2.adoc @@ -0,0 +1,124 @@ +// Module included in the following assemblies: +// +// * installing/installing_with_agent_based_installer/prepare-pxe-assets-agent.adoc + +:_mod-docs-content-type: PROCEDURE +[id="adding-ibm-z-lpar-agents-day-2_{context}"] += Adding {ibm-z-title} agents in a Logical Partition (LPAR) to an existing cluster + +In {product-title} {product-version}, the `.ins` and `initrd.img.addrsize` files are not automatically generated as part of the boot-artifacts for an existing cluster. You must manually generate these files for {ibm-z-name} clusters running in an LPAR. + +The `.ins` file is a special file that includes installation data and is present on the FTP server. You can access this file from the hardware management console (HMC) system. This file contains details such as mapping of the location of installation data on the disk or FTP server and the memory locations where the data is to be copied. + +.Prerequisites + +* A running file server with access to the LPAR. + +.Procedure + +. Generate the `.ins` and `initrd.img.addrsize` files: + +.. 
Set the boot artifact paths and retrieve the sizes of the `kernel` and `initrd` by running the following commands:
++
+[source,terminal]
+----
+$ KERNEL_IMG_PATH='./kernel.img'
+----
++
+[source,terminal]
+----
+$ INITRD_IMG_PATH='./initrd.img'
+----
++
+[source,terminal]
+----
+$ CMDLINE_PATH='./generic.prm'
+----
++
+[source,terminal]
+----
+$ kernel_size=$(stat -c%s $KERNEL_IMG_PATH)
+----
++
+[source,terminal]
+----
+$ initrd_size=$(stat -c%s $INITRD_IMG_PATH)
+----
+
+.. Round up the `kernel` size to the next mebibyte (MiB) boundary by running the following command:
++
+[source,terminal]
+----
+$ BYTE_PER_MIB=$(( 1024 * 1024 )) offset=$(( (kernel_size + BYTE_PER_MIB - 1) / BYTE_PER_MIB * BYTE_PER_MIB ))
+----
+
+.. Create the kernel binary patch file that contains the `initrd` address and size by running the following commands:
++
+[source,terminal]
+----
+$ INITRD_IMG_NAME=$(echo $INITRD_IMG_PATH | rev | cut -d '/' -f 1 | rev)
+----
++
+[source,terminal]
+----
+$ KERNEL_OFFSET=0x00000000
+----
++
+[source,terminal]
+----
+$ KERNEL_CMDLINE_OFFSET=0x00010480
+----
++
+[source,terminal]
+----
+$ INITRD_ADDR_SIZE_OFFSET=0x00010408
+----
++
+[source,terminal]
+----
+$ OFFSET_HEX=$(printf '0x%08x\n' $offset)
+----
+
+.. Convert the address and size to binary format by running the following commands:
++
+[source,terminal]
+----
+$ printf "$(printf '%016x\n' $offset)" | xxd -r -p > temp_address.bin
+----
++
+[source,terminal]
+----
+$ printf "$(printf '%016x\n' $initrd_size)" | xxd -r -p > temp_size.bin
+----
+
+.. Merge the address and size binaries by running the following command:
++
+[source,terminal]
+----
+$ cat temp_address.bin temp_size.bin > "$INITRD_IMG_NAME.addrsize"
+----
+
+.. Clean up temporary files by running the following command:
++
+[source,terminal]
+----
+$ rm -rf temp_address.bin temp_size.bin
+----
+
+.. Create the `.ins` file, which maps each boot artifact to its memory offset, by running the following command:
++
+[source,terminal]
+----
+$ cat > generic.ins <<EOF
+$KERNEL_IMG_PATH $KERNEL_OFFSET
+$INITRD_IMG_PATH $OFFSET_HEX
+$INITRD_IMG_NAME.addrsize $INITRD_ADDR_SIZE_OFFSET
+$CMDLINE_PATH $KERNEL_CMDLINE_OFFSET
+EOF
+----

Date: Thu, 23 Jan 2025 22:07:41 +0000
Subject: [PATCH 226/669] uncommenting Operators section in HCP topic map
 resolving error in HCP topic map indentation removed the < and replaced with
 brackets for tech preview in HCP topic map removed the < and replaced with
 brackets for Operators tech preview in HCP topic map removed the file with
 tech preview for Operators in HCP topic map applied conditions to 20 xrefs
 because the networking, cli, authentication and observability books are not
 in HCP repo yet added blank line after and endif and an include module

---
 _topic_maps/_topic_map_rosa_hcp.yml           | 169 ++++++++++++++++++
 modules/gathering-operator-logs.adoc          |  11 +-
 .../olm-adding-operators-to-cluster.adoc      |  22 +--
 .../admin/olm-configuring-proxy-support.adoc  |   8 +-
 operators/admin/olm-cs-podsched.adoc          |   8 +-
 .../olm-deleting-operators-from-cluster.adoc  |   4 +-
 .../admin/olm-managing-custom-catalogs.adoc   |  51 +++---
 .../olm-managing-operatorconditions.adoc      |   8 +-
 operators/admin/olm-status.adoc               |   4 +-
 .../olm-troubleshooting-operator-issues.adoc  |  24 +--
 operators/admin/olm-upgrading-operators.adoc  |   8 +-
 operators/index.adoc                          |  40 ++---
 .../ansible/osdk-ansible-tutorial.adoc        |  33 ++--
 .../golang/osdk-golang-tutorial.adoc          |  31 ++--
 .../operator_sdk/helm/osdk-helm-tutorial.adoc |  27 +--
 .../operator_sdk/java/osdk-java-tutorial.adoc |  16 +-
 operators/operator_sdk/osdk-about.adoc        |   8 +-
 .../operator_sdk/osdk-bundle-validate.adoc    |   2 +-
 operators/operator_sdk/osdk-cli-ref.adoc      |   9 +
 .../operator_sdk/osdk-complying-with-psa.adoc |  12 +-
 .../operator_sdk/osdk-generating-csvs.adoc    |   8 +-
 operators/operator_sdk/osdk-ha-sno.adoc       |   4 +-
 .../operator_sdk/osdk-installing-cli.adoc     |   8 +-
.../operator_sdk/osdk-leader-election.adoc | 2 +- .../osdk-monitoring-prometheus.adoc | 8 +- operators/understanding/olm-multitenancy.adoc | 4 +- .../understanding/olm-packaging-format.adoc | 18 +- operators/understanding/olm-rh-catalogs.adoc | 14 +- .../olm/olm-operatorconditions.adoc | 4 +- .../olm/olm-understanding-olm.adoc | 16 +- .../olm/olm-understanding-operatorgroups.adoc | 8 +- operators/understanding/olm/olm-webhooks.adoc | 4 +- 32 files changed, 409 insertions(+), 184 deletions(-) diff --git a/_topic_maps/_topic_map_rosa_hcp.yml b/_topic_maps/_topic_map_rosa_hcp.yml index 23495ff40e8a..88668564aa8f 100644 --- a/_topic_maps/_topic_map_rosa_hcp.yml +++ b/_topic_maps/_topic_map_rosa_hcp.yml @@ -516,6 +516,175 @@ Topics: - Name: Accessing the registry File: accessing-the-registry --- +Name: Operators +Dir: operators +Distros: openshift-rosa-hcp +Topics: +- Name: Operators overview + File: index +- Name: Understanding Operators + Dir: understanding + Topics: + - Name: What are Operators? + File: olm-what-operators-are + - Name: Packaging format + File: olm-packaging-format + - Name: Common terms + File: olm-common-terms + - Name: Operator Lifecycle Manager (OLM) + Dir: olm + Topics: + - Name: Concepts and resources + File: olm-understanding-olm + - Name: Architecture + File: olm-arch + - Name: Workflow + File: olm-workflow + - Name: Dependency resolution + File: olm-understanding-dependency-resolution + - Name: Operator groups + File: olm-understanding-operatorgroups + - Name: Multitenancy and Operator colocation + File: olm-colocation + - Name: Operator conditions + File: olm-operatorconditions + - Name: Metrics + File: olm-understanding-metrics + - Name: Webhooks + File: olm-webhooks + - Name: OperatorHub + File: olm-understanding-operatorhub + - Name: Red Hat-provided Operator catalogs + File: olm-rh-catalogs + - Name: Operators in multitenant clusters + File: olm-multitenancy + - Name: CRDs + Dir: crds + Topics: + - Name: Managing resources from CRDs + File: crd-managing-resources-from-crds +- Name: User tasks + Dir: user + Topics: + - Name: Creating applications from installed Operators + File: olm-creating-apps-from-installed-operators +- Name: Administrator tasks + Dir: admin + Topics: + - Name: Adding Operators to a cluster + File: olm-adding-operators-to-cluster + - Name: Updating installed Operators + File: olm-upgrading-operators + - Name: Deleting Operators from a cluster + File: olm-deleting-operators-from-cluster + - Name: Configuring proxy support + File: olm-configuring-proxy-support + - Name: Viewing Operator status + File: olm-status + - Name: Managing Operator conditions + File: olm-managing-operatorconditions + - Name: Managing custom catalogs + File: olm-managing-custom-catalogs + - Name: Catalog source pod scheduling + File: olm-cs-podsched + - Name: Troubleshooting Operator issues + File: olm-troubleshooting-operator-issues +- Name: Developing Operators + Dir: operator_sdk + Topics: + - Name: About the Operator SDK + File: osdk-about + - Name: Installing the Operator SDK CLI + File: osdk-installing-cli + - Name: Go-based Operators + Dir: golang + Topics: +# Quick start excluded, because it requires cluster-admin permissions. 
+# - Name: Getting started +# File: osdk-golang-quickstart + - Name: Tutorial + File: osdk-golang-tutorial + - Name: Project layout + File: osdk-golang-project-layout + - Name: Updating Go-based projects + File: osdk-golang-updating-projects + - Name: Ansible-based Operators + Dir: ansible + Topics: +# Quick start excluded, because it requires cluster-admin permissions. +# - Name: Getting started +# File: osdk-ansible-quickstart + - Name: Tutorial + File: osdk-ansible-tutorial + - Name: Project layout + File: osdk-ansible-project-layout + - Name: Updating Ansible-based projects + File: osdk-ansible-updating-projects + - Name: Ansible support + File: osdk-ansible-support + - Name: Kubernetes Collection for Ansible + File: osdk-ansible-k8s-collection + - Name: Using Ansible inside an Operator + File: osdk-ansible-inside-operator + - Name: Custom resource status management + File: osdk-ansible-cr-status + - Name: Helm-based Operators + Dir: helm + Topics: +# Quick start excluded, because it requires cluster-admin permissions. +# - Name: Getting started +# File: osdk-helm-quickstart + - Name: Tutorial + File: osdk-helm-tutorial + - Name: Project layout + File: osdk-helm-project-layout + - Name: Updating Helm-based projects + File: osdk-helm-updating-projects + - Name: Helm support + File: osdk-helm-support +# - Name: Hybrid Helm Operator <= Tech Preview +# File: osdk-hybrid-helm +# - Name: Updating Hybrid Helm-based projects (Technology Preview) +# File: osdk-hybrid-helm-updating-projects +# - Name: Java-based Operators <= Tech Preview +# Dir: java +# Topics: +# - Name: Getting started +# File: osdk-java-quickstart +# - Name: Tutorial +# File: osdk-java-tutorial +# - Name: Project layout +# File: osdk-java-project-layout +# - Name: Updating Java-based projects +# File: osdk-java-updating-projects + - Name: Defining cluster service versions (CSVs) + File: osdk-generating-csvs + - Name: Working with bundle images + File: osdk-working-bundle-images + - Name: Complying with pod security admission + File: osdk-complying-with-psa + - Name: Validating Operators using the scorecard + File: osdk-scorecard + - Name: Validating Operator bundles + File: osdk-bundle-validate + - Name: High-availability or single-node cluster detection and support + File: osdk-ha-sno + - Name: Configuring built-in monitoring with Prometheus + File: osdk-monitoring-prometheus + - Name: Configuring leader election + File: osdk-leader-election + - Name: Object pruning utility + File: osdk-pruning-utility + - Name: Migrating package manifest projects to bundle format + File: osdk-pkgman-to-bundle + - Name: Operator SDK CLI reference + File: osdk-cli-ref + - Name: Migrating to Operator SDK v0.1.0 + File: osdk-migrating-to-v0-1-0 +# ROSA customers can't configure/edit the cluster Operators +# - Name: Cluster Operators reference +# File: operator-reference +--- Name: Backup and restore Dir: backup_and_restore Distros: openshift-rosa-hcp diff --git a/modules/gathering-operator-logs.adoc b/modules/gathering-operator-logs.adoc index 9ef14db7ec63..5a8799280c89 100644 --- a/modules/gathering-operator-logs.adoc +++ b/modules/gathering-operator-logs.adoc @@ -10,12 +10,12 @@ If you experience Operator issues, you can gather detailed diagnostic informatio .Prerequisites -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * You have access to the cluster as a user with the `cluster-admin` role. 
-endif::openshift-rosa,openshift-dedicated[] -ifdef::openshift-rosa,openshift-dedicated[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] +ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * You have access to the cluster as a user with the `dedicated-admin` role. -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * Your API service is still functional. * You have installed the OpenShift CLI (`oc`). * You have the fully qualified domain names of the control plane or control plane machines. @@ -42,7 +42,7 @@ If an Operator pod has multiple containers, the preceding command will produce a ---- $ oc logs pod/ -c -n ---- - +ifndef::openshift-rosa-hcp[] . If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace `..` with appropriate values. .. List pods on each control plane node: + @@ -83,3 +83,4 @@ $ ssh core@.. sudo crictl logs -f ..`. ==== +endif::openshift-rosa-hcp[] \ No newline at end of file diff --git a/operators/admin/olm-adding-operators-to-cluster.adoc b/operators/admin/olm-adding-operators-to-cluster.adoc index 65bc5d6a1bb5..9d472b67d2da 100644 --- a/operators/admin/olm-adding-operators-to-cluster.adoc +++ b/operators/admin/olm-adding-operators-to-cluster.adoc @@ -10,12 +10,12 @@ include::_attributes/common-attributes.adoc[] toc::[] Using Operator Lifecycle Manager (OLM), -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] cluster administrators -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] +ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] administrators with the `dedicated-admin` role -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] can install OLM-based Operators to an {product-title} cluster. 
[NOTE] @@ -45,7 +45,7 @@ include::modules/olm-installing-from-operatorhub-using-web-console.adoc[leveloff * xref:../../operators/admin/olm-upgrading-operators.adoc#olm-approving-pending-upgrade_olm-upgrading-operators[Manually approving a pending Operator update] -ifdef::openshift-enterprise,openshift-webscale,openshift-origin,openshift-dedicated,openshift-rosa[] +ifdef::openshift-enterprise,openshift-webscale,openshift-origin,openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] include::modules/olm-installing-from-operatorhub-using-cli.adoc[leveloffset=+1] [role="_additional-resources"] @@ -89,13 +89,13 @@ include::modules/olm-pod-placement.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * Adding taints and tolerations xref:../../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations-adding_nodes-scheduler-taints-tolerations[manually to nodes] or xref:../../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations-adding-machineset_nodes-scheduler-taints-tolerations[with compute machine sets] -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * xref:../../nodes/scheduling/nodes-scheduler-node-selectors.adoc#nodes-scheduler-node-selectors-project_nodes-scheduler-node-selectors[Creating project-wide node selectors] -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * xref:../../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations-projects_nodes-scheduler-taints-tolerations[Creating a project with a node selector and toleration] -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] endif::[] include::modules/olm-overriding-operator-pod-affinity.adoc[leveloffset=+1] @@ -106,6 +106,6 @@ include::modules/olm-overriding-operator-pod-affinity.adoc[leveloffset=+1] * xref:../../nodes/scheduling/nodes-scheduler-pod-affinity.adoc#nodes-scheduler-pod-affinity-about_nodes-scheduler-pod-affinity[Understanding pod affinity] * xref:../../nodes/scheduling/nodes-scheduler-node-affinity.adoc#nodes-scheduler-node-affinity-about_nodes-scheduler-node-affinity[Understanding node affinity] // This xref points to a topic not currently included in the OSD and ROSA docs. -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * xref:../../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-updating_nodes-nodes-working[Understanding how to update labels on nodes] -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] diff --git a/operators/admin/olm-configuring-proxy-support.adoc b/operators/admin/olm-configuring-proxy-support.adoc index 85e18486e04e..49dc2ff2a76e 100644 --- a/operators/admin/olm-configuring-proxy-support.adoc +++ b/operators/admin/olm-configuring-proxy-support.adoc @@ -12,17 +12,17 @@ If a global proxy is configured on the {product-title} cluster, Operator Lifecyc .Additional resources // Configuring the cluster-wide proxy is a different topic in OSD/ROSA. 
-ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * xref:../../networking/enable-cluster-wide-proxy.adoc#enable-cluster-wide-proxy[Configuring the cluster-wide proxy] -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ifdef::openshift-dedicated,openshift-rosa[] * xref:../../networking/configuring-cluster-wide-proxy.adoc[Configuring a cluster-wide proxy] endif::openshift-dedicated,openshift-rosa[] // This xref points to a topic that is not currently included in the OSD and ROSA docs. -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * xref:../../networking/configuring-a-custom-pki.adoc#configuring-a-custom-pki[Configuring a custom PKI] (custom CA certificate) -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * Developing Operators that support proxy settings for xref:../../operators/operator_sdk/golang/osdk-golang-tutorial.adoc#osdk-run-proxy_osdk-golang-tutorial[Go], xref:../../operators/operator_sdk/ansible/osdk-ansible-tutorial.adoc#osdk-run-proxy_osdk-ansible-tutorial[Ansible], and xref:../../operators/operator_sdk/helm/osdk-helm-tutorial.adoc#osdk-run-proxy_osdk-helm-tutorial[Helm] diff --git a/operators/admin/olm-cs-podsched.adoc b/operators/admin/olm-cs-podsched.adoc index faef82038a7e..0c192fa82216 100644 --- a/operators/admin/olm-cs-podsched.adoc +++ b/operators/admin/olm-cs-podsched.adoc @@ -33,9 +33,9 @@ include::modules/disabling-catalogsource-objects.adoc[leveloffset=+1] * xref:../../operators/understanding/olm-understanding-operatorhub.adoc#olm-operatorhub-arch-operatorhub_crd_olm-understanding-operatorhub[OperatorHub custom resource] -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * xref:../../disconnected/using-olm.adoc#olm-restricted-networks-operatorhub_olm-restricted-networks[Disabling the default OperatorHub catalog sources] -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] include::modules/olm-node-selector.adoc[leveloffset=+1] @@ -55,9 +55,9 @@ include::modules/olm-tolerations.adoc[leveloffset=+1] // The following xref points to a topic that is not included in the OSD or // ROSA docs. -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] [role="_additional-resources"] .Additional resources * xref:../../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations-about_nodes-scheduler-taints-tolerations[Understanding taints and tolerations] -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] diff --git a/operators/admin/olm-deleting-operators-from-cluster.adoc b/operators/admin/olm-deleting-operators-from-cluster.adoc index cfb8c4a7923e..18b72926ee7f 100644 --- a/operators/admin/olm-deleting-operators-from-cluster.adoc +++ b/operators/admin/olm-deleting-operators-from-cluster.adoc @@ -12,9 +12,9 @@ The following describes how to delete, or uninstall, Operators that were previou ==== You must successfully and completely uninstall an Operator prior to attempting to reinstall the same Operator. 
Failure to fully uninstall the Operator properly can leave resources, such as a project or namespace, stuck in a "Terminating" state and cause "error resolving resource" messages to be observed when trying to reinstall the Operator. -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] For more information, see xref:../../operators/admin/olm-troubleshooting-operator-issues.adoc#olm-reinstall_olm-troubleshooting-operator-issues[Reinstalling Operators after failed uninstallation]. -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ==== include::modules/olm-deleting-operators-from-a-cluster-using-web-console.adoc[leveloffset=+1] diff --git a/operators/admin/olm-managing-custom-catalogs.adoc b/operators/admin/olm-managing-custom-catalogs.adoc index 9e6da3946d34..290cde3bb89e 100644 --- a/operators/admin/olm-managing-custom-catalogs.adoc +++ b/operators/admin/olm-managing-custom-catalogs.adoc @@ -6,12 +6,12 @@ include::_attributes/common-attributes.adoc[] toc::[] -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] Cluster administrators -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] +ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] Administrators with the `dedicated-admin` role -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] and Operator catalog maintainers can create and manage custom catalogs packaged using the xref:../../operators/understanding/olm-packaging-format.adoc#olm-bundle-format_olm-packaging-format[bundle format] on Operator Lifecycle Manager (OLM) in {product-title}. [IMPORTANT] @@ -29,7 +29,10 @@ If your cluster is using custom catalogs, see xref:../../operators/operator_sdk/ [id="olm-managing-custom-catalogs-bundle-format-prereqs"] == Prerequisites +// TODO-HCP remove conditions for HCP after cli_tools book is migrated +ifndef::openshift-rosa-hcp[] * You have installed the xref:../../cli_reference/opm/cli-opm-install.adoc#cli-opm-install[`opm` CLI]. +endif::openshift-rosa-hcp[] [id="olm-managing-custom-catalogs-fb"] == File-based catalogs @@ -43,29 +46,32 @@ As of {product-title} 4.11, the default Red Hat-provided Operator catalog releas The `opm` subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the `opm` subcommands and flags for working with the SQLite database format, such as `opm index prune`, do not work with the file-based catalog format. -ifndef::openshift-dedicated,openshift-rosa[] -For more information about working with file-based catalogs, see xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Operator Framework packaging format] and xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#about-installing-oc-mirror-v2[Mirroring images for a disconnected installation by using the oc-mirror plugin v2]. 
-endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] +For more information about working with file-based catalogs, see xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Operator Framework packaging format] and xref:../../disconnected/mirroring/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin]. +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] +ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] For more information about working with file-based catalogs, see xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Operator Framework packaging format]. -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ==== include::modules/olm-creating-fb-catalog-image.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources +// TODO-HCP remove conditions for HCP after cli_tools book is migrated +ifndef::openshift-rosa-hcp[] * xref:../../cli_reference/opm/cli-opm-ref.adoc#cli-opm-ref[`opm` CLI reference] +endif::openshift-rosa-hcp[] include::modules/olm-filtering-fbc.adoc[leveloffset=+2] -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] [role="_additional-resources"] .Additional resources * xref:../../operators/understanding/olm-packaging-format.adoc#olm-deprecations-schema_olm-packaging-format[Packaging format -> Schemas -> olm.deprecations schema] * xref:../../disconnected/mirroring/installing-mirroring-disconnected.adoc#updating-mirror-registry-content[Mirroring images for a disconnected installation using the oc-mirror plugin -> Keeping your mirror registry content updated] * xref:../../disconnected/using-olm.adoc#olm-creating-catalog-from-index_olm-restricted-networks[Adding a catalog source to a cluster] -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] [id="olm-managing-custom-catalogs-sqlite"] == SQLite-based catalogs @@ -82,7 +88,10 @@ include::modules/olm-catalog-source-and-psa.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources +// TODO-HCP remove conditions for HCP after cli_tools book is migrated +ifndef::openshift-rosa-hcp[] * xref:../../authentication/understanding-and-managing-pod-security-admission.adoc#understanding-and-managing-pod-security-admission[Understanding and managing pod security admission] +endif::openshift-rosa-hcp[] include::modules/olm-migrating-sqlite-catalog-to-fbc.adoc[leveloffset=+2] @@ -100,14 +109,14 @@ include::modules/olm-creating-catalog-from-index.adoc[leveloffset=+1] .Additional resources * xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-catalogsource_olm-understanding-olm[Operator Lifecycle Manager concepts and resources -> Catalog source] -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-accessing-images-private-registries_olm-managing-custom-catalogs[Accessing images for Operators from private registries] // This xref may be relevant to OSD/ROSA, but the topic is not currently included in the OSD and ROSA docs. 
* xref:../../openshift_images/managing_images/image-pull-policy.adoc#image-pull-policy[Image pull policy] -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] // Exclude from OSD/ROSA - dedicated-admins can't create the necessary secrets to do this procedure. -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] include::modules/olm-accessing-images-private-registries.adoc[leveloffset=+1] [role="_additional-resources"] @@ -116,17 +125,17 @@ include::modules/olm-accessing-images-private-registries.adoc[leveloffset=+1] * See xref:../../cicd/builds/creating-build-inputs.adoc#builds-secrets-overview_creating-build-inputs[What is a secret?] for more information on the types of secrets, including those used for registry credentials. * See xref:../../openshift_images/managing_images/using-image-pull-secrets.adoc#images-update-global-pull-secret_using-image-pull-secrets[Updating the global cluster pull secret] for more details on the impact of changing this secret. * See xref:../../openshift_images/managing_images/using-image-pull-secrets.adoc#images-allow-pods-to-reference-images-from-secure-registries_using-image-pull-secrets[Allowing pods to reference images from other secured registries] for more details on linking pull secrets to service accounts per namespace. -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] // Exclude from OSD/ROSA - dedicated-admins can't do this procedure. -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] include::modules/olm-restricted-networks-configuring-operatorhub.adoc[leveloffset=+1] -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] // Removing custom catalogs can be done as a dedicated-admin, but the steps are different. -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] include::modules/olm-removing-catalogs.adoc[leveloffset=+1] -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] +ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] include::modules/sd-olm-removing-catalogs.adoc[leveloffset=+1] -endif::openshift-dedicated,openshift-rosa[] \ No newline at end of file +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] \ No newline at end of file diff --git a/operators/admin/olm-managing-operatorconditions.adoc b/operators/admin/olm-managing-operatorconditions.adoc index 1079c2d2f6d4..b7052be0a864 100644 --- a/operators/admin/olm-managing-operatorconditions.adoc +++ b/operators/admin/olm-managing-operatorconditions.adoc @@ -6,12 +6,12 @@ include::_attributes/common-attributes.adoc[] toc::[] -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] As a cluster administrator, you can manage Operator conditions by using Operator Lifecycle Manager (OLM). -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] +ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] As an administrator with the `dedicated-admin` role, you can manage Operator conditions by using Operator Lifecycle Manager (OLM). 
-endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] include::modules/olm-overriding-operatorconditions.adoc[leveloffset=+1] include::modules/olm-updating-use-operatorconditions.adoc[leveloffset=+1] diff --git a/operators/admin/olm-status.adoc b/operators/admin/olm-status.adoc index ee437f105c29..4849030b05ed 100644 --- a/operators/admin/olm-status.adoc +++ b/operators/admin/olm-status.adoc @@ -23,6 +23,6 @@ include::modules/olm-cs-status-cli.adoc[leveloffset=+1] * xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-catalogsource_olm-understanding-olm[Operator Lifecycle Manager concepts and resources -> Catalog source] * gRPC documentation: link:https://grpc.github.io/grpc/core/md_doc_connectivity-semantics-and-api.html[States of Connectivity] -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-accessing-images-private-registries_olm-managing-custom-catalogs[Accessing images for Operators from private registries] -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] diff --git a/operators/admin/olm-troubleshooting-operator-issues.adoc b/operators/admin/olm-troubleshooting-operator-issues.adoc index 3ae7b624e775..e46c13c53eda 100644 --- a/operators/admin/olm-troubleshooting-operator-issues.adoc +++ b/operators/admin/olm-troubleshooting-operator-issues.adoc @@ -26,13 +26,13 @@ include::modules/olm-cs-status-cli.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-catalogsource_olm-understanding-olm[Operator Lifecycle Manager concepts and resources -> Catalog source] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * gRPC documentation: link:https://grpc.github.io/grpc/core/md_doc_connectivity-semantics-and-api.html[States of Connectivity] -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-accessing-images-private-registries_olm-managing-custom-catalogs[Accessing images for Operators from private registries] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] // Querying Operator Pod status include::modules/querying-operator-pod-status.adoc[leveloffset=+1] @@ -41,29 +41,29 @@ include::modules/querying-operator-pod-status.adoc[leveloffset=+1] include::modules/gathering-operator-logs.adoc[leveloffset=+1] // cannot patch resource "machineconfigpools" -ifndef::openshift-rosa,openshift-dedicated[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] // Disabling Machine Config Operator from autorebooting include::modules/troubleshooting-disabling-autoreboot-mco.adoc[leveloffset=+1] include::modules/troubleshooting-disabling-autoreboot-mco-console.adoc[leveloffset=+2] include::modules/troubleshooting-disabling-autoreboot-mco-cli.adoc[leveloffset=+2] -endif::openshift-rosa,openshift-dedicated[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] // Refreshing failing subscriptions // OSD/ROSA cannot delete resource "clusterserviceversions", "jobs" in API group "operators.coreos.com" in 
the namespace "openshift-apiserver"
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/olm-refresh-subs.adoc[leveloffset=+1]
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]

// Reinstalling Operators after failed uninstallation
// OSD/ROSA cannot delete resource "customresourcedefinitions"
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/olm-reinstall.adoc[leveloffset=+1]
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]

-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[role="_additional-resources"]
.Additional resources

* xref:../../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster]
* xref:../../operators/admin/olm-adding-operators-to-cluster.adoc#olm-adding-operators-to-a-cluster[Adding Operators to a cluster]
-endif::openshift-rosa,openshift-dedicated[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
diff --git a/operators/admin/olm-upgrading-operators.adoc b/operators/admin/olm-upgrading-operators.adoc
index 802ee925c1d6..7110e45fb723 100644
--- a/operators/admin/olm-upgrading-operators.adoc
+++ b/operators/admin/olm-upgrading-operators.adoc
@@ -7,10 +7,10 @@ include::_attributes/common-attributes.adoc[]
 toc::[]
 
 As
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
 a cluster administrator,
 endif::[]
-ifdef::openshift-dedicated,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
 an administrator with the `dedicated-admin` role,
 endif::[]
 you can update Operators that have been previously installed using Operator Lifecycle Manager (OLM) on your {product-title} cluster.
@@ -24,10 +24,10 @@ include::modules/olm-preparing-upgrade.adoc[leveloffset=+1]
 include::modules/olm-changing-update-channel.adoc[leveloffset=+1]
 include::modules/olm-approving-pending-upgrade.adoc[leveloffset=+1]

-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[role="_additional-resources"]
[id="additional-resources_olm-upgrading-operators"]
== Additional resources

* xref:../../disconnected/using-olm.adoc#olm-restricted-networks[Using Operator Lifecycle Manager in disconnected environments]
-endif::openshift-dedicated,openshift-rosa[] \ No newline at end of file
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] \ No newline at end of file
diff --git a/operators/index.adoc b/operators/index.adoc
index 082e98c777d6..152a841fdeb2 100644
--- a/operators/index.adoc
+++ b/operators/index.adoc
@@ -15,59 +15,59 @@ As an Operator author, you can perform the following development tasks for OLM-b
 ** xref:../operators/operator_sdk/osdk-installing-cli.adoc#osdk-installing-cli[Install Operator SDK CLI].
 // The Operator quickstarts aren't published for OSD/ROSA, so for OSD/ROSA, these xrefs point to the tutorials instead.
-ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ** Create xref:../operators/operator_sdk/golang/osdk-golang-quickstart.adoc#osdk-golang-quickstart[Go-based Operators], xref:../operators/operator_sdk/ansible/osdk-ansible-quickstart.adoc#osdk-ansible-quickstart[Ansible-based Operators], xref:../operators/operator_sdk/java/osdk-java-quickstart.adoc#osdk-java-quickstart[Java-based Operators], and xref:../operators/operator_sdk/helm/osdk-helm-quickstart.adoc#osdk-helm-quickstart[Helm-based Operators]. -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] // TODO: When the Java-based Operators is GA, it can be added to the list below for OSD/ROSA. -ifdef::openshift-dedicated,openshift-rosa[] +ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ** Create xref:../operators/operator_sdk/golang/osdk-golang-tutorial.adoc#osdk-golang-tutorial[Go-based Operators], xref:../operators/operator_sdk/ansible/osdk-ansible-tutorial.adoc#osdk-ansible-tutorial[Ansible-based Operators], and xref:../operators/operator_sdk/helm/osdk-helm-tutorial.adoc#osdk-helm-tutorial[Helm-based Operators]. -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ** xref:../operators/operator_sdk/osdk-about.adoc#osdk-about[Use Operator SDK to build, test, and deploy an Operator]. -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ** xref:../operators/user/olm-installing-operators-in-namespace.adoc#olm-installing-operators-in-namespace[Install and subscribe an Operator to your namespace]. -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ** xref:../operators/user/olm-creating-apps-from-installed-operators.adoc#olm-creating-apps-from-installed-operators[Create an application from an installed Operator through the web console]. // This xref could be relevant for OSD/ROSA, but the target doesn't currently exist in the OSD/ROSA docs. -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] [role="_additional-resources"] .Additional resources * xref:../machine_management/deleting-machine.adoc#machine-lifecycle-hook-deletion-uses_deleting-machine[Machine deletion lifecycle hook examples for Operator developers] -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] [id="operators-overview-administrator-tasks_{context}"] == For administrators -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] As a cluster administrator, you can perform the following administrative tasks for OLM-based Operators: -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] -ifdef::openshift-dedicated,openshift-rosa[] +ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] As an administrator with the `dedicated-admin` role, you can perform the following Operator tasks: -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ** xref:../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs[Manage custom catalogs]. 
-ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ** xref:../operators/admin/olm-creating-policy.adoc#olm-creating-policy[Allow non-cluster administrators to install Operators]. -endif::openshift-dedicated,openshift-rosa[] -ifndef::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ** xref:../operators/user/olm-installing-operators-in-namespace.adoc#olm-installing-operators-in-namespace[Install an Operator from OperatorHub]. -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] +ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ** xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-installing-operators-from-operatorhub_olm-adding-operators-to-a-cluster[Install an Operator from OperatorHub]. -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ** xref:../operators/admin/olm-status.adoc#olm-status[View Operator status]. ** xref:../operators/admin/olm-managing-operatorconditions.adoc#olm-managing-operatorconditions[Manage Operator conditions]. ** xref:../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Upgrade installed Operators]. ** xref:../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[Delete installed Operators]. ** xref:../operators/admin/olm-configuring-proxy-support.adoc#olm-configuring-proxy-support[Configure proxy support]. -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ** xref:../disconnected/using-olm.adoc#olm-restricted-networks[Using Operator Lifecycle Manager in disconnected environments]. // Not sure if the xref above should be changed in #82841 since this is the index page of the Operators section For information about the cluster Operators that Red Hat provides, see xref:../operators/operator-reference.adoc#cluster-operators-ref[Cluster Operators reference]. -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] [id="operators-overview-next-steps"] == Next steps diff --git a/operators/operator_sdk/ansible/osdk-ansible-tutorial.adoc b/operators/operator_sdk/ansible/osdk-ansible-tutorial.adoc index e30868ead3c0..700bdb3698c3 100644 --- a/operators/operator_sdk/ansible/osdk-ansible-tutorial.adoc +++ b/operators/operator_sdk/ansible/osdk-ansible-tutorial.adoc @@ -20,27 +20,30 @@ Operator SDK:: The `operator-sdk` CLI tool and `controller-runtime` library API Operator Lifecycle Manager (OLM):: Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster -ifndef::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] [NOTE] ==== This tutorial goes into greater detail than xref:../../../operators/operator_sdk/ansible/osdk-ansible-quickstart.adoc#osdk-ansible-quickstart[Getting started with Operator SDK for Ansible-based Operators]. ==== -endif::openshift-dedicated,openshift-rosa[] +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] // The "Getting started" quickstarts require cluster-admin and are therefore only available in OCP. 
-ifdef::openshift-dedicated,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[NOTE]
====
This tutorial goes into greater detail than link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-ansible-quickstart[Getting started with Operator SDK for Ansible-based Operators] in the OpenShift Container Platform documentation.
====
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/osdk-common-prereqs.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../../operators/operator_sdk/osdk-installing-cli.adoc#osdk-installing-cli[Installing the Operator SDK CLI]
+// TODO-HCP remove the ifndef conditions on lines 44 and 46 for HCP after the cli_tools book is migrated
+ifndef::openshift-rosa-hcp[]
* xref:../../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI]
+endif::openshift-rosa-hcp[]
include::modules/osdk-create-project.adoc[leveloffset=+1]
include::modules/osdk-project-file.adoc[leveloffset=+2]
@@ -52,18 +55,18 @@ include::modules/osdk-run-proxy.adoc[leveloffset=+1]
include::modules/osdk-run-operator.adoc[leveloffset=+1]
-ifdef::openshift-dedicated,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[role="_additional-resources"]
.Additional resources
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-run-locally_osdk-ansible-tutorial[Running locally outside the cluster] (OpenShift Container Platform documentation)
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-run-deployment_osdk-ansible-tutorial[Running as a deployment on the cluster] (OpenShift Container Platform documentation)
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
// In OSD/ROSA, the only applicable option for running the Operator is to bundle and deploy with OLM.
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/osdk-run-locally.adoc[leveloffset=+2]
include::modules/osdk-run-deployment.adoc[leveloffset=+2]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[id="osdk-bundle-deploy-olm_{context}"]
=== Bundling an Operator and deploying with Operator Lifecycle Manager
@@ -78,9 +81,13 @@ include::modules/osdk-create-cr.adoc[leveloffset=+1]
== Additional resources
* See xref:../../../operators/operator_sdk/ansible/osdk-ansible-project-layout.adoc#osdk-ansible-project-layout[Project layout for Ansible-based Operators] to learn about the directory structures created by the Operator SDK.
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* If a xref:../../../networking/enable-cluster-wide-proxy.adoc#enable-cluster-wide-proxy[cluster-wide egress proxy is configured], cluster administrators can xref:../../../operators/admin/olm-configuring-proxy-support.adoc#olm-configuring-proxy-support[override the proxy settings or inject a custom CA certificate] for specific Operators running on Operator Lifecycle Manager (OLM).
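The proxy-override bullet above refers to an OLM capability that is easy to show concretely. A minimal sketch, assuming a hypothetical Operator subscription (the name, namespace, and proxy values are placeholders, not values from this patch): per-Operator proxy settings are supplied through the `spec.config.env` list of the `Subscription` object.

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: openshift-operators
spec:
  config:
    env:
    - name: HTTP_PROXY
      value: http://proxy.example.com:3128
    - name: HTTPS_PROXY
      value: http://proxy.example.com:3128
    - name: NO_PROXY
      value: .cluster.local,.svc
----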
-endif::openshift-dedicated,openshift-rosa[]
-ifdef::openshift-dedicated,openshift-rosa[]
-* If a xref:../../../networking/configuring-cluster-wide-proxy.adoc#configuring-a-cluster-wide-proxy[cluster-wide egress proxy is configured], administrators with the `dedicated-admin` role can xref:../../../operators/admin/olm-configuring-proxy-support.adoc#olm-configuring-proxy-support[override the proxy settings or inject a custom CA certificate] for specific Operators running on Operator Lifecycle Manager (OLM).
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
+// TODO-HCP remove the ifndef conditions on lines 88 and 91 for HCP after the networking book is migrated
+ifndef::openshift-rosa-hcp[]
+* If a xref:../../../networking/configuring-cluster-wide-proxy.adoc#configuring-a-cluster-wide-proxy[cluster-wide egress proxy is configured],
+endif::openshift-rosa-hcp[]
+administrators with the `dedicated-admin` role can xref:../../../operators/admin/olm-configuring-proxy-support.adoc#olm-configuring-proxy-support[override the proxy settings or inject a custom CA certificate] for specific Operators running on Operator Lifecycle Manager (OLM).
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]

diff --git a/operators/operator_sdk/golang/osdk-golang-tutorial.adoc b/operators/operator_sdk/golang/osdk-golang-tutorial.adoc
index c2928008367f..51cb7b75d67b 100644
--- a/operators/operator_sdk/golang/osdk-golang-tutorial.adoc
+++ b/operators/operator_sdk/golang/osdk-golang-tutorial.adoc
@@ -16,27 +16,30 @@ Operator SDK:: The `operator-sdk` CLI tool and `controller-runtime` library API
Operator Lifecycle Manager (OLM):: Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[NOTE]
====
This tutorial goes into greater detail than xref:../../../operators/operator_sdk/golang/osdk-golang-quickstart.adoc#osdk-golang-quickstart[Getting started with Operator SDK for Go-based Operators].
====
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
// The "Getting started" quickstarts require cluster-admin and are therefore only available in OCP.
-ifdef::openshift-dedicated,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[NOTE]
====
This tutorial goes into greater detail than link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-golang-quickstart[Getting started with Operator SDK for Go-based Operators] in the OpenShift Container Platform documentation.
====
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/osdk-common-prereqs.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../../operators/operator_sdk/osdk-installing-cli.adoc#osdk-installing-cli[Installing the Operator SDK CLI]
+// TODO-HCP remove the ifndef conditions on lines 40 and 42 for HCP after the cli_tools book is migrated
+ifndef::openshift-rosa-hcp[]
* xref:../../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI]
+endif::openshift-rosa-hcp[]
include::modules/osdk-create-project.adoc[leveloffset=+1]
include::modules/osdk-project-file.adoc[leveloffset=+2]
@@ -60,18 +63,18 @@ include::modules/osdk-golang-controller-rbac-markers.adoc[leveloffset=+2]
include::modules/osdk-run-proxy.adoc[leveloffset=+1]
include::modules/osdk-run-operator.adoc[leveloffset=+1]
-ifdef::openshift-dedicated,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[role="_additional-resources"]
.Additional resources
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-run-locally_osdk-golang-tutorial[Running locally outside the cluster] (OpenShift Container Platform documentation)
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-run-deployment_osdk-golang-tutorial[Running as a deployment on the cluster] (OpenShift Container Platform documentation)
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
// In OSD/ROSA, the only applicable option for running the Operator is to bundle and deploy with OLM.
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/osdk-run-locally.adoc[leveloffset=+2]
include::modules/osdk-run-deployment.adoc[leveloffset=+2]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[id="osdk-bundle-deploy-olm_{context}"]
=== Bundling an Operator and deploying with Operator Lifecycle Manager
@@ -86,9 +89,15 @@ include::modules/osdk-create-cr.adoc[leveloffset=+1]
== Additional resources
* See xref:../../../operators/operator_sdk/golang/osdk-golang-project-layout.adoc#osdk-golang-project-layout[Project layout for Go-based Operators] to learn about the directory structures created by the Operator SDK.
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* If a xref:../../../networking/enable-cluster-wide-proxy.adoc#enable-cluster-wide-proxy[cluster-wide egress proxy is configured], cluster administrators can xref:../../../operators/admin/olm-configuring-proxy-support.adoc#olm-configuring-proxy-support[override the proxy settings or inject a custom CA certificate] for specific Operators running on Operator Lifecycle Manager (OLM).
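The Go tutorial assembled above includes a module on controller RBAC markers. As a reminder of what those markers do in practice, here is a minimal sketch in Go, with a hypothetical API group and resource name: `controller-gen` reads the comment markers above the reconciler and generates the RBAC rules the Operator needs at runtime.

[source,go]
----
// These markers are consumed by controller-gen, not compiled; running
// `make manifests` regenerates the role that grants these permissions.
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch

func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Reconciliation logic is omitted in this sketch.
	return ctrl.Result{}, nil
}
----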
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
ifdef::openshift-dedicated,openshift-rosa[]
-* If a xref:../../../networking/configuring-cluster-wide-proxy.adoc#configuring-a-cluster-wide-proxy[cluster-wide egress proxy is configured], administrators with the `dedicated-admin` role can xref:../../../operators/admin/olm-configuring-proxy-support.adoc#olm-configuring-proxy-support[override the proxy settings or inject a custom CA certificate] for specific Operators running on Operator Lifecycle Manager (OLM).
+// TODO-HCP remove the conditions on lines 97 and 99 and add the HCP condition to lines 92 and 98 after the networking book is migrated
+ifndef::openshift-rosa-hcp[]
+* If a xref:../../../networking/configuring-cluster-wide-proxy.adoc#configuring-a-cluster-wide-proxy[cluster-wide egress proxy is configured],
+endif::openshift-rosa-hcp[]
+administrators with the `dedicated-admin` role can xref:../../../operators/admin/olm-configuring-proxy-support.adoc#olm-configuring-proxy-support[override the proxy settings or inject a custom CA certificate] for specific Operators running on Operator Lifecycle Manager (OLM).
endif::openshift-dedicated,openshift-rosa[]
+
+

diff --git a/operators/operator_sdk/helm/osdk-helm-tutorial.adoc b/operators/operator_sdk/helm/osdk-helm-tutorial.adoc
index 71c14c118794..ec6d25f9b455 100644
--- a/operators/operator_sdk/helm/osdk-helm-tutorial.adoc
+++ b/operators/operator_sdk/helm/osdk-helm-tutorial.adoc
@@ -20,27 +20,30 @@ Operator SDK:: The `operator-sdk` CLI tool and `controller-runtime` library API
Operator Lifecycle Manager (OLM):: Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[NOTE]
====
This tutorial goes into greater detail than xref:../../../operators/operator_sdk/helm/osdk-helm-quickstart.adoc#osdk-helm-quickstart[Getting started with Operator SDK for Helm-based Operators].
====
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
// The "Getting started" quickstarts require cluster-admin and are therefore only available in OCP.
-ifdef::openshift-dedicated,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[NOTE]
====
This tutorial goes into greater detail than link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-helm-quickstart[Getting started with Operator SDK for Helm-based Operators] in the OpenShift Container Platform documentation.
====
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/osdk-common-prereqs.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../../operators/operator_sdk/osdk-installing-cli.adoc#osdk-installing-cli[Installing the Operator SDK CLI]
+// TODO-HCP remove the ifndef conditions on lines 44 and 46 for HCP after the cli_tools book is migrated
+ifndef::openshift-rosa-hcp[]
* xref:../../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI]
+endif::openshift-rosa-hcp[]
include::modules/osdk-create-project.adoc[leveloffset=+1]
include::modules/osdk-helm-existing-chart.adoc[leveloffset=+2]
@@ -55,18 +58,18 @@ include::modules/osdk-run-proxy.adoc[leveloffset=+1]
include::modules/osdk-run-operator.adoc[leveloffset=+1]
-ifdef::openshift-dedicated,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[role="_additional-resources"]
.Additional resources
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-run-locally_osdk-helm-tutorial[Running locally outside the cluster] (OpenShift Container Platform documentation)
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-run-deployment_osdk-helm-tutorial[Running as a deployment on the cluster] (OpenShift Container Platform documentation)
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
// In OSD/ROSA, the only applicable option for running the Operator is to bundle and deploy with OLM.
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/osdk-run-locally.adoc[leveloffset=+2]
include::modules/osdk-run-deployment.adoc[leveloffset=+2]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[id="osdk-bundle-deploy-olm_{context}"]
=== Bundling an Operator and deploying with Operator Lifecycle Manager
@@ -81,9 +84,13 @@ include::modules/osdk-create-cr.adoc[leveloffset=+1]
== Additional resources
* See xref:../../../operators/operator_sdk/helm/osdk-helm-project-layout.adoc#osdk-helm-project-layout[Project layout for Helm-based Operators] to learn about the directory structures created by the Operator SDK.
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* If a xref:../../../networking/enable-cluster-wide-proxy.adoc#enable-cluster-wide-proxy[cluster-wide egress proxy is configured], cluster administrators can xref:../../../operators/admin/olm-configuring-proxy-support.adoc#olm-configuring-proxy-support[override the proxy settings or inject a custom CA certificate] for specific Operators running on Operator Lifecycle Manager (OLM).
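The same proxy bullet also mentions injecting a custom CA certificate. A minimal sketch of the usual mechanism, with a hypothetical namespace (only the label is the significant part): labeling an empty ConfigMap causes the cluster network operator to populate it with the combined trusted CA bundle, which a Subscription can then mount into the Operator deployment.

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-ca
  namespace: example-operator-ns
  labels:
    config.openshift.io/inject-trusted-cabundle: "true"
----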
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
ifdef::openshift-dedicated,openshift-rosa[]
+// TODO-HCP remove the ifndef conditions on lines 92 and 94 for HCP after the networking book is migrated and put the HCP condition back on lines 90 and 95
+ifndef::openshift-rosa-hcp[]
* If a xref:../../../networking/configuring-cluster-wide-proxy.adoc#configuring-a-cluster-wide-proxy[cluster-wide egress proxy is configured], administrators with the `dedicated-admin` role can xref:../../../operators/admin/olm-configuring-proxy-support.adoc#olm-configuring-proxy-support[override the proxy settings or inject a custom CA certificate] for specific Operators running on Operator Lifecycle Manager (OLM).
+endif::openshift-rosa-hcp[]
endif::openshift-dedicated,openshift-rosa[]
+

diff --git a/operators/operator_sdk/java/osdk-java-tutorial.adoc b/operators/operator_sdk/java/osdk-java-tutorial.adoc
index 12eecea2a4fb..a0d5d5d4a2fb 100644
--- a/operators/operator_sdk/java/osdk-java-tutorial.adoc
+++ b/operators/operator_sdk/java/osdk-java-tutorial.adoc
@@ -20,20 +20,20 @@ Operator SDK:: The `operator-sdk` CLI tool and `java-operator-sdk` library API
Operator Lifecycle Manager (OLM):: Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[NOTE]
====
This tutorial goes into greater detail than xref:../../../operators/operator_sdk/java/osdk-java-quickstart.adoc#osdk-java-quickstart[Getting started with Operator SDK for Java-based Operators].
====
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
// The "Getting started" quickstarts require cluster-admin and are therefore only available in OCP.
-ifdef::openshift-dedicated,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[NOTE]
====
This tutorial goes into greater detail than link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-java-quickstart[Getting started with Operator SDK for Java-based Operators] in the OpenShift Container Platform documentation.
====
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/osdk-common-prereqs.adoc[leveloffset=+1]
@@ -60,18 +60,18 @@ include::modules/osdk-java-controller-memcached-deployment.adoc[leveloffset=+2]
include::modules/osdk-run-operator.adoc[leveloffset=+1]
-ifdef::openshift-dedicated,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[role="_additional-resources"]
.Additional resources
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-run-locally_osdk-java-tutorial[Running locally outside the cluster] (OpenShift Container Platform documentation)
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-run-deployment_osdk-java-tutorial[Running as a deployment on the cluster] (OpenShift Container Platform documentation)
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
// In OSD/ROSA, the only applicable option for running the Operator is to bundle and deploy with OLM.
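These assemblies rely heavily on `include::` with `leveloffset`, so a minimal sketch of the mechanic may help; the module name and title below are hypothetical. `leveloffset=+2` shifts every heading in the included module down two levels, so a module authored with a level-0 title renders as a level-2 section inside the assembly.

[source,asciidoc]
----
// In the assembly:
include::modules/example-module.adoc[leveloffset=+2]

// In modules/example-module.adoc:
= Example module title
This title renders as a level-2 heading (===) inside the assembly.
----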
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/osdk-run-locally.adoc[leveloffset=+2]
include::modules/osdk-run-deployment.adoc[leveloffset=+2]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[id="osdk-bundle-deploy-olm_{context}"]
=== Bundling an Operator and deploying with Operator Lifecycle Manager

diff --git a/operators/operator_sdk/osdk-about.adoc b/operators/operator_sdk/osdk-about.adoc
index 239054f93a7c..65e45c58496b 100644
--- a/operators/operator_sdk/osdk-about.adoc
+++ b/operators/operator_sdk/osdk-about.adoc
@@ -26,12 +26,12 @@ The Operator SDK is a framework that uses the link:https://github.com/kubernetes
- Extensions to cover common Operator use cases
- Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
Operator authors with cluster administrator access to a Kubernetes-based cluster (such as {product-title})
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
-ifdef::openshift-dedicated,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
Operator authors with dedicated-admin access to {product-title}
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. link:https://kubebuilder.io/[Kubebuilder] is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
[NOTE]

diff --git a/operators/operator_sdk/osdk-bundle-validate.adoc b/operators/operator_sdk/osdk-bundle-validate.adoc
index 7d4a67bd11ca..16773a53b389 100644
--- a/operators/operator_sdk/osdk-bundle-validate.adoc
+++ b/operators/operator_sdk/osdk-bundle-validate.adoc
@@ -20,7 +20,7 @@ include::modules/osdk-bundle-validate-tests.adoc[leveloffset=+1]
include::modules/osdk-bundle-validate-run.adoc[leveloffset=+1]
-ifndef::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/osdk-multi-arch-validate.adoc[leveloffset=+1]
[role="_additional-resources"]

diff --git a/operators/operator_sdk/osdk-cli-ref.adoc b/operators/operator_sdk/osdk-cli-ref.adoc
index b6c1ecac088e..33db59edcabb 100644
--- a/operators/operator_sdk/osdk-cli-ref.adoc
+++ b/operators/operator_sdk/osdk-cli-ref.adoc
@@ -41,7 +41,10 @@ include::modules/osdk-cli-ref-run-bundle.adoc[leveloffset=+2]
* See xref:../../operators/understanding/olm/olm-understanding-operatorgroups.adoc#olm-operatorgroups-membership_olm-understanding-operatorgroups[Operator group membership] for details on possible install modes.
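The `run bundle` subcommand documented by the module above deploys a bundle image with OLM for testing. A minimal sketch, with a hypothetical bundle image name:

[source,terminal]
----
$ operator-sdk run bundle quay.io/example/memcached-operator-bundle:v0.0.1
----

The install mode that OLM selects must be one the bundle's CSV supports, which is why the Operator group membership xref above is listed as a related resource.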
* xref:../../operators/operator_sdk/osdk-complying-with-psa.adoc#osdk-complying-with-psa[Complying with pod security admission]
+// TODO-HCP remove the ifndef conditions on lines 45 and 47 for HCP after the Authentication book is migrated
+ifndef::openshift-rosa-hcp[]
* xref:../../authentication/understanding-and-managing-pod-security-admission.adoc#understanding-and-managing-pod-security-admission[Understanding and managing pod security admission]
+endif::openshift-rosa-hcp[]
include::modules/osdk-cli-ref-run-bundle-upgrade.adoc[leveloffset=+2]
@@ -49,7 +52,10 @@ include::modules/osdk-cli-ref-run-bundle-upgrade.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../operators/operator_sdk/osdk-complying-with-psa.adoc#osdk-complying-with-psa[Complying with pod security admission]
+// TODO-HCP remove the ifndef conditions on lines 55 and 57 for HCP after the Authentication book is migrated
+ifndef::openshift-rosa-hcp[]
* xref:../../authentication/understanding-and-managing-pod-security-admission.adoc#understanding-and-managing-pod-security-admission[Understanding and managing pod security admission]
+endif::openshift-rosa-hcp[]
include::modules/osdk-cli-ref-scorecard.adoc[leveloffset=+1]
@@ -58,4 +64,7 @@ include::modules/osdk-cli-ref-scorecard.adoc[leveloffset=+1]
* See xref:../../operators/operator_sdk/osdk-scorecard.adoc#osdk-scorecard[Validating Operators using the scorecard tool] for details about running the scorecard tool.
* xref:../../operators/operator_sdk/osdk-complying-with-psa.adoc#osdk-complying-with-psa[Complying with pod security admission]
+// TODO-HCP remove the ifndef conditions on lines 67 and 69 for HCP after the Authentication book is migrated
+ifndef::openshift-rosa-hcp[]
* xref:../../authentication/understanding-and-managing-pod-security-admission.adoc#understanding-and-managing-pod-security-admission[Understanding and managing pod security admission]
+endif::openshift-rosa-hcp[]
\ No newline at end of file

diff --git a/operators/operator_sdk/osdk-complying-with-psa.adoc b/operators/operator_sdk/osdk-complying-with-psa.adoc
index 9e80f34132b3..cc448d4ab560 100644
--- a/operators/operator_sdk/osdk-complying-with-psa.adoc
+++ b/operators/operator_sdk/osdk-complying-with-psa.adoc
@@ -13,8 +13,10 @@ If your Operator project does not require escalated permissions to run, you can
* The allowed pod security admission level for the Operator's namespace
* The allowed security context constraints (SCC) for the workload's service account
-
+// TODO-HCP remove the ifndef conditions on lines 17 and 19 for HCP after the authentication book is migrated
+ifndef::openshift-rosa-hcp[]
For more information, see xref:../../authentication/understanding-and-managing-pod-security-admission.adoc#understanding-and-managing-pod-security-admission[Understanding and managing pod security admission].
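For readers following the pod security admission thread above, a minimal sketch of a restricted-profile-compliant container spec, with placeholder names throughout: these are the fields that the restricted pod security admission level checks.

[source,yaml]
----
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: manager
    image: quay.io/example/memcached-operator:v0.0.1
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - "ALL"
----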
+endif::openshift-rosa-hcp[]
include::snippets/osdk-deprecation.adoc[]
@@ -30,13 +32,17 @@ include::modules/osdk-ensuring-operator-workloads-run-restricted-psa.adoc[levelo
[role="_additional-resources"]
.Additional resources
-
+// TODO-HCP remove the ifndef conditions on lines 36 and 38 for HCP after the authentication book is migrated
+ifndef::openshift-rosa-hcp[]
* xref:../../authentication/managing-security-context-constraints.adoc#managing-security-context-constraints[Managing security context constraints]
+endif::openshift-rosa-hcp[]
include::modules/osdk-managing-psa-for-operators-with-escalated-permissions.adoc[leveloffset=+1]
[id="osdk-complying-with-psa-additional-resources"]
[role="_additional-resources"]
== Additional resources
-
+// TODO-HCP remove the ifndef conditions on lines 46 and 48 for HCP after the authentication book is migrated
+ifndef::openshift-rosa-hcp[]
* xref:../../authentication/understanding-and-managing-pod-security-admission.adoc#understanding-and-managing-pod-security-admission[Understanding and managing pod security admission]
+endif::openshift-rosa-hcp[]
\ No newline at end of file

diff --git a/operators/operator_sdk/osdk-generating-csvs.adoc b/operators/operator_sdk/osdk-generating-csvs.adoc
index a46a0e7f39cf..a4020e9de8ab 100644
--- a/operators/operator_sdk/osdk-generating-csvs.adoc
+++ b/operators/operator_sdk/osdk-generating-csvs.adoc
@@ -37,9 +37,9 @@ include::modules/osdk-csv-annotations-infra.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../operators/operator_sdk/osdk-generating-csvs.adoc#olm-enabling-operator-for-restricted-network_osdk-generating-csvs[Enabling your Operator for restricted network environments] (disconnected mode)
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* xref:../../installing/overview/installing-fips.adoc#installing-fips[Support for FIPS cryptography]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/osdk-csv-annotations-dep.adoc[leveloffset=+2]
include::modules/osdk-csv-annotations-other.adoc[leveloffset=+2]
@@ -76,9 +76,9 @@ include::modules/olm-defining-csv-webhooks.adoc[leveloffset=+1]
.Additional resources
// This xref points to a topic that is not currently included in the OSD and ROSA docs.
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* xref:../../architecture/admission-plug-ins.adoc#admission-webhook-types_admission-plug-ins[Types of webhook admission plugins]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* Kubernetes documentation:
** link:https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook[Validating admission webhooks]
** link:https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook[Mutating admission webhooks]

diff --git a/operators/operator_sdk/osdk-ha-sno.adoc b/operators/operator_sdk/osdk-ha-sno.adoc
index 68ac8ed76148..325cae8cabe5 100644
--- a/operators/operator_sdk/osdk-ha-sno.adoc
+++ b/operators/operator_sdk/osdk-ha-sno.adoc
@@ -7,9 +7,9 @@ include::_attributes/common-attributes.adoc[]
toc::[]
// OSD/ROSA don't support single-node clusters, but these Operator authors still need to know how to handle this configuration for their Operators to work correctly in OCP.
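The high-availability module introduced below is about detecting cluster topology. A quick way to see the value an Operator would read, sketched with a command that assumes the `config.openshift.io/v1` Infrastructure API; the output shown is illustrative.

[source,terminal]
----
$ oc get infrastructure cluster -o jsonpath='{.status.infrastructureTopology}'
----

.Example output
[source,terminal]
----
SingleReplica
----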
-ifdef::openshift-dedicated,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
To ensure that your Operator runs well on both high-availability (HA) and non-HA modes in OpenShift Container Platform clusters, you can use the Operator SDK to detect the cluster's infrastructure topology and set the resource requirements to fit the cluster's topology.
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
// Not using {product-title} here, because HA mode and non-HA mode are specific to OCP and should be spelled out this way in other distros.
An OpenShift Container Platform cluster can be configured in high-availability (HA) mode, which uses multiple nodes, or in non-HA mode, which uses a single node. A single-node cluster, also known as {sno}, is likely to have more conservative resource constraints. Therefore, it is important that Operators installed on a single-node cluster can adjust accordingly and still run well.

diff --git a/operators/operator_sdk/osdk-installing-cli.adoc b/operators/operator_sdk/osdk-installing-cli.adoc
index d0bfb4742cb1..857d24cdc471 100644
--- a/operators/operator_sdk/osdk-installing-cli.adoc
+++ b/operators/operator_sdk/osdk-installing-cli.adoc
@@ -10,12 +10,12 @@ The Operator SDK provides a command-line interface (CLI) tool that Operator deve
include::snippets/osdk-deprecation.adoc[]
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
Operator authors with cluster administrator access to a Kubernetes-based cluster, such as {product-title},
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
-ifdef::openshift-dedicated,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
Operator authors with dedicated-admin access to {product-title}
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. link:https://kubebuilder.io/[Kubebuilder] is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work.
[NOTE]

diff --git a/operators/operator_sdk/osdk-leader-election.adoc b/operators/operator_sdk/osdk-leader-election.adoc
index e34681b25a39..9cd2d5c0e4e7 100644
--- a/operators/operator_sdk/osdk-leader-election.adoc
+++ b/operators/operator_sdk/osdk-leader-election.adoc
@@ -8,7 +8,7 @@ toc::[]
During the lifecycle of an Operator, it is possible that there may be more than one instance running at any given time, for example when rolling out an upgrade for the Operator. In such a scenario, it is necessary to avoid contention between multiple Operator instances using leader election. This ensures only one leader instance handles the reconciliation while the other instances are inactive but ready to take over when the leader steps down.
-There are two different leader election implementations to choose from, each with its own trade-off:
+There are two different leader election implementations to choose from, each with its own tradeoff:
Leader-for-life:: The leader pod only gives up leadership, using garbage collection, when it is deleted. This implementation precludes the possibility of two instances mistakenly running as leaders, a state also known as split brain.
However, this method can be subject to a delay in electing a new leader. For example, when the leader pod is on an unresponsive or partitioned node, you can specify `node.kubernetes.io/unreachable` and `node.kubernetes.io/not-ready` tolerations on the leader pod and use the `tolerationSeconds` value to dictate how long it takes for the leader pod to be deleted from the node and step down. These tolerations are added to the pod by default on admission with a `tolerationSeconds` value of 5 minutes. See the link:https://godoc.org/github.com/operator-framework/operator-sdk/pkg/leader[Leader-for-life] Go documentation for more.

diff --git a/operators/operator_sdk/osdk-monitoring-prometheus.adoc b/operators/operator_sdk/osdk-monitoring-prometheus.adoc
index 83f6efb33cc5..b1b292ca6193 100644
--- a/operators/operator_sdk/osdk-monitoring-prometheus.adoc
+++ b/operators/operator_sdk/osdk-monitoring-prometheus.adoc
@@ -8,7 +8,7 @@ toc::[]
// Dedicated-admins in OSD and ROSA don't have the permissions to complete the procedures in this assembly. Also, the procedures use the default Prometheus Operator in the openshift-monitoring project, which OSD/ROSA customers should not use.
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
This guide describes the built-in monitoring support provided by the Operator SDK using the Prometheus Operator and details usage for authors of Go-based and Ansible-based Operators.
include::snippets/osdk-deprecation.adoc[]
@@ -16,7 +16,7 @@ include::modules/osdk-monitoring-prometheus-operator-support.adoc[leveloffset=+1]
include::modules/osdk-monitoring-custom-metrics.adoc[leveloffset=+1]
include::modules/osdk-ansible-metrics.adoc[leveloffset=+1]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
ifdef::openshift-dedicated,openshift-rosa[]
// Since OSD/ROSA dedicated-admins can't do the procedures in this assembly, point to the OCP docs.
@@ -35,5 +35,9 @@ Do not use the Prometheus Operator in the `openshift-monitoring` project. Red Ha
.Additional resources
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-monitoring-custom-metrics_osdk-monitoring-prometheus[Exposing custom metrics for Go-based Operators] (OpenShift Container Platform documentation)
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-ansible-metrics_osdk-monitoring-prometheus[Exposing custom metrics for Ansible-based Operators] (OpenShift Container Platform documentation)
+// TODO-HCP remove the ifndef conditions on lines 39 and 41 for HCP after the Observability book is migrated and add back the HCP condition to lines 21 and 41
+ifndef::openshift-rosa-hcp[]
* xref:../../observability/monitoring/monitoring-overview.adoc#understanding-the-monitoring-stack_monitoring-overview[Understanding the monitoring stack] in {product-title}
+endif::openshift-rosa-hcp[]
endif::openshift-dedicated,openshift-rosa[]
+

diff --git a/operators/understanding/olm-multitenancy.adoc b/operators/understanding/olm-multitenancy.adoc
index b81b915a3039..0d9ed89db150 100644
--- a/operators/understanding/olm-multitenancy.adoc
+++ b/operators/understanding/olm-multitenancy.adoc
@@ -26,10 +26,10 @@ include::modules/olm-multitenancy-solution.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../operators/admin/olm-adding-operators-to-cluster.adoc#olm-preparing-operators-multitenant_olm-adding-operators-to-a-cluster[Preparing for multiple instances of an Operator for multitenant clusters]
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* xref:../../operators/admin/olm-creating-policy.adoc#olm-creating-policy[Allowing non-cluster administrators to install Operators]
* xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-restricted-networks-operatorhub_olm-managing-custom-catalogs[Disabling the default OperatorHub catalog sources]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[id="olm-colocation_{context}"]
== Operator colocation and Operator groups

diff --git a/operators/understanding/olm-packaging-format.adoc b/operators/understanding/olm-packaging-format.adoc
index f5dabaab71fc..7e6543cd0afa 100644
--- a/operators/understanding/olm-packaging-format.adoc
+++ b/operators/understanding/olm-packaging-format.adoc
@@ -17,8 +17,10 @@ include::modules/olm-dependencies.adoc[leveloffset=+2]
* xref:../../operators/understanding/olm/olm-understanding-dependency-resolution.adoc#olm-understanding-dependency-resolution[Operator Lifecycle Manager dependency resolution]
include::modules/olm-about-opm.adoc[leveloffset=+2]
-
+// TODO-HCP remove the conditions for HCP after the cli_tools book is migrated
+ifndef::openshift-rosa-hcp[]
* See xref:../../cli_reference/opm/cli-opm-install.adoc#cli-opm-install[CLI tools] for steps on installing the `opm` CLI.
+endif::openshift-rosa-hcp[]
ifdef::openshift-origin[]
[id="olm-packaging-format-addtl-resources"]
@@ -40,12 +42,12 @@ As of {product-title} 4.11, the default Red Hat-provided Operator catalog releas
The `opm` subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.
Many of the `opm` subcommands and flags for working with the SQLite database format, such as `opm index prune`, do not work with the file-based catalog format.
-ifndef::openshift-dedicated,openshift-rosa[]
-For more information about working with file-based catalogs, see xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs-fb[Managing custom catalogs] and xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#about-installing-oc-mirror-v2[Mirroring images for a disconnected installation by using the oc-mirror plugin v2].
-endif::openshift-dedicated,openshift-rosa[]
-ifdef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
+For more information about working with file-based catalogs, see xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs-fb[Managing custom catalogs] and xref:../../disconnected/mirroring/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin].
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
For more information about working with file-based catalogs, see xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs-fb[Managing custom catalogs].
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
====
include::modules/olm-fb-catalogs-structure.adoc[leveloffset=+2]
@@ -68,7 +70,9 @@ include::modules/olm-fb-catalogs-guidelines.adoc[leveloffset=+2]
=== CLI usage
For instructions about creating file-based catalogs by using the `opm` CLI, see xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-creating-fb-catalog-image_olm-managing-custom-catalogs[Managing custom catalogs].
-
+// TODO-HCP remove the conditions for HCP after the cli_tools book is migrated
+ifndef::openshift-rosa-hcp[]
For reference documentation about the `opm` CLI commands related to managing file-based catalogs, see xref:../../cli_reference/opm/cli-opm-ref.adoc#cli-opm-ref[CLI tools].
+endif::openshift-rosa-hcp[]
include::modules/olm-fb-catalogs-automation.adoc[leveloffset=+2]

diff --git a/operators/understanding/olm-rh-catalogs.adoc b/operators/understanding/olm-rh-catalogs.adoc
index dedfd12cdc70..443ee9bcbbf5 100644
--- a/operators/understanding/olm-rh-catalogs.adoc
+++ b/operators/understanding/olm-rh-catalogs.adoc
@@ -15,14 +15,14 @@ As of {product-title} 4.11, the default Red Hat-provided Operator catalog releas
The `opm` subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.
Many of the `opm` subcommands and flags for working with the SQLite database format, such as `opm index prune`, do not work with the file-based catalog format.
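For orientation on the file-based catalog workflow referenced in these hunks, a minimal sketch; the index image tag and directory name are hypothetical. `opm render` exports a catalog image to the declarative file-based format, and `opm validate` checks a file-based catalog directory.

[source,terminal]
----
$ opm render registry.redhat.io/redhat/redhat-operator-index:v4.18 -o yaml > my-catalog/catalog.yaml
$ opm validate my-catalog
----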
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
For more information about working with file-based catalogs, see xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs[Managing custom catalogs],
-xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Operator Framework packaging format], and xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#about-installing-oc-mirror-v2[Mirroring images for a disconnected installation by using the oc-mirror plugin v2].
+xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Operator Framework packaging format], and xref:../../disconnected/mirroring/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin].
-endif::openshift-dedicated,openshift-rosa[]
-ifdef::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
+ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
For more information about working with file-based catalogs, see xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs[Managing custom catalogs], and xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Operator Framework packaging format].
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
====
include::modules/olm-about-catalogs.adoc[leveloffset=+1]
@@ -32,8 +32,8 @@ include::modules/olm-about-catalogs.adoc[leveloffset=+1]
* xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs[Managing custom catalogs]
* xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Packaging format]
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* xref:../../disconnected/using-olm.adoc#olm-restricted-networks[Using Operator Lifecycle Manager in disconnected environments]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/olm-rh-catalogs.adoc[leveloffset=+1]

diff --git a/operators/understanding/olm/olm-operatorconditions.adoc b/operators/understanding/olm/olm-operatorconditions.adoc
index 0d592ce73726..5a148915c43f 100644
--- a/operators/understanding/olm/olm-operatorconditions.adoc
+++ b/operators/understanding/olm/olm-operatorconditions.adoc
@@ -18,7 +18,7 @@ include::modules/olm-supported-operatorconditions.adoc[leveloffset=+1]
* xref:../../../operators/admin/olm-managing-operatorconditions.adoc#olm-operatorconditions[Managing Operator conditions]
* xref:../../../operators/operator_sdk/osdk-generating-csvs.adoc#osdk-operatorconditions_osdk-generating-csvs[Enabling Operator conditions]
// The following xrefs point to topics that are not currently included in the OSD/ROSA docs.
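As background for the Operator conditions xrefs above, a minimal sketch of the override an administrator can set, assuming the `operators.coreos.com/v2` OperatorCondition API; all names and the reason text are placeholders. The override forces `Upgradeable` back to `True` when an Operator has declared itself not upgradeable.

[source,yaml]
----
apiVersion: operators.coreos.com/v2
kind: OperatorCondition
metadata:
  name: example-operator.v1.0.0
  namespace: example-operator-ns
spec:
  overrides:
  - type: Upgradeable
    status: "True"
    reason: "upgradeIsSafe"
    message: "This Operator can be safely upgraded."
----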
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* xref:../../../nodes/pods/nodes-pods-configuring.adoc#nodes-pods-configuring-pod-distruption-about_nodes-pods-configuring[Using pod disruption budgets to specify the number of pods that must be up]
* xref:../../../applications/deployments/route-based-deployment-strategies.adoc#deployments-graceful-termination_route-based-deployment-strategies[Graceful termination]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]

diff --git a/operators/understanding/olm/olm-understanding-olm.adoc b/operators/understanding/olm/olm-understanding-olm.adoc
index 11390a63c4e9..5f5384725662 100644
--- a/operators/understanding/olm/olm-understanding-olm.adoc
+++ b/operators/understanding/olm/olm-understanding-olm.adoc
@@ -21,40 +21,40 @@ include::modules/olm-catalogsource.adoc[leveloffset=+2]
* xref:../../../operators/understanding/olm/olm-understanding-dependency-resolution.adoc#olm-dependency-catalog-priority_olm-understanding-dependency-resolution[Catalog priority]
* xref:../../../operators/admin/olm-status.adoc#olm-cs-status-cli_olm-status[Viewing Operator catalog source status by using the CLI]
// This xref points to a topic that is not currently included in the OSD/ROSA docs.
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* xref:../../../authentication/understanding-and-managing-pod-security-admission.adoc#understanding-and-managing-pod-security-admission[Understanding and managing pod security admission]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* xref:../../../operators/admin/olm-cs-podsched.adoc#olm-cs-podsched[Catalog source pod scheduling]
include::modules/olm-catalogsource-image-template.adoc[leveloffset=+3]
include::modules/olm-cs-health.adoc[leveloffset=+3]
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[role="_additional-resources"]
.Additional resources
* xref:../../../operators/admin/olm-managing-custom-catalogs.adoc#olm-removing-catalogs_olm-managing-custom-catalogs[Removing custom catalogs]
* xref:../../../operators/admin/olm-managing-custom-catalogs.adoc#olm-restricted-networks-operatorhub_olm-managing-custom-catalogs[Disabling the default OperatorHub catalog sources]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/olm-subscription.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* xref:../../../operators/understanding/olm/olm-colocation.adoc#olm-colocation[Multitenancy and Operator colocation]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* xref:../../../operators/admin/olm-status.adoc#olm-status-viewing-cli_olm-status[Viewing Operator subscription status by using the CLI]
include::modules/olm-installplan.adoc[leveloffset=+2]
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
[role="_additional-resources"]
.Additional resources
* xref:../../../operators/understanding/olm/olm-colocation.adoc#olm-colocation[Multitenancy and Operator colocation]
* xref:../../../operators/admin/olm-creating-policy.adoc#olm-creating-policy[Allowing non-cluster administrators to install Operators]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/olm-operatorgroups-about.adoc[leveloffset=+2]
.Additional resources

diff --git a/operators/understanding/olm/olm-understanding-operatorgroups.adoc b/operators/understanding/olm/olm-understanding-operatorgroups.adoc
index be19ef7c7b3c..e7fdc406ee2e 100644
--- a/operators/understanding/olm/olm-understanding-operatorgroups.adoc
+++ b/operators/understanding/olm/olm-understanding-operatorgroups.adoc
@@ -20,12 +20,12 @@ include::modules/olm-operatorgroups-intersections.adoc[leveloffset=+1]
include::modules/olm-operatorgroups-limitations.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* xref:../../../operators/understanding/olm/olm-colocation.adoc#olm-colocation[Operator Lifecycle Manager (OLM) -> Multitenancy and Operator colocation]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* xref:../../../operators/understanding/olm-multitenancy.adoc#olm-multitenancy[Operators in multitenant clusters]
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* xref:../../../operators/admin/olm-creating-policy.adoc#olm-creating-policy[Allowing non-cluster administrators to install Operators]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
include::modules/olm-operatorgroups-troubleshooting.adoc[leveloffset=+1]

diff --git a/operators/understanding/olm/olm-webhooks.adoc b/operators/understanding/olm/olm-webhooks.adoc
index cdaf8779a43e..4c0490b8dcfc 100644
--- a/operators/understanding/olm/olm-webhooks.adoc
+++ b/operators/understanding/olm/olm-webhooks.adoc
@@ -14,9 +14,9 @@ See xref:../../../operators/operator_sdk/osdk-generating-csvs.adoc#olm-defining-
[role="_additional-resources"]
== Additional resources
-ifndef::openshift-dedicated,openshift-rosa[]
+ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* xref:../../../architecture/admission-plug-ins.adoc#admission-webhook-types_admission-plug-ins[Types of webhook admission plugins]
-endif::openshift-dedicated,openshift-rosa[]
+endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* Kubernetes documentation:
** link:https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook[Validating admission webhooks]
** link:https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook[Mutating admission webhooks]

From 4e87176b5e9641d97066ba95b70163b7459273e9 Mon Sep 17 00:00:00 2001
From: Javier Pena
Date: Fri, 14 Feb 2025 16:00:47 +0100
Subject: [PATCH 227/669] Fixes to the PTP Events v2 documentation

- apiVersion in the PtpOperatorConfig resource needs to be a string
- getEvent() must always write a response header rather than implicitly returning HTTP code 200
- In createSubscription(), we need to use a different localAPIAddr to get the events
---
 modules/ptp-events-consumer-application-v2.adoc | 5 ++---
 snippets/ptp-event-config-api-v2.adoc           | 6 +++---
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/modules/ptp-events-consumer-application-v2.adoc b/modules/ptp-events-consumer-application-v2.adoc
index 3288fa9ca171..e67791640668 100644
--- a/modules/ptp-events-consumer-application-v2.adoc
+++ b/modules/ptp-events-consumer-application-v2.adoc
@@ -32,9 +32,8 @@ func getEvent(w http.ResponseWriter, req *http.Request) {
 	if e != "" {
 		processEvent(bodyBytes)
 		log.Infof("received event %s", string(bodyBytes))
-	} else {
-		w.WriteHeader(http.StatusNoContent)
 	}
+	w.WriteHeader(http.StatusNoContent)
 }
----
@@ -58,7 +57,7 @@ s5,_:=createsubscription("/cluster/node//sync/ptp-status/clock-class"
 func createSubscription(resourceAddress string) (sub pubsub.PubSub, err error) {
 	var status int
 	apiPath := "/api/ocloudNotifications/v2/"
-	localAPIAddr := "localhost:8989" // vDU service API address
+	localAPIAddr := "consumer-events-subscription-service.cloud-events.svc.cluster.local:9043" // vDU service API address
 	apiAddr := "ptp-event-publisher-service-.openshift-ptp.svc.cluster.local:9043" // <1>
 	apiVersion := "2.0"

diff --git a/snippets/ptp-event-config-api-v2.adoc b/snippets/ptp-event-config-api-v2.adoc
index b37703328116..0262852f7b13 100644
--- a/snippets/ptp-event-config-api-v2.adoc
+++ b/snippets/ptp-event-config-api-v2.adoc
@@ -10,9 +10,9 @@ spec:
   daemonNodeSelector:
     node-role.kubernetes.io/worker: ""
   ptpEventConfig:
-    apiVersion: 2.0 <1>
+    apiVersion: "2.0" <1>
     enableEventPublisher: true <2>
----
-<1> Enable the PTP events REST API v2 for the PTP event producer by setting the `ptpEventConfig.apiVersion` to 2.0.
-The default value is 1.0.
+<1> Enable the PTP events REST API v2 for the PTP event producer by setting the `ptpEventConfig.apiVersion` to "2.0".
+The default value is "1.0".
 <2> Enable PTP fast event notifications by setting `enableEventPublisher` to `true`.

From 3792c82d39fad06616590723858125fcf21e124f Mon Sep 17 00:00:00 2001
From: Eliska Romanova
Date: Fri, 14 Feb 2025 10:19:56 +0100
Subject: [PATCH 228/669] OBSDOCS-1443: Monitoring 4.18 Update support version matrix
---
 ...toring-support-version-matrix-for-monitoring-components.adoc | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/modules/monitoring-support-version-matrix-for-monitoring-components.adoc b/modules/monitoring-support-version-matrix-for-monitoring-components.adoc
index 08bd4ba56e07..f34d9fe6c3de 100644
--- a/modules/monitoring-support-version-matrix-for-monitoring-components.adoc
+++ b/modules/monitoring-support-version-matrix-for-monitoring-components.adoc
@@ -12,6 +12,8 @@ The following matrix contains information about versions of monitoring component
|===
|{product-title} |Prometheus Operator |Prometheus |Metrics Server |Alertmanager |kube-state-metrics agent |monitoring-plugin |node-exporter agent |Thanos

+|4.18 |0.78.1 |2.55.1 |0.7.2 |0.27.0 |2.13.0 |1.0.0 |1.8.2 |0.36.1
+
|4.17 |0.75.2 |2.53.1 |0.7.1 |0.27.0 |2.13.0 |1.0.0 |1.8.2 |0.35.1

|4.16 |0.73.2 |2.52.0 |0.7.1 |0.26.0 |2.12.0 |1.0.0 |1.8.0 |0.35.0

From b40cf92e29fe19055c0dc6f051e9fda2f8a57837 Mon Sep 17 00:00:00 2001
From: Ronan Hennessy
Date: Thu, 13 Feb 2025 14:14:55 +0000
Subject: [PATCH 229/669] TELCODOCS-2207: Updating NROP section to reflect new SELinux policy
---
 modules/cnf-creating-nrop-cr.adoc | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/modules/cnf-creating-nrop-cr.adoc b/modules/cnf-creating-nrop-cr.adoc
index d53143f378bc..67759f1ca19d 100644
--- a/modules/cnf-creating-nrop-cr.adoc
+++ b/modules/cnf-creating-nrop-cr.adoc
@@ -41,11 +41,6 @@ spec:
----
$ oc create -f nrop.yaml
----
-+
-[NOTE]
-====
-Creating the `NUMAResourcesOperator` triggers a reboot on the corresponding machine config pool and therefore the affected node.
-====
.Verification

From 9573f78d58fd1ce45f3de2e19d86c4177f4aab Mon Sep 17 00:00:00 2001
From: dfitzmau
Date: Wed, 12 Feb 2025 16:12:28 +0000
Subject: [PATCH 230/669] OCPBUGS-49995: Added state:down important notice to No IP address
---
 modules/virt-example-nmstate-IP-management.adoc | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/modules/virt-example-nmstate-IP-management.adoc b/modules/virt-example-nmstate-IP-management.adoc
index 9269d7b66682..37a05ff9b45e 100644
--- a/modules/virt-example-nmstate-IP-management.adoc
+++ b/modules/virt-example-nmstate-IP-management.adoc
@@ -51,6 +51,11 @@ The following snippet ensures that the interface has no IP address:
# ...
----
+[IMPORTANT]
+====
+Always set the `state` parameter to `up` when you set both the `ipv4.enabled` and the `ipv6.enabled` parameters to `false` to disable an interface. If you set `state: down` with this configuration, the interface receives a DHCP IP address because of automatic DHCP assignment.
+====
+
[id="virt-example-nmstate-IP-management-dhcp_{context}"]
== Dynamic host configuration

From a3bafa410eaa14e99cdd Mon Sep 17 00:00:00 2001
From: SNiemann15
Date: Thu, 13 Feb 2025 15:10:49 +0100
Subject: [PATCH 231/669] add steps to enable FIPS mode on IBM Z
---
 modules/adding-ibm-z-lpar-agent.adoc          |  8 ++--
 ...installer-configuring-fips-compliance.adoc | 10 +++++
 modules/installing-ocp-agent-ibm-z-kvm.adoc   | 39 +++++++++++++++++++
 modules/installing-ocp-agent-ibm-z-zvm.adoc   | 12 +++---
 4 files changed, 61 insertions(+), 8 deletions(-)

diff --git a/modules/adding-ibm-z-lpar-agent.adoc b/modules/adding-ibm-z-lpar-agent.adoc
index 139758db37e7..30dcb264ba04 100644
--- a/modules/adding-ibm-z-lpar-agent.adoc
+++ b/modules/adding-ibm-z-lpar-agent.adoc
@@ -23,17 +23,19 @@ rd.neednet=1 cio_ignore=all,!condev \
console=ttysclp0 \
ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http:///rhcos--live-rootfs..img \// <1>
-coreos.inst.persistent-kargs=console=ttysclp0
+coreos.inst.persistent-kargs=console=ttysclp0 \
ip=::::::none nameserver= \// <2>
rd.znet=qeth,,layer2=1 rd.= \// <3>
+fips=1 \// <4>
-zfcp.allow_lun_scan=0
-ai.ip_cfg_override=1 \//
+zfcp.allow_lun_scan=0 \
+ai.ip_cfg_override=1 \
random.trust_cpu=on rd.luks.options=discard
----
<1> For the `coreos.live.rootfs_url` artifact, specify the matching `rootfs` artifact for the `kernel` and `initramfs` that you are starting. Only HTTP and HTTPS protocols are supported.
<2> For the `ip` parameter, manually assign the IP address, as described in _Installing a cluster with z/VM on IBM Z and IBM LinuxONE_.
<3> For installations on DASD-type disks, use `rd.dasd` to specify the DASD where {op-system-first} is to be installed. For installations on FCP-type disks, use `rd.zfcp=,,` to specify the FCP disk where {op-system} is to be installed.
+<4> To enable FIPS mode, specify `fips=1`. This entry is required in addition to setting the `fips` parameter to `true` in the `install-config.yaml` file.
+
[NOTE]
====

diff --git a/modules/agent-installer-configuring-fips-compliance.adoc b/modules/agent-installer-configuring-fips-compliance.adoc
index 3060d4aa4745..d8c712aab5e1 100644
--- a/modules/agent-installer-configuring-fips-compliance.adoc
+++ b/modules/agent-installer-configuring-fips-compliance.adoc
@@ -10,6 +10,11 @@ During a cluster deployment, the Federal Information Processing Standards (FIPS) change is applied when the Red Hat Enterprise Linux CoreOS (RHCOS) machines are deployed in your cluster.
For Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. +[IMPORTANT] +==== +{product-title} requires the use of a FIPS-capable installation binary to install a cluster in FIPS mode. +==== + You can enable FIPS mode through the preferred method of `install-config.yaml` and `agent-config.yaml`: . You must set value of the `fips` field to `True` in the `install-config.yaml` file: @@ -24,6 +29,11 @@ metadata: name: sno-cluster fips: True ---- ++ +[IMPORTANT] +==== +To enable FIPS mode on {ibm-z-name} clusters, you must also enable FIPS in either the `.parm` file or using `virt-install` as outlined in the procedures for manually adding {ibm-z-name} agents. +==== . Optional: If you are using the {ztp} manifests, you must set the value of `fips` as `True` in the `Agent-install.openshift.io/install-config-overrides` field in the `agent-cluster-install.yaml` file: diff --git a/modules/installing-ocp-agent-ibm-z-kvm.adoc b/modules/installing-ocp-agent-ibm-z-kvm.adoc index 81e2995b4296..cb53329e3049 100644 --- a/modules/installing-ocp-agent-ibm-z-kvm.adoc +++ b/modules/installing-ocp-agent-ibm-z-kvm.adoc @@ -49,10 +49,12 @@ $ virt-install \ --osinfo detect=on,require=off ---- <1> For the `--location` parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server. + endif::pxe-boot[] ifndef::pxe-boot[] + +.ISO boot [source,terminal] ---- $ virt-install @@ -72,6 +74,43 @@ $ virt-install <1> For the `--cdrom` parameter, specify the location of the ISO image on the HTTP or HTTPS server. endif::pxe-boot[] +. Optional: Enable FIPS mode. ++ +To enable FIPS mode on {ibm-z-name} clusters with {op-system-base} KVM you must use PXE boot instead and run the `virt-install` command with the following parameters: ++ +.PXE boot +[source,terminal] +---- +$ virt-install \ + --name \ + --autostart \ + --ram=16384 \ + --cpu host \ + --vcpus=8 \ + --location ,kernel=kernel.img,initrd=initrd.img \// <1> + --disk \ + --network network:macvtap ,mac= \ + --graphics none \ + --noautoconsole \ + --wait=-1 \ + --extra-args "rd.neednet=1 nameserver=" \ + --extra-args "ip=:::::enc1:none" \ + --extra-args "coreos.live.rootfs_url=http://:8080/agent.s390x-rootfs.img" \ + --extra-args "random.trust_cpu=on rd.luks.options=discard" \ + --extra-args "ignition.firstboot ignition.platform.id=metal" \ + --extra-args "console=tty1 console=ttyS1,115200n8" \ + --extra-args "coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8" \ + --extra-args "fips=1" \// <2> + --osinfo detect=on,require=off +---- +<1> For the `--location` parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server. +<2> To enable FIPS mode, specify `fips=1`. This entry is required in addition to setting the `fips` parameter to `true` in the `install-config.yaml` file. ++ +[NOTE] +==== +Currently, only PXE boot is supported to enable FIPS mode on {ibm-z-name}. +==== + ifeval::["{context}" == "prepare-pxe-assets-agent"] :!pxe-boot: endif::[] \ No newline at end of file diff --git a/modules/installing-ocp-agent-ibm-z-zvm.adoc b/modules/installing-ocp-agent-ibm-z-zvm.adoc index 7ae89676ca47..53184b69c001 100644 --- a/modules/installing-ocp-agent-ibm-z-zvm.adoc +++ b/modules/installing-ocp-agent-ibm-z-zvm.adoc @@ -23,13 +23,14 @@ Only use this procedure for {ibm-z-name} clusters with z/VM. 
---- rd.neednet=1 \ console=ttysclp0 \ -coreos.live.rootfs_url= \ <1> -ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ <2> -zfcp.allow_lun_scan=0 \ <3> +coreos.live.rootfs_url= \// <1> +ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \// <2> +zfcp.allow_lun_scan=0 \// <3> ai.ip_cfg_override=1 \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ -rd.dasd=0.0.4411 \ <4> -rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ <5> +rd.dasd=0.0.4411 \// <4> +rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \// <5> +fips=1 \// <6> random.trust_cpu=on rd.luks.options=discard \ ignition.firstboot ignition.platform.id=metal \ console=tty1 console=ttyS1,115200n8 \ @@ -40,6 +41,7 @@ coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8" <3> The default is `1`. Omit this entry when using an OSA network adapter. <4> For installations on DASD-type disks, use `rd.dasd` to specify the DASD where {op-system-first} is to be installed. Omit this entry for FCP-type disks. <5> For installations on FCP-type disks, use `rd.zfcp=,,` to specify the FCP disk where {op-system} is to be installed. Omit this entry for DASD-type disks. +<6> To enable FIPS mode, specify `fips=1`. This entry is required in addition to setting the `fips` parameter to `true` in the `install-config.yaml` file. + Leave all other parameters unchanged. From ff315f605c826b0f036ab175cab3f609cb189dc6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E2=80=9CShauna=20Diaz=E2=80=9D?= Date: Thu, 6 Feb 2025 12:55:58 -0500 Subject: [PATCH 232/669] OSDOCS-44921: simplify mirrring docs MicroShift --- ...croshift-configuring-hosts-for-mirror.adoc | 2 +- ...croshift-downloading-container-images.adoc | 6 +-- ...t-get-mirror-reg-container-image-list.adoc | 2 +- .../microshift-mirror-container-images.adoc | 2 +- modules/microshift-mirroring-prereqs.adoc | 2 +- ...microshift-upload-cont2-mirror-script.adoc | 27 ------------- ...microshift-uploading-images-to-mirror.adoc | 39 +++++++------------ 7 files changed, 21 insertions(+), 59 deletions(-) delete mode 100644 modules/microshift-upload-cont2-mirror-script.adoc diff --git a/modules/microshift-configuring-hosts-for-mirror.adoc b/modules/microshift-configuring-hosts-for-mirror.adoc index 2bf752f9cbf0..bbf7b625ae4c 100644 --- a/modules/microshift-configuring-hosts-for-mirror.adoc +++ b/modules/microshift-configuring-hosts-for-mirror.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * microshift/running_applications/microshift-deploy-with-mirror-registry.adoc +// * microshift/microshift_install_rpm_ostree/microshift-deploy-with-mirror-registry.adoc :_mod-docs-content-type: PROCEDURE [id="microshift-configuring-hosts-for-mirror_{context}"] diff --git a/modules/microshift-downloading-container-images.adoc b/modules/microshift-downloading-container-images.adoc index 274f08208912..077c16fca78b 100644 --- a/modules/microshift-downloading-container-images.adoc +++ b/modules/microshift-downloading-container-images.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * microshift/running_applications/microshift-deploy-with-mirror-registry.adoc +// * microshift/microshift_install_rpm_ostree/microshift-deploy-with-mirror-registry.adoc :_mod-docs-content-type: PROCEDURE [id="microshift-downloading-container-images_{context}"] @@ -11,7 +11,7 @@ After you have located the container list and completed the mirroring prerequisi .Prerequisites * You are logged into a host with access to the internet. 
-* You have ensured that the `.pull-secret-mirror.json` file and `microshift-containers` directory contents are available locally. +* The `.pull-secret-mirror.json` file and `microshift-containers` directory contents are available locally. .Procedure @@ -61,5 +61,3 @@ while read -r src_img ; do done < "${IMAGE_LIST_FILE}" ---- - -. Transfer the image set to the target environment, such as air-gapped site. Then you can upload the image set into the mirror registry. \ No newline at end of file diff --git a/modules/microshift-get-mirror-reg-container-image-list.adoc b/modules/microshift-get-mirror-reg-container-image-list.adoc index 6e65827d3d9d..801a193a6de4 100644 --- a/modules/microshift-get-mirror-reg-container-image-list.adoc +++ b/modules/microshift-get-mirror-reg-container-image-list.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * microshift/running_applications/microshift-deploy-with-mirror-registry.adoc +// * microshift/microshift_install_rpm_ostree/microshift-deploy-with-mirror-registry.adoc :_mod-docs-content-type: PROCEDURE [id="microshift-get-mirror-reg-container-image-list_{context}"] diff --git a/modules/microshift-mirror-container-images.adoc b/modules/microshift-mirror-container-images.adoc index 4c0ad71230f5..7c2418784246 100644 --- a/modules/microshift-mirror-container-images.adoc +++ b/modules/microshift-mirror-container-images.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * microshift/running_applications/microshift-deploy-with-mirror-registry.adoc +// * microshift/microshift_install_rpm_ostree/microshift-deploy-with-mirror-registry.adoc :_mod-docs-content-type: CONCEPT [id="microshift-mirror-container-images_{context}"] diff --git a/modules/microshift-mirroring-prereqs.adoc b/modules/microshift-mirroring-prereqs.adoc index 12e9d925e239..6ce9aed5150f 100644 --- a/modules/microshift-mirroring-prereqs.adoc +++ b/modules/microshift-mirroring-prereqs.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * microshift/running_applications/microshift-deploy-with-mirror-registry.adoc +// * microshift/microshift_install_rpm_ostree/microshift-deploy-with-mirror-registry.adoc :_mod-docs-content-type: CONCEPT [id="microshift-configuring-mirroring-prereqs_{context}"] diff --git a/modules/microshift-upload-cont2-mirror-script.adoc b/modules/microshift-upload-cont2-mirror-script.adoc deleted file mode 100644 index d0bd0d378f18..000000000000 --- a/modules/microshift-upload-cont2-mirror-script.adoc +++ /dev/null @@ -1,27 +0,0 @@ -[source,sh] ----- -# Use timestamp and counter as a tag on the target images to avoid -# their overwrite by the 'latest' automatic tagging -image_tag=mirror-$(date +%y%m%d%H%M%S) -image_cnt=1 - -pushd "${IMAGE_LOCAL_DIR}" >/dev/null -while read -r src_manifest ; do - # Remove the manifest.json file name - src_img=$(dirname "${src_manifest}") - # Add the target registry prefix and remove SHA - dst_img="${TARGET_REGISTRY}/${src_img}" - dst_img=$(echo "${dst_img}" | awk -F'@' '{print $1}') - - # Run the image upload command - echo "Uploading '${src_img}' to '${dst_img}'" - skopeo copy --all --quiet \ - --preserve-digests \ - --authfile "${IMAGE_PULL_FILE}" \ - dir://"${IMAGE_LOCAL_DIR}/${src_img}" docker://"${dst_img}:${image_tag}-${image_cnt}" - # Increment the counter - (( image_cnt += 1 )) - -done < <(find . 
-type f -name manifest.json -printf '%P\n')
-popd >/dev/null
----
\ No newline at end of file
diff --git a/modules/microshift-uploading-images-to-mirror.adoc b/modules/microshift-uploading-images-to-mirror.adoc
index 078f6e9c40b4..7e72ea5b147c 100644
--- a/modules/microshift-uploading-images-to-mirror.adoc
+++ b/modules/microshift-uploading-images-to-mirror.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * microshift/running_applications/microshift-deploy-with-mirror-registry.adoc
+// * microshift/microshift_install_rpm_ostree/microshift-deploy-with-mirror-registry.adoc

 :_mod-docs-content-type: PROCEDURE
 [id="microshift-uploading-container-images-to-mirror_{context}"]
@@ -39,38 +39,29 @@ $ IMAGE_LOCAL_DIR=~/microshift-containers
 . Set the environment variables pointing to the mirror registry URL for uploading the container images:
+
-[source,terminal]
+[source,terminal,subs="+quotes"]
 ----
-$ TARGET_REGISTRY=<registry_host>:<port> <1>
+$ TARGET_REGISTRY=_<registry_host>_:_<port>_ # <1>
 ----
-<1> Replace `<registry_host>:<port>` with the host name and port of your mirror registry server.
+<1> Replace `_<registry_host>_:_<port>_` with the host name and port of your mirror registry server.

 . Run the following script to upload the container images to the `${TARGET_REGISTRY}` mirror registry:
+
 [source,terminal]
 ----
-image_tag=mirror-$(date +%y%m%d%H%M%S)
-image_cnt=1
-    # Uses timestamp and counter as a tag on the target images to avoid
-    # their overwrite by the 'latest' automatic tagging
-
 pushd "${IMAGE_LOCAL_DIR}" >/dev/null
 while read -r src_manifest ; do
-    # Remove the manifest.json file name
-    src_img=$(dirname "${src_manifest}")
-    # Add the target registry prefix and remove SHA
-    dst_img="${TARGET_REGISTRY}/${src_img}"
-    dst_img=$(echo "${dst_img}" | awk -F'@' '{print $1}')
-
-    # Run the image upload command
-    echo "Uploading '${src_img}' to '${dst_img}'"
-    skopeo copy --all --quiet \
-        --preserve-digests \
-        --authfile "${IMAGE_PULL_FILE}" \
-        dir://"${IMAGE_LOCAL_DIR}/${src_img}" docker://"${dst_img}:${image_tag}-${image_cnt}"
-    # Increment the counter
-    (( image_cnt += 1 ))
-
+    # Remove the manifest.json file name; 'local' is valid only inside a
+    # function, so use plain assignments in this top-level loop
+    src_img=$(dirname "${src_manifest}")
+    # Add the target registry prefix and remove SHA
+    dst_img="${TARGET_REGISTRY}/${src_img}"
+    dst_img_no_tag="${TARGET_REGISTRY}/${src_img%%[@:]*}"
+    # Run the image upload
+    echo "Uploading '${src_img}' to '${dst_img}'"
+    skopeo copy --all --quiet \
+        --preserve-digests \
+        --authfile "${IMAGE_PULL_FILE}" \
+        dir://"${IMAGE_LOCAL_DIR}/${src_img}" docker://"${dst_img}"
 done < <(find .
-type f -name manifest.json -printf '%P\n') popd >/dev/null ---- From 25d6e5e73d99259fd32f46ccefa574a0d35f2752 Mon Sep 17 00:00:00 2001 From: Steven Smith Date: Tue, 19 Nov 2024 14:27:46 -0500 Subject: [PATCH 233/669] Adds cudn docs two --- modules/cudn-status-conditions.adoc | 84 +++++++++ modules/nw-cudn-best-practices.adoc | 31 ++++ modules/nw-cudn-cr.adoc | 169 ++++++++++++++++++ modules/nw-udn-additional-config-details.adoc | 2 +- modules/nw-udn-best-practices.adoc | 2 +- modules/nw-udn-cr.adoc | 8 +- modules/nw-udn-examples.adoc | 2 +- modules/nw-udn-limitations.adoc | 2 +- .../about-user-defined-networks.adoc | 13 ++ 9 files changed, 305 insertions(+), 8 deletions(-) create mode 100644 modules/cudn-status-conditions.adoc create mode 100644 modules/nw-cudn-best-practices.adoc create mode 100644 modules/nw-cudn-cr.adoc diff --git a/modules/cudn-status-conditions.adoc b/modules/cudn-status-conditions.adoc new file mode 100644 index 000000000000..e059b8769bf6 --- /dev/null +++ b/modules/cudn-status-conditions.adoc @@ -0,0 +1,84 @@ +//module included in the following assembly: +// +// * networking/multiple_networks/primary_networks/about-user-defined-networks.adoc + +:_mod-docs-content-type: REFERENCE +[id="cudn-status-conditions_{context}"] += ClusterUserDefinedNetwork status condition types + +The following tables explain the status condition types returned for `ClusterUserDefinedNetwork` CRs when describing the resource. These conditions can be used to troubleshoot your deployment. + +//The following table is subject to change and will be updated accordingly. + +.NetworkCreated condition types +[cols="2a,2a,3a,6a",options="header"] +|=== + +|Condition type +|Status +2+|Reason and Message + +.3+|`NetworkCreated` +.3+| `True` +2+|When `True`, the following reason and message is returned: +h|Reason +h|Message + +|`NetworkAttachmentDefinitionCreated` +|'NetworkAttachmentDefinition has been created in following namespaces: [example-namespace-1, example-namespace-2, example-namespace-3]'` + +.9+|`NetworkCreated` +.9+| `False` +2+|When `False`, one of the following messages is returned: +h|Reason +h|Message + +|`SyncError` +|`failed to generate NetworkAttachmentDefinition` + +|`SyncError` +|`failed to update NetworkAttachmentDefinition` + +|`SyncError` +|`primary network already exist in namespace "": ""` + +|`SyncError` +|`failed to create NetworkAttachmentDefinition: create NAD error` + +|`SyncError` +|`foreign NetworkAttachmentDefinition with the desired name already exist` + +|`SyncError` +|`failed to add finalizer to UserDefinedNetwork` + +|`NetworkAttachmentDefinitionDeleted` +|`NetworkAttachmentDefinition is being deleted: [/]` +|=== + +.NetworkAllocationSucceeded condition types +[cols="2a,2a,3a,6a",options="header"] +|=== + +|Condition type +|Status +2+|Reason and Message + +.3+|`NetworkAllocationSucceeded` +.3+| `True` +2+|When `True`, the following reason and message is returned: +h|Reason +h|Message + +|`NetworkAllocationSucceeded` +|`Network allocation succeeded for all synced nodes.` + +.3+|`NetworkAllocationSucceeded` +.3+| `False` +2+|When `False`, the following message is returned: +h|Reason +h|Message + +|`InternalError` +|`Network allocation failed for at least one node: [], check UDN events for more info.` + +|=== diff --git a/modules/nw-cudn-best-practices.adoc b/modules/nw-cudn-best-practices.adoc new file mode 100644 index 000000000000..b9bb5f9b2cde --- /dev/null +++ b/modules/nw-cudn-best-practices.adoc @@ -0,0 +1,31 @@ +//module included in the following assembly: 
+//
+// * networking/multiple_networks/primary_networks/about-user-defined-networks.adoc

+:_mod-docs-content-type: CONCEPT
+[id="considerations-for-cudn_{context}"]
+= Best practices for ClusterUserDefinedNetwork CRs
+
+Before setting up a `ClusterUserDefinedNetwork` custom resource (CR), users should consider the following information:
+
+* A `ClusterUserDefinedNetwork` CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network.
+
+* `ClusterUserDefinedNetwork` CRs should not be created in the default namespace. This can result in no isolation and, as a result, could introduce security risks to the cluster.
+
+* `ClusterUserDefinedNetwork` CRs should not target `openshift-*` namespaces.
+
+* {product-title} administrators should be aware that all namespaces of a cluster are selected when one of the following conditions is met:
+
+** The `matchLabels` selector is left empty.
+** The `matchExpressions` selector is left empty.
+** The `namespaceSelector` is initialized, but does not specify `matchExpressions` or `matchLabels`. For example: `namespaceSelector: {}`.
+
+* For primary networks, the namespace used for the `ClusterUserDefinedNetwork` CR must include the `k8s.ovn.org/primary-user-defined-network` label. This label cannot be updated, and can only be added when the namespace is created. The following conditions apply with the `k8s.ovn.org/primary-user-defined-network` namespace label:
+
+** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a pod is created, the pod attaches itself to the default network.
+
+** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `ClusterUserDefinedNetwork` CR is created that matches the namespace, the CUDN reports an error status and the network is not created.
+
+** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `ClusterUserDefinedNetwork` CR already exists, a pod in the namespace is created and attached to the default network.
+
+** If the namespace _has_ the label, and a primary `ClusterUserDefinedNetwork` CR does not exist, a pod in the namespace is not created until the `ClusterUserDefinedNetwork` CR is created.
\ No newline at end of file
diff --git a/modules/nw-cudn-cr.adoc b/modules/nw-cudn-cr.adoc
new file mode 100644
index 000000000000..fdad6c0fd93a
--- /dev/null
+++ b/modules/nw-cudn-cr.adoc
@@ -0,0 +1,169 @@
+//module included in the following assembly:
+//
+// * networking/multiple_networks/primary_networks/about-user-defined-networks.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="nw-cudn-cr_{context}"]
+= Creating a ClusterUserDefinedNetwork custom resource
+
+The following procedure creates a `ClusterUserDefinedNetwork` custom resource (CR). Based upon your use case, create your request using either the `cluster-layer-two-udn.yaml` example for a `Layer2` topology type or the `cluster-layer-three-udn.yaml` example for a `Layer3` topology type.
+
+[IMPORTANT]
+====
+* The `ClusterUserDefinedNetwork` CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network.
+* {VirtProductName} only supports the `Layer2` topology.
+==== + +.Prerequisites + +* You have logged in as a user with `cluster-admin` privileges. + +.Procedure + +. Optional: For a `ClusterUserDefinedNetwork` CR that uses a primary network, create a namespace with the `k8s.ovn.org/primary-user-defined-network` label by entering the following command: ++ +[source,yaml] +---- +$ cat << EOF | oc apply -f - +apiVersion: v1 +kind: Namespace +metadata: + name: + labels: + k8s.ovn.org/primary-user-defined-network: "" +EOF +---- + +. Create a request for either a `Layer2` or `Layer3` topology type cluster-wide user-defined network: + +.. Create a YAML file, such as `cluster-layer-two-udn.yaml`, to define your request for a `Layer2` topology as in the following example: ++ +[source, yaml] +---- +apiVersion: k8s.ovn.org/v1 +kind: ClusterUserDefinedNetwork +metadata: + name: # <1> +spec: + namespaceSelector: # <2> + matchLabels: # <3> + - "":"" # <4> + - "":"" # <4> + network: # <5> + topology: Layer2 # <6> + layer2: # <7> + role: Primary # <8> + subnets: + - "2001:db8::/64" + - "10.100.0.0/16" # <9> +---- +<1> Name of your `ClusterUserDefinedNetwork` custom resource. +<2> A label query over the set of namespaces that the cluster UDN applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces. +<3> Uses the `matchLabels` selector type, where terms are evaluated with an `AND` relationship. +<4> Because the `matchLabels` selector type is used, provisions namespaces matching both `` _and_ ``. +<5> Describes the network configuration. +<6> The `topology` field describes the network configuration; accepted values are `Layer2` and `Layer3`. Specifying a `Layer2` topology type creates one logical switch that is shared by all nodes. +<7> This field specifies the topology configuration. It can be `layer2` or `layer3`. +<8> Specifies `Primary` or `Secondary`. `Primary` is the only `role` specification supported in {product-version}. +<9> For `Layer2` topology types the following specifies config details for the `subnet` field: ++ +* The subnets field is optional. +* The subnets field is of type `string` and accepts standard CIDR formats for both IPv4 and IPv6. +* The subnets field accepts one or two items. For two items, they must be of a different family. For example, subnets values of `10.100.0.0/16` and `2001:db8::/64`. +* `Layer2` subnets can be omitted. If omitted, users must configure static IP addresses for the pods. As a consequence, port security only prevents MAC spoofing. For more information, see "Configuring pods with a static IP address". ++ +.. Create a YAML file, such as `cluster-layer-three-udn.yaml`, to define your request for a `Layer3` topology as in the following example: ++ +[source, yaml] +---- +apiVersion: k8s.ovn.org/v1 +kind: ClusterUserDefinedNetwork +metadata: + name: # <1> +spec: + namespaceSelector: # <2> + matchExpressions: # <3> + - key: kubernetes.io/metadata.name # <4> + operator: In # <5> + values: [", "] # <6> + network: # <7> + topology: Layer3 # <8> + layer3: # <9> + role: Primary # <10> + subnets: # <11> + - cidr: 10.100.0.0/16 + hostSubnet: 64 +---- +<1> Name of your `ClusterUserDefinedNetwork` custom resource. +<2> A label query over the set of namespaces that the cluster UDN applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces. +<3> Uses the `matchExpressions` selector type, where terms are evaluated with an _*OR*_ relationship. +<4> Specifies the label key to match. +<5> Specifies the operator. 
Valid values include: `In`, `NotIn`, `Exists`, and `DoesNotExist`.
+<6> Because the `matchExpressions` type is used, provisions namespaces matching either `<example_namespace_one>` or `<example_namespace_two>`.
+<7> Describes the network configuration.
+<8> The `topology` field describes the network configuration; accepted values are `Layer2` and `Layer3`. Specifying a `Layer3` topology type creates a layer 2 segment per node, each with a different subnet. Layer 3 routing is used to interconnect node subnets.
+<9> This field specifies the topology configuration. Valid values are `layer2` or `layer3`.
+<10> Specifies a `Primary` or `Secondary` role. `Primary` is the only `role` specification supported in {product-version}.
+<11> For `Layer3` topology types the following specifies config details for the `subnets` field:
++
+* The `subnets` field is mandatory.
+* The type for the `subnets` field is `cidr` and `hostSubnet`:
+** `cidr` is the cluster subnet and accepts a string value.
+** `hostSubnet` specifies the per-node subnet prefix that the cluster subnet is split into.
+** For IPv6, only a `/64` length is supported for `hostSubnet`.
++
+. Apply your request by running the following command:
++
+[source,terminal]
+----
+$ oc create --validate=true -f <file_name>.yaml
+----
++
+Where `<file_name>.yaml` is the name of your `Layer2` or `Layer3` configuration file.
+
+. Verify that your request is successful by running the following command:
++
+[source,terminal]
+----
+$ oc get clusteruserdefinednetwork <cudn_name> -o yaml
+----
++
+Where `<cudn_name>` is the name of your cluster-wide user-defined network.
++
+.Example output
+[source,yaml]
+----
+apiVersion: k8s.ovn.org/v1
+kind: ClusterUserDefinedNetwork
+metadata:
+  creationTimestamp: "2024-12-05T15:53:00Z"
+  finalizers:
+  - k8s.ovn.org/user-defined-network-protection
+  generation: 1
+  name: my-cudn
+  resourceVersion: "47985"
+  uid: 16ee0fcf-74d1-4826-a6b7-25c737c1a634
+spec:
+  namespaceSelector:
+    matchExpressions:
+    - key: custom.network.selector
+      operator: In
+      values:
+      - example-namespace-1
+      - example-namespace-2
+      - example-namespace-3
+  network:
+    layer3:
+      role: Primary
+      subnets:
+      - cidr: 10.100.0.0/16
+    topology: Layer3
+status:
+  conditions:
+  - lastTransitionTime: "2024-11-19T16:46:34Z"
+    message: 'NetworkAttachmentDefinition has been created in following namespaces:
+      [example-namespace-1, example-namespace-2, example-namespace-3]'
+    reason: NetworkAttachmentDefinitionReady
+    status: "True"
+    type: NetworkCreated
+----
diff --git a/modules/nw-udn-additional-config-details.adoc b/modules/nw-udn-additional-config-details.adoc
index 7b8926ee2f18..5dff9c355b3a 100644
--- a/modules/nw-udn-additional-config-details.adoc
+++ b/modules/nw-udn-additional-config-details.adoc
@@ -1,6 +1,6 @@
 //module included in the following assembly:
 //
-// *networking/multiple_networks/understanding-user-defined-networks.adoc
+// * networking/multiple_networks/primary_networks/about-user-defined-networks.adoc

 :_mod-docs-content-type: REFERENCE
 [id="nw-udn-additional-config-details_{context}"]
diff --git a/modules/nw-udn-best-practices.adoc b/modules/nw-udn-best-practices.adoc
index 024f9e7890aa..eebd3ef61987 100644
--- a/modules/nw-udn-best-practices.adoc
+++ b/modules/nw-udn-best-practices.adoc
@@ -1,6 +1,6 @@
 //module included in the following assembly:
 //
-// *networkking/multiple_networks/understanding-user-defined-networks.adoc
+// * networking/multiple_networks/primary_networks/about-user-defined-networks.adoc

 :_mod-docs-content-type: CONCEPT
 [id="considerations-for-udn_{context}"]
diff --git a/modules/nw-udn-cr.adoc
b/modules/nw-udn-cr.adoc index d11d7366a8b8..37b85d11ac67 100644 --- a/modules/nw-udn-cr.adoc +++ b/modules/nw-udn-cr.adoc @@ -1,6 +1,6 @@ //module included in the following assembly: // -// *networking/multiple_networks/understanding-user-defined-networks.adoc +// * networking/multiple_networks/primary_networks/about-user-defined-networks.adoc :_mod-docs-content-type: PROCEDURE [id="nw-udn-cr_{context}"] @@ -11,7 +11,7 @@ The following procedure creates a user-defined network that is namespace scoped. //We won't have these pieces till GA in 4.18. //[NOTE] //==== -//If any cluster default networked pods exist before the user-defined network is created, any further pods created in this namespace will return an error message: `What_is_this`. +//If any cluster default networked pods exist before the user-defined network is created, any further pods created in this namespace will return an error message: `What_is_this`? //==== .Procedure @@ -96,10 +96,10 @@ spec: + [source,terminal] ---- -$ oc apply -f +$ oc apply -f .yaml ---- + -Where `` is the name of your `Layer2` or `Layer3` configuration file. +Where `.yaml` is the name of your `Layer2` or `Layer3` configuration file. . Verify that your request is successful by running the following command: + diff --git a/modules/nw-udn-examples.adoc b/modules/nw-udn-examples.adoc index 3c6bebff33d2..5e9409ed69de 100644 --- a/modules/nw-udn-examples.adoc +++ b/modules/nw-udn-examples.adoc @@ -1,6 +1,6 @@ //module included in the following assembly: // -// *networking/multiple_networks/understanding-user-defined-networks.adoc +// * networking/multiple_networks/primary_networks/about-user-defined-networks.adoc :_mod-docs-content-type: REFERENCE [id="nw-udn-examples_{context}"] diff --git a/modules/nw-udn-limitations.adoc b/modules/nw-udn-limitations.adoc index ee8ff2fcc9be..17d4408f0786 100644 --- a/modules/nw-udn-limitations.adoc +++ b/modules/nw-udn-limitations.adoc @@ -1,6 +1,6 @@ //module included in the following assembly: // -// *networkking/multiple_networks/understanding-user-defined-networks.adoc +// * networking/multiple_networks/primary_networks/about-user-defined-networks.adoc :_mod-docs-content-type: CONCEPT [id="limitations-for-udn_{context}"] diff --git a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc index bcad31b0230e..7f3ca646bff4 100644 --- a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc +++ b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc @@ -43,6 +43,19 @@ include::modules/nw-udn-benefits.adoc[leveloffset=+1] //Limitations that users should consider for UDN. include::modules/nw-udn-limitations.adoc[leveloffset=+1] +//Best practices for using CUDN. +include::modules/nw-cudn-best-practices.adoc[leveloffset=+1] + +//How to implement the CUDN API on a cluster. +include::modules/nw-cudn-cr.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources +* xref:../../../networking/multiple_networks/secondary_networks/creating-secondary-nwt-ovnk.adoc#configuring-pods-static-ip_configuring-additional-network-ovnk[Configuring pods with a static IP address] + +//CUDN status conditions +include::modules/cudn-status-conditions.adoc[leveloffset=+2] + //Best practices for using UDN. 
include::modules/nw-udn-best-practices.adoc[leveloffset=+1] From e139fffc2da1e1ee7f35540b73f624dbcc778585 Mon Sep 17 00:00:00 2001 From: Steven Smith Date: Thu, 13 Feb 2025 12:33:26 -0500 Subject: [PATCH 234/669] Adds annotation for opening default network ports on udn pods --- .../opening-default-network-ports-udn.adoc | 31 +++++++++++++++++++ .../about-user-defined-networks.adoc | 2 ++ 2 files changed, 33 insertions(+) create mode 100644 modules/opening-default-network-ports-udn.adoc diff --git a/modules/opening-default-network-ports-udn.adoc b/modules/opening-default-network-ports-udn.adoc new file mode 100644 index 000000000000..7dd93e783fc2 --- /dev/null +++ b/modules/opening-default-network-ports-udn.adoc @@ -0,0 +1,31 @@ +//module included in the following assembly: +// +// * networking/multiple_networks/primary_networks/about-user-defined-networks.adoc + +:_mod-docs-content-type: REFERENCE +[id="opening-default-network-ports-udn_{context}"] += Opening default network ports on user-defined network pods + +By default, pods on a user-defined network are isolated from the default network. This means that default network pods, such as those running monitoring services (Prometheus or Alertmanager) or the {product-title} image registry, cannot initiate connections to UDN pods. + +To allow default network pods to connect to a user-defined network pod, you can use the `k8s.ovn.org/open-default-ports` annotation. This annotation opens specific ports on the user-defined network pod for access from the default network. + +The following pod specification allows incoming TCP connections on port `80` and UDP traffic on port `53` from the default network: +[source,yaml] +---- +apiVersion: v1 +kind: Pod +metadata: + annotations: + k8s.ovn.org/open-default-ports: | + - protocol: tcp + port: 80 + - protocol: udp + port: 53 +# ... +---- + +[NOTE] +==== +Open ports are accessible on the pod's default network IP, not its UDN network IP. 
+==== \ No newline at end of file diff --git a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc index 7f3ca646bff4..3e09f6ad7ed3 100644 --- a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc +++ b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc @@ -65,6 +65,8 @@ include::modules/nw-udn-cr.adoc[leveloffset=+1] //Explanation of optional config details include::modules/nw-udn-additional-config-details.adoc[leveloffset=+1] +include::modules/opening-default-network-ports-udn.adoc[leveloffset=+1] + //Support matrix for UDN //include::modules From 7d9d39097dcd9985cda0fbee7b60dec329975174 Mon Sep 17 00:00:00 2001 From: srir Date: Wed, 12 Feb 2025 14:45:39 +0530 Subject: [PATCH 235/669] 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 --- ...ages-imagestream-specify-architecture.adoc | 8 +- modules/migrating-from-x86-to-arm-cp.adoc | 113 ++++++++++++++++++ .../update-image-stream-to-multi-arch.adoc | 11 ++ .../migrating-to-multi-payload.adoc | 14 ++- 4 files changed, 140 insertions(+), 6 deletions(-) create mode 100644 modules/migrating-from-x86-to-arm-cp.adoc create mode 100644 snippets/update-image-stream-to-multi-arch.adoc diff --git a/modules/images-imagestream-specify-architecture.adoc b/modules/images-imagestream-specify-architecture.adoc index bc8c9bd029e3..d7c6d9c9cfe4 100644 --- a/modules/images-imagestream-specify-architecture.adoc +++ b/modules/images-imagestream-specify-architecture.adoc @@ -18,8 +18,6 @@ $ oc import-image --from=// * Run the following command to update your image stream from single-architecture to multi-architecture: + -[source,terminal] ----- -$ oc import-image --from=// \ ---import-mode='PreserveOriginal' ----- \ No newline at end of file +-- +include::snippets/update-image-stream-to-multi-arch.adoc[] +-- \ No newline at end of file diff --git a/modules/migrating-from-x86-to-arm-cp.adoc b/modules/migrating-from-x86-to-arm-cp.adoc new file mode 100644 index 000000000000..53edc992a438 --- /dev/null +++ b/modules/migrating-from-x86-to-arm-cp.adoc @@ -0,0 +1,113 @@ +// Module included in the following assemblies: +// +// * updating/updating_a_cluster/migrating-to-multi-payload.adoc + +:_mod-docs-content-type: PROCEDURE +[id="migrating-from-x86-to-arm64-cp_{context}"] += Migrating the x86 control plane to arm64 architecture on {aws-full} + +You can migrate the control plane in your cluster from `x86` to `arm64` architecture on {aws-first}. + +.Prerequisites + +* You have installed the {oc-first}. +* You logged in to `oc` as a user with `cluster-admin` privileges. + +.Procedure + +. 
Check the architecture of the control plane nodes by running the following command:
+
[source,terminal]
----
$ oc get nodes -o wide
----
+
.Example output
[source,terminal]
----
NAME                     STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                         KERNEL-VERSION                 CONTAINER-RUNTIME
worker-001.example.com   Ready    worker                 100d   v1.30.7   10.x.x.x      <none>        Red Hat Enterprise Linux CoreOS 4xx.xx.xxxxx-0   5.x.x-xxx.x.x.el9_xx.x86_64    cri-o://1.30.x
worker-002.example.com   Ready    worker                 98d    v1.30.7   10.x.x.x      <none>        Red Hat Enterprise Linux CoreOS 4xx.xx.xxxxx-0   5.x.x-xxx.x.x.el9_xx.x86_64    cri-o://1.30.x
worker-003.example.com   Ready    worker                 98d    v1.30.7   10.x.x.x      <none>        Red Hat Enterprise Linux CoreOS 4xx.xx.xxxxx-0   5.x.x-xxx.x.x.el9_xx.x86_64    cri-o://1.30.x
master-001.example.com   Ready    control-plane,master   120d   v1.30.7   10.x.x.x      <none>        Red Hat Enterprise Linux CoreOS 4xx.xx.xxxxx-0   5.x.x-xxx.x.x.el9_xx.x86_64    cri-o://1.30.x
master-002.example.com   Ready    control-plane,master   120d   v1.30.7   10.x.x.x      <none>        Red Hat Enterprise Linux CoreOS 4xx.xx.xxxxx-0   5.x.x-xxx.x.x.el9_xx.x86_64    cri-o://1.30.x
master-003.example.com   Ready    control-plane,master   120d   v1.30.7   10.x.x.x      <none>        Red Hat Enterprise Linux CoreOS 4xx.xx.xxxxx-0   5.x.x-xxx.x.x.el9_xx.x86_64    cri-o://1.30.x
----
+
The `KERNEL-VERSION` field in the output indicates the architecture of the nodes.

. Check that your cluster uses the multi payload by running the following command:
+
[source,terminal]
----
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
----
+
If you see the following output, the cluster is multi-architecture compatible.
+
[source,terminal]
----
{
 "release.openshift.io/architecture": "multi",
 "url": "https://access.redhat.com/errata/<errata_version>"
}
----
+
If the cluster is not using the multi payload, migrate the cluster to a multi-architecture cluster. For more information, see "Migrating to a cluster with multi-architecture compute machines".

. Update your image stream from single-architecture to multi-architecture by running the following command:
+
--
include::snippets/update-image-stream-to-multi-arch.adoc[]
--

. Get the `arm64` compatible Amazon Machine Image (AMI) for configuring the control plane machine set by running the following command:
+
[source,terminal]
----
$ oc get configmap/coreos-bootimages -n openshift-machine-config-operator -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.images.aws.regions."<aws_region>".image' <1>
----
<1> Replace `<aws_region>` with the {aws-short} region where the current cluster is installed. You can get the {aws-short} region for the installed cluster by running the following command:
+
[source,terminal]
----
$ oc get infrastructure cluster -o jsonpath='{.status.platformStatus.aws.region}'
----
+
.Example output
[source,terminal]
----
ami-xxxxxxx
----

. Update the control plane machine set to support the `arm64` architecture by running the following command:
+
[source,terminal]
----
$ oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api
----
+
Update the `instanceType` field to a type that supports the `arm64` architecture, and set the `ami.id` field to an AMI that is compatible with the `arm64` architecture. For information about supported instance types, see "Tested instance types for {aws-short} on 64-bit ARM infrastructures".
+
For more information about configuring the control plane machine set for {aws-short}, see "Control plane configuration options for {aws-full}".
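+
The following snippet is a minimal, non-authoritative sketch of the `providerSpec` fields that typically change during this migration. The instance type and AMI ID shown are hypothetical placeholders; substitute a 64-bit ARM instance type and the AMI that you retrieved in the previous step:
+
[source,yaml]
----
providerSpec:
  value:
    ami:
      id: ami-0abcdef1234567890 # hypothetical arm64 AMI from the previous step
    instanceType: m6g.xlarge # hypothetical 64-bit ARM (Graviton) instance type
----
+
With the default `RollingUpdate` strategy, the Control Plane Machine Set Operator replaces the control plane machines one at a time after you save the change, so expect a rolling update of the control plane.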
.Verification

* Verify that the control plane nodes are now running on the `arm64` architecture:
+
[source,terminal]
----
$ oc get nodes -o wide
----
+
.Example output
[source,terminal]
----
NAME                     STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                         KERNEL-VERSION                 CONTAINER-RUNTIME
worker-001.example.com   Ready    worker                 100d   v1.30.7   10.x.x.x      <none>        Red Hat Enterprise Linux CoreOS 4xx.xx.xxxxx-0   5.x.x-xxx.x.x.el9_xx.x86_64    cri-o://1.30.x
worker-002.example.com   Ready    worker                 98d    v1.30.7   10.x.x.x      <none>        Red Hat Enterprise Linux CoreOS 4xx.xx.xxxxx-0   5.x.x-xxx.x.x.el9_xx.x86_64    cri-o://1.30.x
worker-003.example.com   Ready    worker                 98d    v1.30.7   10.x.x.x      <none>        Red Hat Enterprise Linux CoreOS 4xx.xx.xxxxx-0   5.x.x-xxx.x.x.el9_xx.x86_64    cri-o://1.30.x
master-001.example.com   Ready    control-plane,master   120d   v1.30.7   10.x.x.x      <none>        Red Hat Enterprise Linux CoreOS 4xx.xx.xxxxx-0   5.x.x-xxx.x.x.el9_xx.aarch64   cri-o://1.30.x
master-002.example.com   Ready    control-plane,master   120d   v1.30.7   10.x.x.x      <none>        Red Hat Enterprise Linux CoreOS 4xx.xx.xxxxx-0   5.x.x-xxx.x.x.el9_xx.aarch64   cri-o://1.30.x
master-003.example.com   Ready    control-plane,master   120d   v1.30.7   10.x.x.x      <none>        Red Hat Enterprise Linux CoreOS 4xx.xx.xxxxx-0   5.x.x-xxx.x.x.el9_xx.aarch64   cri-o://1.30.x
----
diff --git a/snippets/update-image-stream-to-multi-arch.adoc b/snippets/update-image-stream-to-multi-arch.adoc
new file mode 100644
index 000000000000..9ecd5b82dc5c
--- /dev/null
+++ b/snippets/update-image-stream-to-multi-arch.adoc
@@ -0,0 +1,11 @@
+// Snippet included in the following assemblies:
+//
+// * updating/updating_a_cluster/migrating-from-x86-to-arm64-cp.adoc
+// * openshift_images/images-imagestream-specify-architecture.adoc
+
+:_mod-docs-content-type: SNIPPET
+[source,terminal]
+----
+$ oc import-image <image_stream_tag> --from=<registry>/<project_name>/<image_name> \
+--import-mode='PreserveOriginal'
+----
diff --git a/updating/updating_a_cluster/migrating-to-multi-payload.adoc b/updating/updating_a_cluster/migrating-to-multi-payload.adoc
index 45a99473feba..e4095132e734 100644
--- a/updating/updating_a_cluster/migrating-to-multi-payload.adoc
+++ b/updating/updating_a_cluster/migrating-to-multi-payload.adoc
@@ -36,4 +36,16 @@ include::modules/migrating-to-multi-arch-cli.adoc[leveloffset=+1]
 * xref:../../updating/understanding_updates/understanding-update-channels-release.adoc#understanding-update-channels-releases[Understanding update channels and releases]
 * xref:../../installing/overview/installing-preparing.adoc#installing-preparing-selecting-cluster-type[Selecting a cluster installation type]
 * xref:../../machine_management/deploying-machine-health-checks.adoc#machine-health-checks-about_deploying-machine-health-checks[About machine health checks]
-* xref:../../updating/updating_a_cluster/updating-cluster-cli.adoc#update-upgrading-oc-adm-upgrade-status_updating-cluster-cli[Gathering cluster update status using oc adm upgrade status (Technology Preview)]
\ No newline at end of file
+* xref:../../updating/updating_a_cluster/updating-cluster-cli.adoc#update-upgrading-oc-adm-upgrade-status_updating-cluster-cli[Gathering cluster update status using oc adm upgrade status (Technology Preview)]
+
+// Migrating the x86 control plane to the arm64 architecture on AWS
+include::modules/migrating-from-x86-to-arm-cp.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../../machine_management/control_plane_machine_management/cpmso_provider_configurations/cpmso-config-options-aws.adoc#cpmso-config-options-aws[Control plane configuration options for {aws-full}]
+
+*
xref:../../installing/installing_aws/upi/upi-aws-installation-reqs.adoc#installation-aws-arm-tested-machine-types_upi-aws-installation-reqs[Tested instance types for AWS on 64-bit ARM infrastructures]
+
+* xref:../../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-arch-cli_updating-clusters-overview[Migrating to a cluster with multi-architecture compute machines using the CLI]

From 3977988699d7d144041ef058a2f5c2b6ebe60e8c Mon Sep 17 00:00:00 2001
From: Steven Smith
Date: Fri, 14 Feb 2025 11:27:08 -0500
Subject: [PATCH 236/669] Minor follow up to OCPBUGS 43762

---
 modules/nw-ovn-k-day-2-masq-subnet.adoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/nw-ovn-k-day-2-masq-subnet.adoc b/modules/nw-ovn-k-day-2-masq-subnet.adoc
index 7d34ce18bd0c..8f6770f7123e 100644
--- a/modules/nw-ovn-k-day-2-masq-subnet.adoc
+++ b/modules/nw-ovn-k-day-2-masq-subnet.adoc
@@ -26,7 +26,7 @@ $ oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"def
 +
 where:

-`ipv4_masquerade_subnet`:: Specifies an IP address to be used as the IPv4 masquerade subnet. This range cannot overlap with any other subnets used by {product-title} or on the host itself. The default value for IPv4 is `169.254.169.0/29`.
+`ipv4_masquerade_subnet`:: Specifies an IP address to be used as the IPv4 masquerade subnet. This range cannot overlap with any other subnets used by {product-title} or on the host itself. In versions of {product-title} earlier than 4.17, the default value for IPv4 was `169.254.169.0/29`, and clusters that were upgraded to version 4.17 maintain this value. For new clusters starting from version 4.17, the default value is `169.254.0.0/17`.

 `ipv6_masquerade_subnet`:: Specifies an IP address to be used as the IPv6 masquerade subnet. This range cannot overlap with any other subnets used by {product-title} or on the host itself. The default value for IPv6 is `fd69::/125`.

@@ -39,4 +39,4 @@ $ oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"def
 +
 where:

-`ipv4_masquerade_subnet`:: Specifies an IP address to be used as the IPv4 masquerade subnet. This range cannot overlap with any other subnets used by {product-title} or on the host itself. The default value for IPv4 is `169.254.169.0/29`.
+`ipv4_masquerade_subnet`:: Specifies an IP address to be used as the IPv4 masquerade subnet. This range cannot overlap with any other subnets used by {product-title} or on the host itself. In versions of {product-title} earlier than 4.17, the default value for IPv4 was `169.254.169.0/29`, and clusters that were upgraded to version 4.17 maintain this value. For new clusters starting from version 4.17, the default value is `169.254.0.0/17`.
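+
The following is a sketch only, not a required step: after patching, the equivalent configuration on the `Network` operator object looks approximately like this, assuming the post-4.17 IPv4 default and the IPv6 default shown above:
+
[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      gatewayConfig:
        ipv4:
          internalMasqueradeSubnet: 169.254.0.0/17 # must not overlap other cluster or host subnets
        ipv6:
          internalMasqueradeSubnet: fd69::/125
----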
From 1bfbf28c2c0d61f7a997d9f5d1cec58611961737 Mon Sep 17 00:00:00 2001 From: Steven Smith Date: Mon, 3 Feb 2025 13:59:15 -0500 Subject: [PATCH 237/669] Updates node firewall operator docs --- .../networking_operators/ingress-node-firewall-operator.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/networking/networking_operators/ingress-node-firewall-operator.adoc b/networking/networking_operators/ingress-node-firewall-operator.adoc index 06727c73a96b..2b6608aa413c 100644 --- a/networking/networking_operators/ingress-node-firewall-operator.adoc +++ b/networking/networking_operators/ingress-node-firewall-operator.adoc @@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[] toc::[] -The Ingress Node Firewall Operator allows administrators to manage firewall configurations at the node level. +The Ingress Node Firewall Operator provides a stateless, eBPF-based firewall for managing node-level ingress traffic in {product-title}. include::modules/nw-infw-operator-cr.adoc[leveloffset=+1] From 71361b9e4366d4e1ddc2791e10dfdd837aa3e637 Mon Sep 17 00:00:00 2001 From: sbeskin Date: Mon, 17 Feb 2025 17:07:46 +0200 Subject: [PATCH 238/669] CNV-47330 --- modules/virt-expanding-vm-disk-pvc.adoc | 31 +++++++++++++++++++++++-- 1 file changed, 29 insertions(+), 2 deletions(-) diff --git a/modules/virt-expanding-vm-disk-pvc.adoc b/modules/virt-expanding-vm-disk-pvc.adoc index 0f918d052bda..58f364cebf3d 100644 --- a/modules/virt-expanding-vm-disk-pvc.adoc +++ b/modules/virt-expanding-vm-disk-pvc.adoc @@ -4,11 +4,38 @@ :_mod-docs-content-type: PROCEDURE [id="virt-expanding-vm-disk-pvc_{context}"] -= Expanding a VM disk PVC += Increasing a VM disk size by expanding the PVC of the disk -You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. +You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. To specify the increased PVC volume, you can use the web console with the VM running. Alternatively, you can edit the PVC manifest in the CLI. +[NOTE] +==== If the PVC uses the file system volume mode, the disk image file expands to the available size while reserving some space for file system overhead. +==== + +[id="virt-expanding-vm-disk-pvc-web-console_{context}"] +== Expanding a VM disk PVC in the web console + +You can increase the size of a VM disk PVC in the web console without leaving the *VirtualMachines* page and with the VM running. + +.Procedure + +. In the *Administrator* or *Virtualization* perspective, open the *VirtualMachines* page. +. Select the running VM to open its *Details* page. +. Select the *Configuration* tab and click *Storage*. +. Click the options menu {kebab} next to the disk you want to expand. Select the *Edit* option. ++ +The *Edit disk* dialog opens. +. In the *PersistentVolumeClaim size* field, enter the desired size. +. Click *Save*. + +[NOTE] +==== +You can enter any value greater than the current one. However, if the new value exceeds the available size, an error is displayed. 
+==== + +[id="virt-expanding-vm-disk-pvc-editing-manifest_{context}"] +== Expanding a VM disk PVC by editing its manifest .Procedure From d32efdc6d654771c227b260507202b36a7babf4f Mon Sep 17 00:00:00 2001 From: Max Bridges Date: Mon, 28 Oct 2024 12:09:38 -0400 Subject: [PATCH 239/669] Add 'configDrive: true' to SR-IOV ShiftStack CR YAMLs Address OCPBUGS-43891 --- modules/machineset-yaml-osp-sr-iov-port-security.adoc | 4 +++- modules/machineset-yaml-osp-sr-iov.adoc | 2 ++ 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/modules/machineset-yaml-osp-sr-iov-port-security.adoc b/modules/machineset-yaml-osp-sr-iov-port-security.adoc index 149244158a4d..a094f24c90e6 100644 --- a/modules/machineset-yaml-osp-sr-iov-port-security.adoc +++ b/modules/machineset-yaml-osp-sr-iov-port-security.adoc @@ -82,10 +82,12 @@ spec: trunk: false userDataSecret: name: worker-user-data + configDrive: true <4> ---- <1> Specify allowed address pairs for the API and ingress ports. <2> Specify the machines network and subnet. <3> Specify the compute machines security group. +<4> The value of the `configDrive` parameter must be `true`. [NOTE] ==== @@ -94,4 +96,4 @@ Trunking is enabled for ports that are created by entries in the networks and su You can enable trunking for each port. Optionally, you can add tags to ports as part of their `tags` lists. -==== \ No newline at end of file +==== diff --git a/modules/machineset-yaml-osp-sr-iov.adoc b/modules/machineset-yaml-osp-sr-iov.adoc index 6715fa59f6c6..9dad5442e740 100644 --- a/modules/machineset-yaml-osp-sr-iov.adoc +++ b/modules/machineset-yaml-osp-sr-iov.adoc @@ -91,6 +91,7 @@ spec: userDataSecret: name: -user-data availabilityZone: + configDrive: true <5> ---- <1> Enter a network UUID for each port. <2> Enter a subnet UUID for each port. @@ -98,6 +99,7 @@ spec: <4> The value of the `portSecurity` parameter must be `false` for each port. + You cannot set security groups and allowed address pairs for ports when port security is disabled. Setting security groups on the instance applies the groups to all ports that are attached to it. +<5> The value of the `configDrive` parameter must be `true`. [IMPORTANT] ==== From 2f0726d35fbb9b187ab76b2dcb5ab9d404244b6f Mon Sep 17 00:00:00 2001 From: Shafer Slockett Date: Thu, 13 Feb 2025 14:05:18 -0500 Subject: [PATCH 240/669] OCPBUGS-45750 Fix typo in procedure substep 9.d.5. OCPBUGS-45750 Fix typo in procedure substep 9.d.5. --- modules/nodes-cma-autoscaling-custom-install.adoc | 2 +- modules/sd-nodes-cma-autoscaling-custom-install.adoc | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/nodes-cma-autoscaling-custom-install.adoc b/modules/nodes-cma-autoscaling-custom-install.adoc index 9fbe69221df8..d8d0c9e020e8 100644 --- a/modules/nodes-cma-autoscaling-custom-install.adoc +++ b/modules/nodes-cma-autoscaling-custom-install.adoc @@ -126,7 +126,7 @@ spec: <2> Specifies the level of verbosity for the Custom Metrics Autoscaler Operator log messages. The allowed values are `debug`, `info`, `error`. The default is `info`. <3> Specifies the logging format for the Custom Metrics Autoscaler Operator log messages. The allowed values are `console` or `json`. The default is `console`. <4> Optional: Specifies one or more config maps with CA certificates, which the Custom Metrics Autoscaler Operator can use to connect securely to TLS-enabled metrics sources. -<5> Specifies the logging level for the Custom Metrics Autoscaler Metrics Server. The allowed values are `0` for `info` and `4` or `debug`. 
The default is `0`. +<5> Specifies the logging level for the Custom Metrics Autoscaler Metrics Server. The allowed values are `0` for `info` and `4` for `debug`. The default is `0`. <6> Activates audit logging for the Custom Metrics Autoscaler Operator and specifies the audit policy to use, as described in the "Configuring audit logging" section. .. Click *Create* to create the KEDA controller. diff --git a/modules/sd-nodes-cma-autoscaling-custom-install.adoc b/modules/sd-nodes-cma-autoscaling-custom-install.adoc index 6dca94939639..21ffb13799c1 100644 --- a/modules/sd-nodes-cma-autoscaling-custom-install.adoc +++ b/modules/sd-nodes-cma-autoscaling-custom-install.adoc @@ -140,7 +140,7 @@ spec: <2> Specifies the level of verbosity for the Custom Metrics Autoscaler Operator log messages. The allowed values are `debug`, `info`, `error`. The default is `info`. <3> Specifies the logging format for the Custom Metrics Autoscaler Operator log messages. The allowed values are `console` or `json`. The default is `console`. <4> Optional: Specifies one or more config maps with CA certificates, which the Custom Metrics Autoscaler Operator can use to connect securely to TLS-enabled metrics sources. -<5> Specifies the logging level for the Custom Metrics Autoscaler Metrics Server. The allowed values are `0` for `info` and `4` or `debug`. The default is `0`. +<5> Specifies the logging level for the Custom Metrics Autoscaler Metrics Server. The allowed values are `0` for `info` and `4` for `debug`. The default is `0`. <6> Activates audit logging for the Custom Metrics Autoscaler Operator and specifies the audit policy to use, as described in the "Configuring audit logging" section. .. Click *Create* to create the KEDA controller. From af8b34cf53b7a97f85f8698665233d1878fe7873 Mon Sep 17 00:00:00 2001 From: Shubha Narayanan Date: Fri, 3 Jan 2025 13:24:24 +0530 Subject: [PATCH 241/669] Multiple NICs configuration --- ...er-provisioned-network-customizations.adoc | 8 ++ .../installation-vsphere-multiple-nics.adoc | 80 +++++++++++++++++++ modules/nw-network-config.adoc | 1 + 3 files changed, 89 insertions(+) create mode 100644 modules/installation-vsphere-multiple-nics.adoc diff --git a/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc b/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc index 63e66f645308..e403cc378a8b 100644 --- a/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc +++ b/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc @@ -58,6 +58,14 @@ include::modules/ipi-install-modifying-install-config-for-dual-stack-network.ado include::modules/configuring-vsphere-regions-zones.adoc[leveloffset=+2] +// Specifying multiple NICS +include::modules/installation-vsphere-multiple-nics.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../installing/installing_vsphere/installation-config-parameters-vsphere.adoc#installation-configuration-parameters-network_installation-config-parameters-vsphere[Network configuration parameters] + // Network configuration phases include::modules/nw-network-config.adoc[leveloffset=+1] diff --git a/modules/installation-vsphere-multiple-nics.adoc b/modules/installation-vsphere-multiple-nics.adoc new file mode 100644 index 000000000000..a81afbc718c0 --- /dev/null +++ b/modules/installation-vsphere-multiple-nics.adoc @@ -0,0 +1,80 @@ 
+// Module included in the following assemblies:
+//
+// * installing/installing-vsphere-installer-provisioned-network-customizations.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="installation-vsphere-multiple-nics_{context}"]
+= Configuring multiple NICs
+
+For scenarios that require multiple network interface controllers (NICs), you can configure multiple network adapters per node.
+
+:FeatureName: Configuring multiple NICs
+include::snippets/technology-preview.adoc[]
+
+.Procedure
+
+. Specify the network adapter names in the `networks` section of `platform.vsphere.failureDomains[*].topology` as shown in the following `install-config.yaml` file:
++
+[source,yaml]
+----
+platform:
+  vsphere:
+    vcenters:
+    ...
+    failureDomains:
+    - name: <failure_domain_name>
+      region: <region_tag>
+      zone: <zone_tag>
+      server: <fully_qualified_domain_name>
+      topology:
+        datacenter: <data_center>
+        computeCluster: "/<data_center>/host/<cluster>"
+        networks: # <1>
+        - <vm_network_1>
+        - <vm_network_2>
+        - ...
+        - <vm_network_10>
+----
+<1> Specifies the list of network adapters. You can specify up to 10 network adapters.
+
+. Specify at least one of the following configurations in the `install-config.yaml` file:
+
+** `networking.machineNetwork`
++
+.Example configuration
++
+[source,yaml]
+----
+networking:
+  ...
+  machineNetwork:
+  - cidr: 10.0.0.0/16
+  ...
+----
++
+[NOTE]
+====
+The `networking.machineNetwork.cidr` field must correspond to an address on the first adapter defined in `topology.networks`.
+====
+
+** Add a `nodeNetworking` object to the `install-config.yaml` file and specify internal and external network subnet CIDR implementations for the object.
++
+.Example configuration
++
+[source,yaml]
+----
+platform:
+  vsphere:
+    nodeNetworking:
+      external:
+        networkSubnetCidr:
+        - <external_network_cidr_1>
+        - <external_network_cidr_2>
+      internal:
+        networkSubnetCidr:
+        - <internal_network_cidr_1>
+        - <internal_network_cidr_2>
+    failureDomains:
+    - name: <failure_domain_name>
+      region: <region_tag>
+----
\ No newline at end of file
diff --git a/modules/nw-network-config.adoc b/modules/nw-network-config.adoc
index e50b1cef9c8d..b639dd365af1 100644
--- a/modules/nw-network-config.adoc
+++ b/modules/nw-network-config.adoc
@@ -24,6 +24,7 @@ Phase 1:: You can customize the following network-related fields in the `install
 * `networking.clusterNetwork`
 * `networking.serviceNetwork`
 * `networking.machineNetwork`
+* `nodeNetworking`
 +
 For more information, see "Installation configuration parameters".
 +

From 878b6ff84d9c107b7c7c564c6fb53047d08879e9 Mon Sep 17 00:00:00 2001
From: Ronan Hennessy
Date: Fri, 14 Feb 2025 13:43:50 +0000
Subject: [PATCH 242/669] OCPBUGS-46469: Adding IBGU enhancement

---
 modules/ztp-image-based-upgrade-procedure-steps.adoc | 6 ++++++
 snippets/ibu-ImageBasedGroupUpgrade.adoc             | 6 ++++++
 2 files changed, 12 insertions(+)

diff --git a/modules/ztp-image-based-upgrade-procedure-steps.adoc b/modules/ztp-image-based-upgrade-procedure-steps.adoc
index a5919d01116f..a58e928c4bd5 100644
--- a/modules/ztp-image-based-upgrade-procedure-steps.adoc
+++ b/modules/ztp-image-based-upgrade-procedure-steps.adoc
@@ -57,6 +57,12 @@ spec:
 ----
 <1> Clusters to upgrade.
 <2> Target platform version, the seed image to be used, and the secret required to access the image.
++
+[NOTE]
+====
+If you add the seed image pull secret in the hub cluster, in the same namespace as the `ImageBasedGroupUpgrade` resource, the secret is added to the manifest list for the `Prep` stage. The secret is recreated in each spoke cluster in the `openshift-lifecycle-agent` namespace.
+====
++
 <3> Optional: Applies additional manifests, which are not in the seed image, to the target cluster.
<4> List of `ConfigMap` resources that contain the {oadp-short} `Backup` and `Restore` CRs.
<5> Upgrade plan details.
diff --git a/snippets/ibu-ImageBasedGroupUpgrade.adoc b/snippets/ibu-ImageBasedGroupUpgrade.adoc
index faddde60ff2a..7ff07b47dab1 100644
--- a/snippets/ibu-ImageBasedGroupUpgrade.adoc
+++ b/snippets/ibu-ImageBasedGroupUpgrade.adoc
@@ -34,6 +34,12 @@ spec:
 ----
 <1> Clusters to upgrade.
 <2> Target platform version, the seed image to be used, and the secret required to access the image.
++
+[NOTE]
+====
+If you add the seed image pull secret to the hub cluster in the same namespace as the `ImageBasedGroupUpgrade` resource, the secret is added to the manifest list for the `Prep` stage. The secret is recreated in each spoke cluster in the `openshift-lifecycle-agent` namespace.
+====
++
 <3> Optional: Applies additional manifests, which are not in the seed image, to the target cluster. Also applies `ConfigMap` objects for custom catalog sources.
 <4> `ConfigMap` resources that contain the OADP `Backup` and `Restore` CRs.
 <5> Upgrade plan details.

From c1b2306545c004fecdd83e67856160a46429bd49 Mon Sep 17 00:00:00 2001
From: danielclowers
Date: Wed, 12 Feb 2025 11:14:18 -0500
Subject: [PATCH 243/669] CNV#55937 4.18: WSFC validation is not working with multipath LUNs

---
 modules/virt-configuring-disk-sharing-lun.adoc | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/modules/virt-configuring-disk-sharing-lun.adoc b/modules/virt-configuring-disk-sharing-lun.adoc
index d317b8bc3347..cff84b27766b 100644
--- a/modules/virt-configuring-disk-sharing-lun.adoc
+++ b/modules/virt-configuring-disk-sharing-lun.adoc
@@ -19,6 +19,11 @@ You can set an error policy for each LUN disk. The error policy controls how the
 
 For a LUN disk with a SCSI connection and a persistent reservation, as required for Windows Failover Clustering for shared volumes, you set the error policy to `report`.
 
+[IMPORTANT]
+====
+{VirtProductName} does not currently support SCSI-3 Persistent Reservations (SCSI-3 PR) over multipath storage. As a workaround, disable multipath or ensure that the Windows Server Failover Clustering (WSFC) shared disk is set up from a single device and is not part of multipath.
+====
+
 .Prerequisites
 
 * You must have cluster administrator privileges to configure the feature gate option.
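For reference, the error policy and reservation settings that the module above describes correspond to fields on the VM disk specification. The following YAML is a minimal illustrative sketch, assuming the KubeVirt `v1` API; the VM, disk, and claim names are placeholders.

[source,yaml]
----
# Illustrative sketch: a shared LUN disk with a SCSI persistent
# reservation and the report error policy described above.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: wsfc-node-vm # placeholder name
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: shared-lun # placeholder name
            errorPolicy: report # report errors to the guest, as required for WSFC
            lun:
              bus: scsi
              reservation: true # request a SCSI persistent reservation
      volumes:
      - name: shared-lun
        persistentVolumeClaim:
          claimName: shared-lun-pvc # placeholder claim; back it with a single, non-multipath device
----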
From 04bc57b684e61c7d2b165a5177fb81e2f6811040 Mon Sep 17 00:00:00 2001
From: Sebastian Kopacz
Date: Thu, 6 Feb 2025 11:03:17 -0500
Subject: [PATCH 244/669] OSDOCS-12787: iSCSI booting for Agent

---
 _topic_maps/_topic_map.yml                    |  2 +
 .../installing-using-iscsi.adoc               | 47 ++++++++++++++++++
 ...installing-with-agent-based-installer.adoc |  2 +-
 modules/installing-ocp-agent-download.adoc    |  1 +
 modules/installing-ocp-agent-inputs.adoc      | 34 ++++++++++---
 modules/installing-ocp-agent-iscsi-files.adoc | 49 +++++++++++++++++++
 ...stalling-ocp-agent-iscsi-requirements.adoc | 16 ++++++
 7 files changed, 142 insertions(+), 9 deletions(-)
 create mode 100644 installing/installing_with_agent_based_installer/installing-using-iscsi.adoc
 create mode 100644 modules/installing-ocp-agent-iscsi-files.adoc
 create mode 100644 modules/installing-ocp-agent-iscsi-requirements.adoc

diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml
index 6cc2909ead3a..62d0cf7798c9 100644
--- a/_topic_maps/_topic_map.yml
+++ b/_topic_maps/_topic_map.yml
@@ -415,6 +415,8 @@ Topics:
     File: installing-with-agent-based-installer
   - Name: Preparing PXE assets for OCP
     File: prepare-pxe-assets-agent
+  - Name: Preparing installation assets for iSCSI booting
+    File: installing-using-iscsi
   - Name: Preparing an Agent-based installed cluster for the multicluster engine for Kubernetes
     File: preparing-an-agent-based-installed-cluster-for-mce
   - Name: Installation configuration parameters for the Agent-based Installer
diff --git a/installing/installing_with_agent_based_installer/installing-using-iscsi.adoc b/installing/installing_with_agent_based_installer/installing-using-iscsi.adoc
new file mode 100644
index 000000000000..988a90daa62a
--- /dev/null
+++ b/installing/installing_with_agent_based_installer/installing-using-iscsi.adoc
@@ -0,0 +1,47 @@
+:_mod-docs-content-type: ASSEMBLY
+[id="installing-using-iscsi"]
+= Preparing installation assets for iSCSI booting
+include::_attributes/common-attributes.adoc[]
+:context: installing-using-iscsi
+
+toc::[]
+
+You can boot an {product-title} cluster through Internet Small Computer System Interface (iSCSI) by using an ISO image generated by the Agent-based Installer.
+The following procedures describe how to prepare the necessary installation resources to boot from an iSCSI target.
+
+The assets you create in these procedures deploy a single-node {product-title} installation.
+You can use these procedures as a basis and modify configurations according to your requirements.
+
+// Requirements for iSCSI booting
+include::modules/installing-ocp-agent-iscsi-requirements.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+* xref:../../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#agent-install-networking-DHCP_preparing-to-install-with-agent-based-installer[DHCP]
+* xref:../../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#root-device-hints_preparing-to-install-with-agent-based-installer[About root device hints]
+
+[id="prerequisites_{context}"]
+== Prerequisites
+
+* You reviewed details about the xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes.
+* You read the documentation on xref:../../installing/overview/installing-preparing.adoc#installing-preparing[selecting a cluster installation method and preparing it for users].
+* If you use a firewall or proxy, you xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configured it to allow the sites] that your cluster requires access to. + +// Downloading the Agent-based Installer +include::modules/installing-ocp-agent-download.adoc[leveloffset=+1] + +// Creating the preferred configuration inputs +include::modules/installing-ocp-agent-inputs.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources +* xref:../../installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc#modifying-install-config-for-dual-stack-network_ipi-install-installation-workflow[Deploying with dual-stack networking] +* xref:../../installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc#configuring-the-install-config-file_ipi-install-installation-workflow[Configuring the install-config yaml file] +* xref:../../installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.adoc#installation-three-node-cluster_installing-restricted-networks-bare-metal[Configuring a three-node cluster] +* xref:../../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#root-device-hints_preparing-to-install-with-agent-based-installer[About root device hints] +* link:https://nmstate.io/examples.html[NMState state examples] (NMState documentation) +* xref:../../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#installing-ocp-agent-opt-manifests_installing-with-agent-based-installer[Optional: Creating additional manifest files] +* xref:../../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#agent-install-verifying-architectures_installing-with-agent-based-installer[Verifying the supported architecture for an Agent-based installation] + +// Creating the installation files +include::modules/installing-ocp-agent-iscsi-files.adoc[leveloffset=+1] \ No newline at end of file diff --git a/installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc b/installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc index a85abd00e6c6..664d15b90b6a 100644 --- a/installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc +++ b/installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc @@ -83,7 +83,7 @@ include::modules/installing-ocp-agent-verify.adoc[leveloffset=+2] .Additional resources * See xref:../../installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc#modifying-install-config-for-dual-stack-network_ipi-install-installation-workflow[Deploying with dual-stack networking]. * See xref:../../installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc#configuring-the-install-config-file_ipi-install-installation-workflow[Configuring the install-config yaml file]. -* See xref:../../installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.adoc#installation-three-node-cluster_installing-restricted-networks-bare-metal[Configuring a three-node cluster] to deploy three-node clusters in bare metal environments. +* See xref:../../installing/installing_bare_metal/upi/installing-restricted-networks-bare-metal.adoc#installation-three-node-cluster_installing-restricted-networks-bare-metal[Configuring a three-node cluster] to deploy three-node clusters in bare-metal environments. 
* See xref:../../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#root-device-hints_preparing-to-install-with-agent-based-installer[About root device hints].
* See link:https://nmstate.io/examples.html[NMState state examples].

diff --git a/modules/installing-ocp-agent-download.adoc b/modules/installing-ocp-agent-download.adoc
index 7fe462b0a270..57bd6ff1ac24 100644
--- a/modules/installing-ocp-agent-download.adoc
+++ b/modules/installing-ocp-agent-download.adoc
@@ -2,6 +2,7 @@
 //
 // * installing/installing-with-agent-based-installer/installing-with-agent-based-installer.adoc
 // * installing/installing_with_agent_based_installer/prepare-pxe-infra-agent.adoc
+// * installing/installing_with_agent_based_installer/installing-using-iscsi.adoc
 
 :_mod-docs-content-type: PROCEDURE
 [id="installing-ocp-agent-retrieve_{context}"]
diff --git a/modules/installing-ocp-agent-inputs.adoc b/modules/installing-ocp-agent-inputs.adoc
index dfd498acc9cf..08b8616e9a60 100644
--- a/modules/installing-ocp-agent-inputs.adoc
+++ b/modules/installing-ocp-agent-inputs.adoc
@@ -2,25 +2,40 @@
 //
 // * installing/installing-with-agent-based-installer/installing-with-agent-based-installer.adoc
 // *installing/installing_with_agent_based_installer/prepare-pxe-infra-agent.adoc
+// * installing/installing_with_agent_based_installer/installing-using-iscsi.adoc
 
 ifeval::["{context}" == "prepare-pxe-assets-agent"]
 :pxe-boot:
 endif::[]
 
+ifeval::["{context}" == "installing-using-iscsi"]
+:iscsi-boot:
+endif::[]
+
 :_mod-docs-content-type: PROCEDURE
 [id="installing-ocp-agent-inputs_{context}"]
 = Creating the preferred configuration inputs
 
 ifndef::pxe-boot[]
 Use this procedure to create the preferred configuration inputs used to create the agent image.
+
+[NOTE]
+====
+Configuring the `install-config.yaml` and `agent-config.yaml` files is the preferred method for using the Agent-based Installer. Using {ztp} manifests is optional.
+====
 endif::pxe-boot[]
 
 ifdef::pxe-boot[]
 Use this procedure to create the preferred configuration inputs used to create the PXE files.
+
+[NOTE]
+====
+Configuring the `install-config.yaml` and `agent-config.yaml` files is the preferred method for using the Agent-based Installer. Using {ztp} manifests is optional.
+====
 endif::pxe-boot[]
 
 .Procedure
 
-. Install `nmstate` dependency by running the following command:
+. Install the `nmstate` dependency by running the following command:
 +
 [source,terminal]
 ----
@@ -36,12 +51,6 @@ $ sudo dnf install /usr/bin/nmstatectl -y
 $ mkdir ~/<directory_name>
 ----
-+
-[NOTE]
-====
-This is the preferred method for the Agent-based installation. Using {ztp} manifests is optional.
-====
-
 . Create the `install-config.yaml` file by running the following command:
 +
 --
@@ -86,7 +95,7 @@ If you are using the release image with the `multi` payload, you can install the
 +
 [NOTE]
 ====
-For bare metal platforms, host settings made in the platform section of the `install-config.yaml` file are used by default, unless they are overridden by configurations made in the `agent-config.yaml` file.
+For bare-metal platforms, host settings made in the `platform` section of the `install-config.yaml` file are used by default, unless they are overridden by configurations made in the `agent-config.yaml` file.
 ====
 <5> Specify your pull secret.
 <6> Specify your SSH public key.
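For orientation, a minimal `install-config.yaml` of the kind this step produces might look like the following sketch for a single-node cluster; the domain, cluster name, CIDRs, and credential values are all placeholders.

[source,yaml]
----
# Illustrative sketch only; replace every value with your own.
apiVersion: v1
baseDomain: example.com
metadata:
  name: sno-cluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.111.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 1
platform:
  none: {}
pullSecret: '<pull_secret>'
sshKey: '<ssh_public_key>'
----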
@@ -173,6 +182,7 @@ hosts: // <2>
       next-hop-address: 192.168.111.2
       next-hop-interface: eno1
       table-id: 254
+ifdef::iscsi-boot[minimalISO: true <6>]
 EOF
 ----
 +
@@ -182,6 +192,10 @@ You must provide the rendezvous IP address when you do not specify at least one
 <3> Optional: Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods.
 <4> Enables provisioning of the {op-system-first} image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value.
 <5> Optional: Configures the network interface of a host in NMState format.
+ifdef::iscsi-boot[]
+<6> Generates an ISO image without the rootfs image file, and instead provides details about where to pull the rootfs file from.
+You must set this parameter to `true` to enable iSCSI booting.
+endif::iscsi-boot[]
 
 ifdef::pxe-boot[]
@@ -203,3 +217,7 @@ endif::pxe-boot[]
 ifeval::["{context}" == "prepare-pxe-assets-agent"]
 :!pxe-boot:
 endif::[]
+
+ifeval::["{context}" == "installing-using-iscsi"]
+:!iscsi-boot:
+endif::[]
\ No newline at end of file
diff --git a/modules/installing-ocp-agent-iscsi-files.adoc b/modules/installing-ocp-agent-iscsi-files.adoc
new file mode 100644
index 000000000000..4ab5801e640c
--- /dev/null
+++ b/modules/installing-ocp-agent-iscsi-files.adoc
@@ -0,0 +1,49 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_with_agent_based_installer/installing-using-iscsi.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="installing-ocp-agent-iscsi-files_{context}"]
+= Creating the installation files
+
+Use the following procedure to generate the ISO image and create an iPXE script to upload to your iSCSI target.
+
+.Procedure
+
+. Create the agent image by running the following command:
++
+[source,terminal]
+----
+$ openshift-install --dir <install_directory> agent create image
+----
+
+. Create an iPXE script by running the following command:
++
+[source,terminal]
+----
+$ cat << EOF > agent.ipxe
+!ipxe
+set initiator-iqn <initiator_iqn>:\${hostname}
+sanboot --keep iscsi:<target_ip_address>.1::::<target_iqn>:\${hostname}
+EOF
+----
++
+--
+where:
+
+<initiator_iqn>:: Specifies the iSCSI initiator name on the host that is booting the ISO.
+This name can also be used by the iSCSI target.
+<target_ip_address>:: Specifies the IP address of the iSCSI target.
+<target_iqn>:: Specifies the iSCSI target name.
+This name can be the same as the initiator name.
+--
++
+.Example command
+[source,terminal]
+----
+$ cat << EOF > agent.ipxe
+!ipxe
+set initiator-iqn iqn.2023-01.com.example:\${hostname}
+sanboot --keep iscsi:192.168.45.1::::iqn.2023-01.com.example:\${hostname}
+EOF
+----
\ No newline at end of file
diff --git a/modules/installing-ocp-agent-iscsi-requirements.adoc b/modules/installing-ocp-agent-iscsi-requirements.adoc
new file mode 100644
index 000000000000..d32d9d0fb333
--- /dev/null
+++ b/modules/installing-ocp-agent-iscsi-requirements.adoc
@@ -0,0 +1,16 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_with_agent_based_installer/installing-using-iscsi.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="iscsi-boot-requirements_{context}"]
+= Requirements for iSCSI booting
+
+The following configurations are necessary to enable iSCSI booting when using the Agent-based Installer:
+
+* Dynamic Host Configuration Protocol (DHCP) must be configured.
+Static networking is not supported.
+* You must create an additional network for iSCSI that is separate from the machine network of the cluster. +The machine network is rebooted during cluster installation and cannot be used for the iSCSI session. +* If the host on which you are booting the agent ISO image also has an installed disk, it might be necessary to specify the iSCSI disk name in the `rootDeviceHints` parameter to ensure that it is chosen as the boot disk for the final {op-system-first} image. +You can also use a diskless environment for iSCSI booting, in which case you do not need to set the `rootDeviceHints` parameter. \ No newline at end of file From 8b7edbd304f2e459f3a80311f633faf5000e31e4 Mon Sep 17 00:00:00 2001 From: Alex Dellapenta Date: Tue, 7 Jan 2025 13:41:50 -0700 Subject: [PATCH 245/669] Add disconnected support for OLM v1 --- _topic_maps/_topic_map.yml | 2 ++ .../about-installing-oc-mirror-v2.adoc | 13 +++++++++--- .../installing-mirroring-disconnected.adoc | 5 +++++ .../catalogs/disconnected-catalogs.adoc | 16 +++++++++++++++ ...etworks-nutanix-installer-provisioned.adoc | 6 ++++++ modules/oc-mirror-IDMS-ITMS-about.adoc | 15 +++++++++----- modules/oc-mirror-disk-to-mirror.adoc | 2 +- modules/oc-mirror-dry-run.adoc | 2 +- ...-mirror-imageset-config-parameters-v2.adoc | 2 +- modules/oc-mirror-mirror-to-disk.adoc | 2 +- modules/oc-mirror-mirror-to-mirror.adoc | 2 +- modules/oc-mirror-oci-format.adoc | 2 +- ...-mirror-updating-cluster-manifests-v2.adoc | 7 +++++++ .../oc-mirror-updating-cluster-manifests.adoc | 14 +++++++++---- ...updating-restricted-cluster-manifests.adoc | 13 ++++++++++-- modules/olmv1-about-disconnected.adoc | 20 +++++++++++++++++++ modules/olmv1-adding-a-catalog.adoc | 6 +++--- 17 files changed, 106 insertions(+), 23 deletions(-) create mode 100644 extensions/catalogs/disconnected-catalogs.adoc create mode 100644 modules/olmv1-about-disconnected.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 62d0cf7798c9..fb950d1f5e64 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2095,6 +2095,8 @@ Topics: File: managing-catalogs - Name: Creating catalogs File: creating-catalogs + - Name: Disconnected environment support in OLM v1 + File: disconnected-catalogs - Name: Cluster extensions Dir: ce Topics: diff --git a/disconnected/mirroring/about-installing-oc-mirror-v2.adoc b/disconnected/mirroring/about-installing-oc-mirror-v2.adoc index b2635378502a..8424039eae0e 100644 --- a/disconnected/mirroring/about-installing-oc-mirror-v2.adoc +++ b/disconnected/mirroring/about-installing-oc-mirror-v2.adoc @@ -8,7 +8,7 @@ toc::[] You can run your cluster in a disconnected environment if you install the cluster from a mirrored set of {product-title} container images in a private registry. This registry must be running whenever your cluster is running. -Just as you can use the `oc-mirror` OpenShift CLI (`oc`) plugin, you can also use oc-mirror plugin v2 to mirror images to a mirror registry in your fully or partially disconnected environments. To download the required images from the official Red{nbsp}Hat registries, you must run oc-mirror plugin v2 from a system with internet connectivity. +Just as you can use the oc-mirror OpenShift CLI (`oc`) plugin, you can also use oc-mirror plugin v2 to mirror images to a mirror registry in your fully or partially disconnected environments. To download the required images from the official Red{nbsp}Hat registries, you must run oc-mirror plugin v2 from a system with internet connectivity. 
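As an illustration of the workflow this paragraph describes, a fully disconnected oc-mirror plugin v2 run is typically split into a mirror-to-disk step on the connected system and a disk-to-mirror step in the disconnected environment. The configuration file name, archive path, and registry host in the following sketch are placeholders.

[source,terminal]
----
# Mirror to disk on a system with internet connectivity (placeholder paths):
$ oc mirror -c imageset-config.yaml file:///home/user/mirror --v2

# After transferring the archive, push from disk to the mirror registry:
$ oc mirror -c imageset-config.yaml --from file:///home/user/mirror \
  docker://mirror.registry.example.com:8443 --v2
----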
:FeatureName: oc-mirror plugin v2
include::snippets/technology-preview.adoc[]
@@ -63,7 +63,7 @@ include::modules/oc-mirror-workflows-fully-disconnected-v2.adoc[leveloffset=+2]
 include::modules/oc-mirror-mirror-to-disk-v2.adoc[leveloffset=+2]
 include::modules/oc-mirror-disk-to-mirror-v2.adoc[leveloffset=+2]
 
-// About custom resources generated by v2
+// About custom resources generated by oc-mirror plugin v2
 include::modules/oc-mirror-IDMS-ITMS-about.adoc[leveloffset=+1]
 
 [role="_additional-resources"]
@@ -73,10 +73,17 @@ include::modules/oc-mirror-IDMS-ITMS-about.adoc[leveloffset=+1]
 
 * xref:../../rest_api/config_apis/imagetagmirrorset-config-openshift-io-v1.adoc#imagetagmirrorset-config-openshift-io-v1[ImageTagMirrorSet]
 
+* xref:../../extensions/catalogs/managing-catalogs.adoc#olmv1-about-catalogs_managing-catalogs[About catalogs in {olmv1}]
+
 // Configuring your cluster to use the resources generated by oc-mirror
 include::modules/oc-mirror-updating-cluster-manifests-v2.adoc[leveloffset=+2]
 
-Once your cluster is configured to use the resources generated by oc-mirror plugin v2, see xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#next-steps_about-installing-oc-mirror-v2[Next Steps] for information about tasks that you can perform using your mirrored images.
+After your cluster is configured to use the resources generated by oc-mirror plugin v2, see xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#next-steps_about-installing-oc-mirror-v2[Next Steps] for information about tasks that you can perform using your mirrored images.
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../../extensions/catalogs/disconnected-catalogs.adoc#disconnected-catalogs[Disconnected environment support in {olmv1}]
 
 //Delete Feature
 // workflows of delete feature
diff --git a/disconnected/mirroring/installing-mirroring-disconnected.adoc b/disconnected/mirroring/installing-mirroring-disconnected.adoc
index ec2d25c043fe..e3ea34398c1d 100644
--- a/disconnected/mirroring/installing-mirroring-disconnected.adoc
+++ b/disconnected/mirroring/installing-mirroring-disconnected.adoc
@@ -96,6 +96,11 @@ include::modules/oc-mirror-disk-to-mirror.adoc[leveloffset=+3]
 // Configuring your cluster to use the resources generated by oc-mirror
 include::modules/oc-mirror-updating-cluster-manifests.adoc[leveloffset=+1]
 
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../../extensions/catalogs/managing-catalogs.adoc#olmv1-adding-a-catalog-to-a-cluster_managing-catalogs[Adding a catalog to a cluster] in "Extensions"
+
 // About updating your mirror registry content
 include::modules/oc-mirror-updating-registry-about.adoc[leveloffset=+1]
 
diff --git a/extensions/catalogs/disconnected-catalogs.adoc b/extensions/catalogs/disconnected-catalogs.adoc
new file mode 100644
index 000000000000..d1ae48d488fb
--- /dev/null
+++ b/extensions/catalogs/disconnected-catalogs.adoc
@@ -0,0 +1,16 @@
+:_mod-docs-content-type: ASSEMBLY
+[id="disconnected-catalogs"]
+= Disconnected environment support in {olmv1}
+include::_attributes/common-attributes.adoc[]
+:context: disconnected-catalogs
+
+toc::[]
+
+To support cluster administrators who prioritize high security by running their clusters in internet-disconnected environments, especially for mission-critical production workloads, {olmv1-first} includes cluster extension lifecycle management functionality that works within these disconnected environments, starting in {product-title} 4.18.
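To make this concrete: after mirroring, {olmv1} consumes a mirrored catalog through a `ClusterCatalog` resource such as the following hypothetical minimal manifest; the catalog name and mirror registry path are placeholders.

[source,yaml]
----
# Hypothetical ClusterCatalog that points at a mirrored catalog image.
apiVersion: olm.operatorframework.io/v1
kind: ClusterCatalog
metadata:
  name: my-mirrored-catalog # placeholder name
spec:
  source:
    type: Image
    image:
      ref: mirror.registry.example.com:8443/redhat/redhat-operator-index:v4.18 # placeholder mirror path
----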
+include::modules/olmv1-about-disconnected.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+.Additional resources
+* xref:../../disconnected/mirroring/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin v1]
+* xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#about-installing-oc-mirror-v2[Mirroring images for a disconnected installation using the oc-mirror plugin v2]
\ No newline at end of file
diff --git a/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.adoc b/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.adoc
index dd15f5052484..f191ef388a3f 100644
--- a/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.adoc
+++ b/installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.adoc
@@ -60,6 +60,12 @@ Complete the following steps to complete the configuration of your cluster.
 include::modules/olm-restricted-networks-configuring-operatorhub.adoc[leveloffset=+2]
 
 include::modules/oc-mirror-updating-restricted-cluster-manifests.adoc[leveloffset=+2]
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../../extensions/catalogs/managing-catalogs.adoc#olmv1-adding-a-catalog-to-a-cluster_managing-catalogs[Adding a catalog to a cluster] in "Extensions"
+
 include::modules/registry-configuring-storage-nutanix.adoc[leveloffset=+2]
 
 include::modules/cluster-telemetry.adoc[leveloffset=+1]
diff --git a/modules/oc-mirror-IDMS-ITMS-about.adoc b/modules/oc-mirror-IDMS-ITMS-about.adoc
index d9ce4deec091..291051a5ea25 100644
--- a/modules/oc-mirror-IDMS-ITMS-about.adoc
+++ b/modules/oc-mirror-IDMS-ITMS-about.adoc
@@ -4,13 +4,18 @@
 
 :_mod-docs-content-type: CONCEPT
 [id="oc-mirror-custom-resources-v2_{context}"]
-= About custom resources generated by v2
+= About custom resources generated by oc-mirror plugin v2
 
 // Should sentence below say "to which a digest or tag refers"?
-With oc-mirror plugin v2, `ImageDigestMirrorSet` (IDMS) resources are generated by default if at least one image of the image set is mirrored by digest.
-`ImageTagMirrorSet` (ITMS) resources are generated if at least one image from the image set is mirrored by tag.
+The oc-mirror plugin v2 automatically generates the following custom resources:
 
-Operator Lifecycle Manager (OLM) uses the `CatalogSource` resource to retrieve information about the available Operators in the mirror registry.
+`ImageDigestMirrorSet` (IDMS):: Handles registry mirror rules when using image digest pull specifications. Generated if at least one image of the image set is mirrored by digest.
 
-The OpenShift Update Service uses the `UpdateService` resource to provide update graph data to the disconnected environment.
\ No newline at end of file
+`ImageTagMirrorSet` (ITMS):: Handles registry mirror rules when using image tag pull specifications. Generated if at least one image from the image set is mirrored by tag.
+
+`CatalogSource`:: Retrieves information about the available Operators in the mirror registry. Used by {olmv0-first}.
+
+`ClusterCatalog`:: Retrieves information about the available cluster extensions (which include Operators) in the mirror registry. Used by {olmv1}.
+
+`UpdateService`:: Provides update graph data to the disconnected environment. Used by the OpenShift Update Service.
\ No newline at end of file diff --git a/modules/oc-mirror-disk-to-mirror.adoc b/modules/oc-mirror-disk-to-mirror.adoc index 7164bf5da7d5..526ec87fcd0d 100644 --- a/modules/oc-mirror-disk-to-mirror.adoc +++ b/modules/oc-mirror-disk-to-mirror.adoc @@ -12,7 +12,7 @@ You can use the oc-mirror plugin to mirror the contents of a generated image set .Prerequisites * You have installed the OpenShift CLI (`oc`) in the disconnected environment. -* You have installed the `oc-mirror` CLI plugin in the disconnected environment. +* You have installed the oc-mirror CLI plugin in the disconnected environment. * You have generated the image set file by using the `oc mirror` command. * You have transferred the image set file to the disconnected environment. // TODO: Confirm prereq about not needing a cluster, but need pull secret misc diff --git a/modules/oc-mirror-dry-run.adoc b/modules/oc-mirror-dry-run.adoc index aa2c7fa4d3e3..65b2d368741f 100644 --- a/modules/oc-mirror-dry-run.adoc +++ b/modules/oc-mirror-dry-run.adoc @@ -14,7 +14,7 @@ You can use oc-mirror to perform a dry run, without actually mirroring any image * You have access to the internet to obtain the necessary container images. * You have installed the OpenShift CLI (`oc`). -* You have installed the `oc-mirror` CLI plugin. +* You have installed the oc-mirror CLI plugin. * You have created the image set configuration file. .Procedure diff --git a/modules/oc-mirror-imageset-config-parameters-v2.adoc b/modules/oc-mirror-imageset-config-parameters-v2.adoc index 4e8b77340452..9b25de62ad75 100644 --- a/modules/oc-mirror-imageset-config-parameters-v2.adoc +++ b/modules/oc-mirror-imageset-config-parameters-v2.adoc @@ -78,7 +78,7 @@ Example: `registry.redhat.io/ubi8/ubi:latest` Example: `docker.io/library/alpine` |`mirror.helm` -|The helm configuration of the image set. The `oc-mirror` plugin does not support helm charts with manually modified `values.yaml` files. +|The helm configuration of the image set. The oc-mirror plugin does not support helm charts with manually modified `values.yaml` files. |Object |`mirror.helm.local` diff --git a/modules/oc-mirror-mirror-to-disk.adoc b/modules/oc-mirror-mirror-to-disk.adoc index 831b0937c974..762c2c354dd2 100644 --- a/modules/oc-mirror-mirror-to-disk.adoc +++ b/modules/oc-mirror-mirror-to-disk.adoc @@ -27,7 +27,7 @@ Do not delete or modify the metadata that is generated by the oc-mirror plugin. * You have access to the internet to obtain the necessary container images. * You have installed the OpenShift CLI (`oc`). -* You have installed the `oc-mirror` CLI plugin. +* You have installed the oc-mirror CLI plugin. * You have created the image set configuration file. // TODO: Don't need a running cluster, but need some pull secrets. Sync w/ team on this diff --git a/modules/oc-mirror-mirror-to-mirror.adoc b/modules/oc-mirror-mirror-to-mirror.adoc index 72e8683a37f5..0dd49bc8a967 100644 --- a/modules/oc-mirror-mirror-to-mirror.adoc +++ b/modules/oc-mirror-mirror-to-mirror.adoc @@ -21,7 +21,7 @@ Do not delete or modify the metadata that is generated by the oc-mirror plugin. * You have access to the internet to get the necessary container images. * You have installed the OpenShift CLI (`oc`). -* You have installed the `oc-mirror` CLI plugin. +* You have installed the oc-mirror CLI plugin. * You have created the image set configuration file. 
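The prerequisites above assume an existing image set configuration file. As a sketch, a minimal `ImageSetConfiguration` for the oc-mirror plugin v1 might look like the following; the storage registry, release channel, and Operator package shown are illustrative choices.

[source,yaml]
----
# Illustrative ImageSetConfiguration for the oc-mirror plugin v1.
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    imageURL: mirror.registry.example.com:8443/oc-mirror-metadata # placeholder
    skipTLS: false
mirror:
  platform:
    channels:
    - name: stable-4.18
      type: ocp
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18
    packages:
    - name: serverless-operator # example package
      channels:
      - name: stable
----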
.Procedure diff --git a/modules/oc-mirror-oci-format.adoc b/modules/oc-mirror-oci-format.adoc index a716103c9c28..b1fe0b208b34 100644 --- a/modules/oc-mirror-oci-format.adoc +++ b/modules/oc-mirror-oci-format.adoc @@ -29,7 +29,7 @@ If you used the Technology Preview OCI local catalogs feature for the oc-mirror * You have access to the internet to obtain the necessary container images. * You have installed the OpenShift CLI (`oc`). -* You have installed the `oc-mirror` CLI plugin. +* You have installed the oc-mirror CLI plugin. .Procedure diff --git a/modules/oc-mirror-updating-cluster-manifests-v2.adoc b/modules/oc-mirror-updating-cluster-manifests-v2.adoc index 2a15b181aa9a..8a7f4a685f48 100644 --- a/modules/oc-mirror-updating-cluster-manifests-v2.adoc +++ b/modules/oc-mirror-updating-cluster-manifests-v2.adoc @@ -51,3 +51,10 @@ $ oc get imagetagmirrorset ---- $ oc get catalogsource -n openshift-marketplace ---- + +. Verify that the `ClusterCatalog` resources are successfully installed by running the following command: ++ +[source,terminal] +---- +$ oc get clustercatalog +---- \ No newline at end of file diff --git a/modules/oc-mirror-updating-cluster-manifests.adoc b/modules/oc-mirror-updating-cluster-manifests.adoc index 56b673f2d8aa..e4ecbf72c9ce 100644 --- a/modules/oc-mirror-updating-cluster-manifests.adoc +++ b/modules/oc-mirror-updating-cluster-manifests.adoc @@ -1,7 +1,6 @@ // Module included in the following assemblies: // -// * installing/disconnected_install/installing-mirroring-disconnected.adoc -// * updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc +// * disconnected/mirroring/installing-mirroring-disconnected.adoc :_mod-docs-content-type: PROCEDURE [id="oc-mirror-updating-cluster-manifests_{context}"] @@ -9,7 +8,14 @@ After you have mirrored your image set to the mirror registry, you must apply the generated `ImageContentSourcePolicy`, `CatalogSource`, and release image signature resources into the cluster. -The `ImageContentSourcePolicy` resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The `CatalogSource` resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry. The release image signatures are used to verify the mirrored release images. +The `ImageContentSourcePolicy` resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The `CatalogSource` resource is used by {olmv0-first} to retrieve information about the available Operators in the mirror registry. The release image signatures are used to verify the mirrored release images. + +[NOTE] +==== +{olmv1} uses the `ClusterCatalog` resource to retrieve information about the available cluster extensions in the mirror registry. + +The oc-mirror plugin v1 does not generate `ClusterCatalog` resources automatically; you must manually create them. For more information on creating and applying `ClusterCatalog` resources, see "Adding a catalog to a cluster" in "Extensions". +==== .Prerequisites @@ -18,7 +24,7 @@ The `ImageContentSourcePolicy` resource associates the mirror registry with the .Procedure -. Log in to the OpenShift CLI as a user with the `cluster-admin` role. +. Log in to {oc-first} as a user with the `cluster-admin` role. . 
Apply the YAML files from the results directory to the cluster by running the following command: + diff --git a/modules/oc-mirror-updating-restricted-cluster-manifests.adoc b/modules/oc-mirror-updating-restricted-cluster-manifests.adoc index 434aa2d57a8b..12ab8e9d7054 100644 --- a/modules/oc-mirror-updating-restricted-cluster-manifests.adoc +++ b/modules/oc-mirror-updating-restricted-cluster-manifests.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * installing/installing-restricted-networks-nutanix-installer-provisioned.adoc +// * installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.adoc :_mod-docs-content-type: PROCEDURE [id="oc-mirror-updating-cluster-manifests_{context}"] @@ -9,7 +9,16 @@ Mirroring the {product-title} content using the oc-mirror OpenShift CLI (oc) plugin creates resources, which include `catalogSource-certified-operator-index.yaml` and `imageContentSourcePolicy.yaml`. * The `ImageContentSourcePolicy` resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. -* The `CatalogSource` resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry, which lets users discover and install Operators. +* The `CatalogSource` resource is used by {olmv0-first} to retrieve information about the available Operators in the mirror registry, which lets users discover and install Operators. ++ +[NOTE] +==== +{olmv1} uses the `ClusterCatalog` resource to retrieve information about the available cluster extensions in the mirror registry. + +The oc-mirror plugin v1 does not generate `ClusterCatalog` resources automatically; you must manually create them. The oc-mirror plugin v2 does, however, generate `ClusterCatalog` resources automatically. + +For more information on creating and applying `ClusterCatalog` resources, see "Adding a catalog to a cluster" in "Extensions". +==== After you install the cluster, you must install these resources into the cluster. diff --git a/modules/olmv1-about-disconnected.adoc b/modules/olmv1-about-disconnected.adoc new file mode 100644 index 000000000000..918c01db2ed6 --- /dev/null +++ b/modules/olmv1-about-disconnected.adoc @@ -0,0 +1,20 @@ +// Module included in the following assemblies: +// +// * extensions/catalogs/disconnected-catalogs.adoc + +:_mod-docs-content-type: CONCEPT + +[id="olmv1-about-disconnected_{context}"] += About disconnected support and the oc-mirror plugin in {olmv1} + +{olmv1-first} supports disconnected environments starting in {product-title} 4.18. After using the oc-mirror plugin for the {oc-first} to mirror the images required for your cluster to a mirror registry in your fully or partially disconnected environments, {olmv1} can function properly in these environments by utilizing either of the following sets of resources, depending on which oc-mirror plugin version you are using: + +* `ImageContentSourcePolicy` resources, which are automatically generated by oc-mirror plugin v1, and `ClusterCatalog` resources, which must be manually created after using oc-mirror plugin v1 +* `ImageDigestMirrorSet`, `ImageTagMirrorSet`, and `ClusterCatalog` resources, which are all automatically generated by oc-mirror plugin v2 + +[NOTE] +==== +Starting in {product-title} 4.18, oc-mirror plugin v2 is the recommended version for mirroring. 
+==== + +For more information and procedures, see the _Disconnected environments_ guide for the oc-mirror plugin version you plan to use. \ No newline at end of file diff --git a/modules/olmv1-adding-a-catalog.adoc b/modules/olmv1-adding-a-catalog.adoc index a0ff11f565b5..5cdbf7c09362 100644 --- a/modules/olmv1-adding-a-catalog.adoc +++ b/modules/olmv1-adding-a-catalog.adoc @@ -1,13 +1,13 @@ // Module included in the following assemblies: // -// * operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc +// * extensions/catalogs/managing-catalogs.adoc :_mod-docs-content-type: PROCEDURE [id="olmv1-adding-a-catalog-to-a-cluster_{context}"] = Adding a catalog to a cluster -To add a catalog to a cluster, create a catalog custom resource (CR) and apply it to the cluster. +To add a catalog to a cluster for {olmv1-first} usage, create a `ClusterCatalog` custom resource (CR) and apply it to the cluster. .Procedure @@ -113,4 +113,4 @@ Events: <1> Describes the status of the catalog. <2> Displays the reason the catalog is in the current state. <3> Displays the phase of the installation process. -<4> Displays the image reference of the catalog. +<4> Displays the image reference of the catalog. \ No newline at end of file From bbf6897acf9e4c7898472082649869012811065e Mon Sep 17 00:00:00 2001 From: Jason Boxman Date: Fri, 14 Feb 2025 15:44:46 -0500 Subject: [PATCH 246/669] Add OpenShift 4.18 rc9 APIs --- .../egressfirewall-k8s-ovn-org-v1.adoc | 35 +- rest_api/network_apis/network-apis-index.adoc | 6 +- ...mageregistry-operator-openshift-io-v1.adoc | 1281 +++++++++++++---- ...mageregistry-operator-openshift-io-v1.adoc | 660 +++++++-- ...sionmigrator-operator-openshift-io-v1.adoc | 46 +- .../operator_apis/operator-apis-index.adoc | 17 +- 6 files changed, 1611 insertions(+), 434 deletions(-) diff --git a/rest_api/network_apis/egressfirewall-k8s-ovn-org-v1.adoc b/rest_api/network_apis/egressfirewall-k8s-ovn-org-v1.adoc index 19e3b60b4eb6..951a9cd3e267 100644 --- a/rest_api/network_apis/egressfirewall-k8s-ovn-org-v1.adoc +++ b/rest_api/network_apis/egressfirewall-k8s-ovn-org-v1.adoc @@ -11,7 +11,11 @@ toc::[] Description:: + -- -EgressFirewall describes the current egress firewall for a Namespace. Traffic from a pod to an IP address outside the cluster will be checked against each EgressFirewallRule in the pod's namespace's EgressFirewall, in order. If no rule matches (or no EgressFirewall is present) then the traffic will be allowed by default. +EgressFirewall describes the current egress firewall for a Namespace. +Traffic from a pod to an IP address outside the cluster will be checked against +each EgressFirewallRule in the pod's namespace's EgressFirewall, in +order. If no rule matches (or no EgressFirewall is present) then the traffic +will be allowed by default. -- Type:: @@ -191,18 +195,23 @@ Type:: | `dnsName` | `string` -| dnsName is the domain name to allow/deny traffic to. If this is set, cidrSelector and nodeSelector must be unset. For a wildcard DNS name, the '*' will match only one label. Additionally, only a single '*' can be used at the beginning of the wildcard DNS name. For example, '*.example.com' will match 'sub1.example.com' but won't match 'sub2.sub1.example.com' +| dnsName is the domain name to allow/deny traffic to. If this is set, cidrSelector and nodeSelector must be unset. +For a wildcard DNS name, the '*' will match only one label. Additionally, only a single '*' can be +used at the beginning of the wildcard DNS name. 
For example, '*.example.com' will match 'sub1.example.com' +but won't match 'sub2.sub1.example.com'. | `nodeSelector` | `object` -| nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, cidrSelector and DNSName must be unset. +| nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, +cidrSelector and DNSName must be unset. |=== === .spec.egress[].to.nodeSelector Description:: + -- -nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, cidrSelector and DNSName must be unset. +nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, +cidrSelector and DNSName must be unset. -- Type:: @@ -221,11 +230,14 @@ Type:: | `matchExpressions[]` | `object` -| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. | `matchLabels` | `object (string)` -| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. +| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed. |=== === .spec.egress[].to.nodeSelector.matchExpressions @@ -245,7 +257,8 @@ Type:: Description:: + -- -A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. -- Type:: @@ -267,11 +280,15 @@ Required:: | `operator` | `string` -| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. +| operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. | `values` | `array (string)` -| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +| values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch. |=== === .status diff --git a/rest_api/network_apis/network-apis-index.adoc b/rest_api/network_apis/network-apis-index.adoc index 690ca6bf0b58..5561b64ab166 100644 --- a/rest_api/network_apis/network-apis-index.adoc +++ b/rest_api/network_apis/network-apis-index.adoc @@ -69,7 +69,11 @@ Type:: Description:: + -- -EgressFirewall describes the current egress firewall for a Namespace. Traffic from a pod to an IP address outside the cluster will be checked against each EgressFirewallRule in the pod's namespace's EgressFirewall, in order. If no rule matches (or no EgressFirewall is present) then the traffic will be allowed by default. +EgressFirewall describes the current egress firewall for a Namespace. 
+Traffic from a pod to an IP address outside the cluster will be checked against +each EgressFirewallRule in the pod's namespace's EgressFirewall, in +order. If no rule matches (or no EgressFirewall is present) then the traffic +will be allowed by default. -- Type:: diff --git a/rest_api/operator_apis/config-imageregistry-operator-openshift-io-v1.adoc b/rest_api/operator_apis/config-imageregistry-operator-openshift-io-v1.adoc index edb2cdcf1f51..dce17e0f99fb 100644 --- a/rest_api/operator_apis/config-imageregistry-operator-openshift-io-v1.adoc +++ b/rest_api/operator_apis/config-imageregistry-operator-openshift-io-v1.adoc @@ -11,8 +11,10 @@ toc::[] Description:: + -- -Config is the configuration object for a registry instance managed by the registry operator - Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). +Config is the configuration object for a registry instance managed by +the registry operator + +Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). -- Type:: @@ -75,11 +77,13 @@ Required:: | `defaultRoute` | `boolean` -| defaultRoute indicates whether an external facing route for the registry should be created using the default generated hostname. +| defaultRoute indicates whether an external facing route for the registry +should be created using the default generated hostname. | `disableRedirect` | `boolean` -| disableRedirect controls whether to route all data through the Registry, rather than redirecting to the backend. +| disableRedirect controls whether to route all data through the Registry, +rather than redirecting to the backend. | `httpSecret` | `string` @@ -87,8 +91,11 @@ Required:: | `logLevel` | `string` -| logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. - Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". +| logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a +simple way to manage coarse grained logging choices that operators have to interpret for their operands. + +Valid values are: "Normal", "Debug", "Trace", "TraceAll". +Defaults to "Normal". | `logging` | `integer` @@ -100,24 +107,31 @@ Required:: | `nodeSelector` | `object (string)` -| nodeSelector defines the node selection constraints for the registry pod. +| nodeSelector defines the node selection constraints for the registry +pod. | `observedConfig` | `` -| observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator +| observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because +it is an input to the level for the operator | `operatorLogLevel` | `string` -| operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. - Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". +| operatorLogLevel is an intent based logging for the operator itself. 
It does not give fine grained control, but it is a +simple way to manage coarse grained logging choices that operators have to interpret for themselves. + +Valid values are: "Normal", "Debug", "Trace", "TraceAll". +Defaults to "Normal". | `proxy` | `object` -| proxy defines the proxy to be used when calling master api, upstream registries, etc. +| proxy defines the proxy to be used when calling master api, upstream +registries, etc. | `readOnly` | `boolean` -| readOnly indicates whether the registry instance should reject attempts to push new images or delete existing ones. +| readOnly indicates whether the registry instance should reject attempts +to push new images or delete existing ones. | `replicas` | `integer` @@ -125,7 +139,8 @@ Required:: | `requests` | `object` -| requests controls how many parallel requests a given registry instance will handle before queuing additional requests. +| requests controls how many parallel requests a given registry instance +will handle before queuing additional requests. | `resources` | `object` @@ -133,19 +148,23 @@ Required:: | `rolloutStrategy` | `string` -| rolloutStrategy defines rollout strategy for the image registry deployment. +| rolloutStrategy defines rollout strategy for the image registry +deployment. | `routes` | `array` -| routes defines additional external facing routes which should be created for the registry. +| routes defines additional external facing routes which should be +created for the registry. | `routes[]` | `object` -| ImageRegistryConfigRoute holds information on external route access to image registry. +| ImageRegistryConfigRoute holds information on external route access to image +registry. | `storage` | `object` -| storage details for configuring registry storage, e.g. S3 bucket coordinates. +| storage details for configuring registry storage, e.g. S3 bucket +coordinates. | `tolerations` | `array` @@ -153,7 +172,8 @@ Required:: | `tolerations[]` | `object` -| The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator . +| The pod this Toleration is attached to tolerates any taint that matches +the triple using the matching operator . | `topologySpreadConstraints` | `array` @@ -165,7 +185,11 @@ Required:: | `unsupportedConfigOverrides` | `` -| unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. +| unsupportedConfigOverrides overrides the final configuration that was computed by the operator. +Red Hat does not support the use of this field. +Misuse of this field could lead to unexpected behavior or conflict with other configuration options. +Seek guidance from the Red Hat support before using this field. +Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. |=== === .spec.affinity @@ -217,22 +241,43 @@ Type:: | `preferredDuringSchedulingIgnoredDuringExecution` | `array` -| The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. +| The scheduler will prefer to schedule pods to nodes that satisfy +the affinity expressions specified by this field, but it may choose +a node that violates one or more of the expressions. The node that is +most preferred is the one with the greatest sum of weights, i.e. +for each node that meets all of the scheduling requirements (resource +request, requiredDuringScheduling affinity expressions, etc.), +compute a sum by iterating through the elements of this field and adding +"weight" to the sum if the node matches the corresponding matchExpressions; the +node(s) with the highest sum are the most preferred. | `preferredDuringSchedulingIgnoredDuringExecution[]` | `object` -| An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). +| An empty preferred scheduling term matches all objects with implicit weight 0 +(i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). | `requiredDuringSchedulingIgnoredDuringExecution` | `object` -| If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. +| If the affinity requirements specified by this field are not met at +scheduling time, the pod will not be scheduled onto the node. +If the affinity requirements specified by this field cease to be met +at some point during pod execution (e.g. due to an update), the system +may or may not try to eventually evict the pod from its node. |=== === .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description:: + -- -The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. +The scheduler will prefer to schedule pods to nodes that satisfy +the affinity expressions specified by this field, but it may choose +a node that violates one or more of the expressions. The node that is +most preferred is the one with the greatest sum of weights, i.e. +for each node that meets all of the scheduling requirements (resource +request, requiredDuringScheduling affinity expressions, etc.), +compute a sum by iterating through the elements of this field and adding +"weight" to the sum if the node matches the corresponding matchExpressions; the +node(s) with the highest sum are the most preferred. 
--

Type::
@@ -245,7 +290,8 @@ Type::

Description::
+
--
-An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).
+An empty preferred scheduling term matches all objects with implicit weight 0
+(i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).
--

Type::
@@ -293,7 +339,8 @@ Type::

| `matchExpressions[]`
| `object`
-| A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+| A node selector requirement is a selector that contains values, a key, and an operator
+that relates the key and values.

| `matchFields`
| `array`

| `matchFields[]`
| `object`
-| A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+| A node selector requirement is a selector that contains values, a key, and an operator
+that relates the key and values.
|===

=== .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions

Description::
+
--
@@ -321,7 +369,8 @@ Description::
+
--
-A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+A node selector requirement is a selector that contains values, a key, and an operator
+that relates the key and values.
--

Type::
@@ -343,11 +392,16 @@ Required::

| `operator`
| `string`
-| Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
+| Represents a key's relationship to a set of values.
+Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.

| `values`
| `array (string)`
-| An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
+| An array of string values. If the operator is In or NotIn,
+the values array must be non-empty. If the operator is Exists or DoesNotExist,
+the values array must be empty. If the operator is Gt or Lt, the values
+array must have a single element, which will be interpreted as an integer.
+This array is replaced during a strategic merge patch.
|===

=== .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution

Description::
+
--
-If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.
+If the affinity requirements specified by this field are not met at
+scheduling time, the pod will not be scheduled onto the node.
+If the affinity requirements specified by this field cease to be met
+at some point during pod execution (e.g. due to an update), the system
+may or may not try to eventually evict the pod from its node.
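+
+As a minimal sketch (the label keys are the usual well-known node labels; any requirement works the same way), a hard node requirement could look like:
+
+[source,yaml]
+----
+spec:
+  affinity:
+    nodeAffinity:
+      requiredDuringSchedulingIgnoredDuringExecution:
+        nodeSelectorTerms:
+        - matchExpressions:
+          - key: kubernetes.io/arch
+            operator: In
+            values:
+            - amd64
+          - key: kubernetes.io/os
+            operator: In
+            values:
+            - linux
+----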
--

Type::
@@ -421,7 +485,9 @@ Required::

| `nodeSelectorTerms[]`
| `object`
-| A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
+| A null or empty node selector term matches no objects. The requirements of
+them are ANDed.
+The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
|===

=== .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms

Description::
+
--
-A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
+A null or empty node selector term matches no objects. The requirements of
+them are ANDed.
+The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.
--

Type::
@@ -460,7 +528,8 @@ Type::

| `matchExpressions[]`
| `object`
-| A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+| A node selector requirement is a selector that contains values, a key, and an operator
+that relates the key and values.

| `matchFields`
| `array`

| `matchFields[]`
| `object`
-| A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+| A node selector requirement is a selector that contains values, a key, and an operator
+that relates the key and values.
|===

=== .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions

Description::
+
--
-A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+A node selector requirement is a selector that contains values, a key, and an operator
+that relates the key and values.
--

Type::
@@ -510,11 +581,16 @@ Required::

| `operator`
| `string`
-| Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
+| Represents a key's relationship to a set of values.
+Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.

| `values`
| `array (string)`
-| An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
+| An array of string values. If the operator is In or NotIn,
+the values array must be non-empty. If the operator is Exists or DoesNotExist,
+the values array must be empty. If the operator is Gt or Lt, the values
+array must have a single element, which will be interpreted as an integer.
+This array is replaced during a strategic merge patch.
|===

=== .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields

Description::
+
--
-A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+A node selector requirement is a selector that contains values, a key, and an operator
+that relates the key and values.
--

Type::
@@ -556,11 +633,16 @@ Required::

| `operator`
| `string`
-| Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
+| Represents a key's relationship to a set of values.
+Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.

| `values`
| `array (string)`
-| An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
+| An array of string values. If the operator is In or NotIn,
+the values array must be non-empty. If the operator is Exists or DoesNotExist,
+the values array must be empty. If the operator is Gt or Lt, the values
+array must have a single element, which will be interpreted as an integer.
+This array is replaced during a strategic merge patch.
|===
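+
+For illustration, a requirement using the Gt operator matches only nodes whose label value, read as an integer, is greater than 3 (the label key is hypothetical):
+
+[source,yaml]
+----
+matchExpressions:
+- key: example.com/gpu-generation
+  operator: Gt
+  values:
+  - "3"
+----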
=== .spec.affinity.podAffinity

Description::
+
--
@@ -582,7 +664,15 @@ Type::

| `preferredDuringSchedulingIgnoredDuringExecution`
| `array`
-| The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
+| The scheduler will prefer to schedule pods to nodes that satisfy
+the affinity expressions specified by this field, but it may choose
+a node that violates one or more of the expressions. The node that is
+most preferred is the one with the greatest sum of weights, i.e.
+for each node that meets all of the scheduling requirements (resource
+request, requiredDuringScheduling affinity expressions, etc.),
+compute a sum by iterating through the elements of this field and adding
+"weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the
+node(s) with the highest sum are the most preferred.

| `preferredDuringSchedulingIgnoredDuringExecution[]`
| `object`

| `requiredDuringSchedulingIgnoredDuringExecution`
| `array`
-| If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
+| If the affinity requirements specified by this field are not met at
+scheduling time, the pod will not be scheduled onto the node.
+If the affinity requirements specified by this field cease to be met
+at some point during pod execution (e.g. due to a pod label update), the
+system may or may not try to eventually evict the pod from its node.
+When there are multiple elements, the lists of nodes corresponding to each
+podAffinityTerm are intersected, i.e. all terms must be satisfied.

| `requiredDuringSchedulingIgnoredDuringExecution[]`
| `object`
-| Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running
+| Defines a set of pods (namely those matching the labelSelector
+relative to the given namespace(s)) that this pod should be
+co-located (affinity) or not co-located (anti-affinity) with,
+where co-located is defined as running on a node whose value of
+the label with key matches that of any node on which
+a pod of the set of pods is running
|===

=== .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution

Description::
+
--
-The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
+The scheduler will prefer to schedule pods to nodes that satisfy
+the affinity expressions specified by this field, but it may choose
+a node that violates one or more of the expressions. The node that is
+most preferred is the one with the greatest sum of weights, i.e.
+for each node that meets all of the scheduling requirements (resource
+request, requiredDuringScheduling affinity expressions, etc.),
+compute a sum by iterating through the elements of this field and adding
+"weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the
+node(s) with the highest sum are the most preferred.
--

Type::
@@ -636,7 +745,8 @@ Required::

| `weight`
| `integer`
-| weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
+| weight associated with matching the corresponding podAffinityTerm,
+in the range 1-100.
|===
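+
+A weighted pod affinity term could look like the following sketch (the app label and the zone topology key are illustrative):
+
+[source,yaml]
+----
+spec:
+  affinity:
+    podAffinity:
+      preferredDuringSchedulingIgnoredDuringExecution:
+      - weight: 100
+        podAffinityTerm:
+          topologyKey: topology.kubernetes.io/zone
+          labelSelector:
+            matchLabels:
+              app: image-registry
+----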
=== .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm

Description::
+
--
@@ -660,34 +770,63 @@ Required::

| `labelSelector`
| `object`
-| A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
+| A label query over a set of resources, in this case pods.
+If it's null, this PodAffinityTerm matches with no Pods.

| `matchLabelKeys`
| `array (string)`
-| MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
+| MatchLabelKeys is a set of pod label keys to select which pods will
+be taken into consideration. The keys are used to lookup values from the
+incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)`
+to select the group of existing pods which pods will be taken into consideration
+for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming
+pod labels will be ignored. The default value is empty.
+The same key is forbidden to exist in both matchLabelKeys and labelSelector.
+Also, matchLabelKeys cannot be set when labelSelector isn't set.
+This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).

| `mismatchLabelKeys`
| `array (string)`
-| MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
+| MismatchLabelKeys is a set of pod label keys to select which pods will
+be taken into consideration. The keys are used to lookup values from the
+incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)`
+to select the group of existing pods which pods will be taken into consideration
+for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming
+pod labels will be ignored. The default value is empty.
+The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
+Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
+This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).

| `namespaceSelector`
| `object`
-| A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
+| A label query over the set of namespaces that the term applies to.
+The term is applied to the union of the namespaces selected by this field
+and the ones listed in the namespaces field.
+null selector and null or empty namespaces list means "this pod's namespace".
+An empty selector ({}) matches all namespaces.

| `namespaces`
| `array (string)`
-| namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
+| namespaces specifies a static list of namespace names that the term applies to.
+The term is applied to the union of the namespaces listed in this field
+and the ones selected by namespaceSelector.
+null or empty namespaces list and null namespaceSelector means "this pod's namespace".

| `topologyKey`
| `string`
-| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
+| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching
+the labelSelector in the specified namespaces, where co-located is defined as running on a node
+whose value of the label with key topologyKey matches that of any node on which any of the
+selected pods is running.
+Empty topologyKey is not allowed.
|===

=== .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector

Description::
+
--
-A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
+A label query over a set of resources, in this case pods.
+If it's null, this PodAffinityTerm matches with no Pods.
--

Type::
@@ -706,11 +845,14 @@ Type::

| `matchExpressions[]`
| `object`
-| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+| A label selector requirement is a selector that contains values, a key, and an operator that
+relates the key and values.

| `matchLabels`
| `object (string)`
-| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
+| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
+map is equivalent to an element of matchExpressions, whose key field is "key", the
+operator is "In", and the values array contains only "value". The requirements are ANDed.
|===

=== .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions

Description::
+
--
-A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+A label selector requirement is a selector that contains values, a key, and an operator that
+relates the key and values.
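+
+For example, a label selector requirement matching pods whose tier label has either of two hypothetical values:
+
+[source,yaml]
+----
+labelSelector:
+  matchExpressions:
+  - key: tier
+    operator: In
+    values:
+    - frontend
+    - backend
+----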
--

Type::
@@ -752,18 +895,26 @@ Required::

| `operator`
| `string`
-| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
+| operator represents a key's relationship to a set of values.
+Valid operators are In, NotIn, Exists and DoesNotExist.

| `values`
| `array (string)`
-| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
+| values is an array of string values. If the operator is In or NotIn,
+the values array must be non-empty. If the operator is Exists or DoesNotExist,
+the values array must be empty. This array is replaced during a strategic
+merge patch.
|===

=== .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector

Description::
+
--
-A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
+A label query over the set of namespaces that the term applies to.
+The term is applied to the union of the namespaces selected by this field
+and the ones listed in the namespaces field.
+null selector and null or empty namespaces list means "this pod's namespace".
+An empty selector ({}) matches all namespaces.
--

Type::
@@ -782,11 +933,14 @@ Type::

| `matchExpressions[]`
| `object`
-| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+| A label selector requirement is a selector that contains values, a key, and an operator that
+relates the key and values.

| `matchLabels`
| `object (string)`
-| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
+| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
+map is equivalent to an element of matchExpressions, whose key field is "key", the
+operator is "In", and the values array contains only "value". The requirements are ANDed.
|===

=== .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions

Description::
+
--
-A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+A label selector requirement is a selector that contains values, a key, and an operator that
+relates the key and values.
--

Type::
@@ -828,18 +983,28 @@ Required::

| `operator`
| `string`
-| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
+| operator represents a key's relationship to a set of values.
+Valid operators are In, NotIn, Exists and DoesNotExist.

| `values`
| `array (string)`
-| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
+| values is an array of string values. If the operator is In or NotIn,
+the values array must be non-empty. If the operator is Exists or DoesNotExist,
+the values array must be empty. This array is replaced during a strategic
+merge patch.
|===

=== .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution

Description::
+
--
-If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
+If the affinity requirements specified by this field are not met at
+scheduling time, the pod will not be scheduled onto the node.
+If the affinity requirements specified by this field cease to be met
+at some point during pod execution (e.g. due to a pod label update), the
+system may or may not try to eventually evict the pod from its node.
+When there are multiple elements, the lists of nodes corresponding to each
+podAffinityTerm are intersected, i.e. all terms must be satisfied.
--

Type::
@@ -852,7 +1017,12 @@ Type::

Description::
+
--
-Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running
+Defines a set of pods (namely those matching the labelSelector
+relative to the given namespace(s)) that this pod should be
+co-located (affinity) or not co-located (anti-affinity) with,
+where co-located is defined as running on a node whose value of
+the label with key matches that of any node on which
+a pod of the set of pods is running
--

Type::
@@ -869,34 +1039,63 @@ Required::

| `labelSelector`
| `object`
-| A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
+| A label query over a set of resources, in this case pods.
+If it's null, this PodAffinityTerm matches with no Pods.

| `matchLabelKeys`
| `array (string)`
-| MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
+| MatchLabelKeys is a set of pod label keys to select which pods will
+be taken into consideration. The keys are used to lookup values from the
+incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)`
+to select the group of existing pods which pods will be taken into consideration
+for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming
+pod labels will be ignored. The default value is empty.
+The same key is forbidden to exist in both matchLabelKeys and labelSelector.
+Also, matchLabelKeys cannot be set when labelSelector isn't set.
+This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).

| `mismatchLabelKeys`
| `array (string)`
-| MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
+| MismatchLabelKeys is a set of pod label keys to select which pods will
+be taken into consideration. The keys are used to lookup values from the
+incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)`
+to select the group of existing pods which pods will be taken into consideration
+for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming
+pod labels will be ignored. The default value is empty.
+The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
+Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
+This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).

| `namespaceSelector`
| `object`
-| A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
+| A label query over the set of namespaces that the term applies to.
+The term is applied to the union of the namespaces selected by this field
+and the ones listed in the namespaces field.
+null selector and null or empty namespaces list means "this pod's namespace".
+An empty selector ({}) matches all namespaces.

| `namespaces`
| `array (string)`
-| namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
+| namespaces specifies a static list of namespace names that the term applies to.
+The term is applied to the union of the namespaces listed in this field
+and the ones selected by namespaceSelector.
+null or empty namespaces list and null namespaceSelector means "this pod's namespace".

| `topologyKey`
| `string`
-| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
+| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching
+the labelSelector in the specified namespaces, where co-located is defined as running on a node
+whose value of the label with key topologyKey matches that of any node on which any of the
+selected pods is running.
+Empty topologyKey is not allowed.
|===

=== .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector

Description::
+
--
-A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
+A label query over a set of resources, in this case pods.
+If it's null, this PodAffinityTerm matches with no Pods.
--

Type::
@@ -915,11 +1114,14 @@ Type::

| `matchExpressions[]`
| `object`
-| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+| A label selector requirement is a selector that contains values, a key, and an operator that
+relates the key and values.

| `matchLabels`
| `object (string)`
-| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
+| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
+map is equivalent to an element of matchExpressions, whose key field is "key", the
+operator is "In", and the values array contains only "value". The requirements are ANDed.
|===

=== .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions

Description::
+
--
-A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+A label selector requirement is a selector that contains values, a key, and an operator that
+relates the key and values.
--

Type::
@@ -961,18 +1164,26 @@ Required::

| `operator`
| `string`
-| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
+| operator represents a key's relationship to a set of values.
+Valid operators are In, NotIn, Exists and DoesNotExist.

| `values`
| `array (string)`
-| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
+| values is an array of string values. If the operator is In or NotIn,
+the values array must be non-empty. If the operator is Exists or DoesNotExist,
+the values array must be empty. This array is replaced during a strategic
+merge patch.
|===

=== .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector

Description::
+
--
-A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
+A label query over the set of namespaces that the term applies to.
+The term is applied to the union of the namespaces selected by this field
+and the ones listed in the namespaces field.
+null selector and null or empty namespaces list means "this pod's namespace".
+An empty selector ({}) matches all namespaces.
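+
+As an illustrative sketch (label names are hypothetical), a term can combine a static namespace list with a namespace label query; the union of both is used:
+
+[source,yaml]
+----
+namespaces:
+- openshift-image-registry
+namespaceSelector:
+  matchLabels:
+    environment: production
+----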
--

Type::
@@ -991,11 +1202,14 @@ Type::

| `matchExpressions[]`
| `object`
-| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+| A label selector requirement is a selector that contains values, a key, and an operator that
+relates the key and values.

| `matchLabels`
| `object (string)`
-| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
+| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
+map is equivalent to an element of matchExpressions, whose key field is "key", the
+operator is "In", and the values array contains only "value". The requirements are ANDed.
|===

=== .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions

Description::
+
--
-A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+A label selector requirement is a selector that contains values, a key, and an operator that
+relates the key and values.
--

Type::
@@ -1037,11 +1252,15 @@ Required::

| `operator`
| `string`
-| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
+| operator represents a key's relationship to a set of values.
+Valid operators are In, NotIn, Exists and DoesNotExist.

| `values`
| `array (string)`
-| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
+| values is an array of string values. If the operator is In or NotIn,
+the values array must be non-empty. If the operator is Exists or DoesNotExist,
+the values array must be empty. This array is replaced during a strategic
+merge patch.
|===

=== .spec.affinity.podAntiAffinity

Description::
+
--
@@ -1063,7 +1282,15 @@ Type::

| `preferredDuringSchedulingIgnoredDuringExecution`
| `array`
-| The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
+| The scheduler will prefer to schedule pods to nodes that satisfy
+the anti-affinity expressions specified by this field, but it may choose
+a node that violates one or more of the expressions. The node that is
+most preferred is the one with the greatest sum of weights, i.e.
+for each node that meets all of the scheduling requirements (resource
+request, requiredDuringScheduling anti-affinity expressions, etc.),
+compute a sum by iterating through the elements of this field and adding
+"weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the
+node(s) with the highest sum are the most preferred.

| `preferredDuringSchedulingIgnoredDuringExecution[]`
| `object`

| `requiredDuringSchedulingIgnoredDuringExecution`
| `array`
-| If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.
+| If the anti-affinity requirements specified by this field are not met at
+scheduling time, the pod will not be scheduled onto the node.
+If the anti-affinity requirements specified by this field cease to be met
+at some point during pod execution (e.g. due to a pod label update), the
+system may or may not try to eventually evict the pod from its node.
+When there are multiple elements, the lists of nodes corresponding to each
+podAffinityTerm are intersected, i.e. all terms must be satisfied.

| `requiredDuringSchedulingIgnoredDuringExecution[]`
| `object`
-| Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running
+| Defines a set of pods (namely those matching the labelSelector
+relative to the given namespace(s)) that this pod should be
+co-located (affinity) or not co-located (anti-affinity) with,
+where co-located is defined as running on a node whose value of
+the label with key matches that of any node on which
+a pod of the set of pods is running
|===

=== .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution

Description::
+
--
-The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.
+The scheduler will prefer to schedule pods to nodes that satisfy
+the anti-affinity expressions specified by this field, but it may choose
+a node that violates one or more of the expressions. The node that is
+most preferred is the one with the greatest sum of weights, i.e.
+for each node that meets all of the scheduling requirements (resource
+request, requiredDuringScheduling anti-affinity expressions, etc.),
+compute a sum by iterating through the elements of this field and adding
+"weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the
+node(s) with the highest sum are the most preferred.
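+
+An illustrative soft anti-affinity that prefers spreading matching pods across nodes (the app label value is hypothetical):
+
+[source,yaml]
+----
+spec:
+  affinity:
+    podAntiAffinity:
+      preferredDuringSchedulingIgnoredDuringExecution:
+      - weight: 100
+        podAffinityTerm:
+          topologyKey: kubernetes.io/hostname
+          labelSelector:
+            matchLabels:
+              app: image-registry
+----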
--

Type::
@@ -1117,7 +1363,8 @@ Required::

| `weight`
| `integer`
-| weight associated with matching the corresponding podAffinityTerm, in the range 1-100.
+| weight associated with matching the corresponding podAffinityTerm,
+in the range 1-100.
|===

=== .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm

Description::
+
--
@@ -1141,34 +1388,63 @@ Required::

| `labelSelector`
| `object`
-| A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
+| A label query over a set of resources, in this case pods.
+If it's null, this PodAffinityTerm matches with no Pods.

| `matchLabelKeys`
| `array (string)`
-| MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
+| MatchLabelKeys is a set of pod label keys to select which pods will
+be taken into consideration. The keys are used to lookup values from the
+incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)`
+to select the group of existing pods which pods will be taken into consideration
+for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming
+pod labels will be ignored. The default value is empty.
+The same key is forbidden to exist in both matchLabelKeys and labelSelector.
+Also, matchLabelKeys cannot be set when labelSelector isn't set.
+This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).

| `mismatchLabelKeys`
| `array (string)`
-| MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate.
+| MismatchLabelKeys is a set of pod label keys to select which pods will
+be taken into consideration. The keys are used to lookup values from the
+incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)`
+to select the group of existing pods which pods will be taken into consideration
+for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming
+pod labels will be ignored. The default value is empty.
+The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
+Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
+This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).

| `namespaceSelector`
| `object`
-| A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
+| A label query over the set of namespaces that the term applies to.
+The term is applied to the union of the namespaces selected by this field
+and the ones listed in the namespaces field.
+null selector and null or empty namespaces list means "this pod's namespace".
+An empty selector ({}) matches all namespaces.

| `namespaces`
| `array (string)`
-| namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".
+| namespaces specifies a static list of namespace names that the term applies to.
+The term is applied to the union of the namespaces listed in this field
+and the ones selected by namespaceSelector.
+null or empty namespaces list and null namespaceSelector means "this pod's namespace".

| `topologyKey`
| `string`
-| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
+| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching
+the labelSelector in the specified namespaces, where co-located is defined as running on a node
+whose value of the label with key topologyKey matches that of any node on which any of the
+selected pods is running.
+Empty topologyKey is not allowed.
|===

=== .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector

Description::
+
--
-A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.
+A label query over a set of resources, in this case pods.
+If it's null, this PodAffinityTerm matches with no Pods.
--

Type::
@@ -1187,11 +1463,14 @@ Type::

| `matchExpressions[]`
| `object`
-| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+| A label selector requirement is a selector that contains values, a key, and an operator that
+relates the key and values.

| `matchLabels`
| `object (string)`
-| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
+| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
+map is equivalent to an element of matchExpressions, whose key field is "key", the
+operator is "In", and the values array contains only "value". The requirements are ANDed.
|===

=== .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions

Description::
+
--
-A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+A label selector requirement is a selector that contains values, a key, and an operator that
+relates the key and values.
--

Type::
@@ -1233,11 +1513,26 @@ Required::

| `operator`
| `string`
-| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
+| operator represents a key's relationship to a set of values.
+Valid operators are In, NotIn, Exists and DoesNotExist.

| `values`
| `array (string)`
-| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
+| values is an array of string values. If the operator is In or NotIn,
+the values array must be non-empty. If the operator is Exists or DoesNotExist,
+the values array must be empty. This array is replaced during a strategic
+merge patch.
|===

=== .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector

Description::
+
--
-A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.
+A label query over the set of namespaces that the term applies to.
+The term is applied to the union of the namespaces selected by this field
+and the ones listed in the namespaces field.
+null selector and null or empty namespaces list means "this pod's namespace".
+An empty selector ({}) matches all namespaces.
--

Type::
@@ -1263,11 +1551,14 @@ Type::

| `matchExpressions[]`
| `object`
-| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+| A label selector requirement is a selector that contains values, a key, and an operator that
+relates the key and values.

| `matchLabels`
| `object (string)`
-| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
+| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
+map is equivalent to an element of matchExpressions, whose key field is "key", the
+operator is "In", and the values array contains only "value". The requirements are ANDed.
|===
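+
+For illustration, a matchLabels entry and its equivalent matchExpressions form (hypothetical label):
+
+[source,yaml]
+----
+matchLabels:
+  app: image-registry
+---
+# equivalent matchExpressions form:
+matchExpressions:
+- key: app
+  operator: In
+  values:
+  - image-registry
+----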
|=== === .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions @@ -1287,7 +1578,8 @@ Type:: Description:: + -- -A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. -- Type:: @@ -1309,18 +1601,28 @@ Required:: | `operator` | `string` -| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. +| operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. | `values` | `array (string)` -| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +| values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch. |=== === .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description:: + -- -If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +If the anti-affinity requirements specified by this field are not met at +scheduling time, the pod will not be scheduled onto the node. +If the anti-affinity requirements specified by this field cease to be met +at some point during pod execution (e.g. due to a pod label update), the +system may or may not try to eventually evict the pod from its node. +When there are multiple elements, the lists of nodes corresponding to each +podAffinityTerm are intersected, i.e. all terms must be satisfied. -- Type:: @@ -1333,7 +1635,12 @@ Type:: Description:: + -- -Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running +Defines a set of pods (namely those matching the labelSelector +relative to the given namespace(s)) that this pod should be +co-located (affinity) or not co-located (anti-affinity) with, +where co-located is defined as running on a node whose value of +the label with key matches that of any node on which +a pod of the set of pods is running -- Type:: @@ -1350,34 +1657,63 @@ Required:: | `labelSelector` | `object` -| A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. +| A label query over a set of resources, in this case pods. +If it's null, this PodAffinityTerm matches with no Pods. | `matchLabelKeys` | `array (string)` -| MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. 
The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. +| MatchLabelKeys is a set of pod label keys to select which pods will +be taken into consideration. The keys are used to lookup values from the +incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` +to select the group of existing pods which pods will be taken into consideration +for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming +pod labels will be ignored. The default value is empty. +The same key is forbidden to exist in both matchLabelKeys and labelSelector. +Also, matchLabelKeys cannot be set when labelSelector isn't set. +This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). | `mismatchLabelKeys` | `array (string)` -| MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. +| MismatchLabelKeys is a set of pod label keys to select which pods will +be taken into consideration. The keys are used to lookup values from the +incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` +to select the group of existing pods which pods will be taken into consideration +for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming +pod labels will be ignored. The default value is empty. +The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. +Also, mismatchLabelKeys cannot be set when labelSelector isn't set. +This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). | `namespaceSelector` | `object` -| A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. +| A label query over the set of namespaces that the term applies to. +The term is applied to the union of the namespaces selected by this field +and the ones listed in the namespaces field. +null selector and null or empty namespaces list means "this pod's namespace". +An empty selector ({}) matches all namespaces. | `namespaces` | `array (string)` -| namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". +| namespaces specifies a static list of namespace names that the term applies to. +The term is applied to the union of the namespaces listed in this field +and the ones selected by namespaceSelector. +null or empty namespaces list and null namespaceSelector means "this pod's namespace". | `topologyKey` | `string` -| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. +| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching +the labelSelector in the specified namespaces, where co-located is defined as running on a node +whose value of the label with key topologyKey matches that of any node on which any of the +selected pods is running. +Empty topologyKey is not allowed. |=== === .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description:: + -- -A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. +A label query over a set of resources, in this case pods. +If it's null, this PodAffinityTerm matches with no Pods. -- Type:: @@ -1396,11 +1732,14 @@ Type:: | `matchExpressions[]` | `object` -| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. | `matchLabels` | `object (string)` -| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. +| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed. |=== === .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions @@ -1420,7 +1759,8 @@ Type:: Description:: + -- -A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. -- Type:: @@ -1442,18 +1782,26 @@ Required:: | `operator` | `string` -| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. +| operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. | `values` | `array (string)` -| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +| values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. 
If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch. |=== === .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description:: + -- -A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. +A label query over the set of namespaces that the term applies to. +The term is applied to the union of the namespaces selected by this field +and the ones listed in the namespaces field. +null selector and null or empty namespaces list means "this pod's namespace". +An empty selector ({}) matches all namespaces. -- Type:: @@ -1472,11 +1820,14 @@ Type:: | `matchExpressions[]` | `object` -| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. | `matchLabels` | `object (string)` -| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. +| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed. |=== === .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions @@ -1496,7 +1847,8 @@ Type:: Description:: + -- -A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. -- Type:: @@ -1518,18 +1870,23 @@ Required:: | `operator` | `string` -| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. +| operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. | `values` | `array (string)` -| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +| values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch. |=== === .spec.proxy Description:: + -- -proxy defines the proxy to be used when calling master api, upstream registries, etc. +proxy defines the proxy to be used when calling master api, upstream +registries, etc. -- Type:: @@ -1544,22 +1901,26 @@ Type:: | `http` | `string` -| http defines the proxy to be used by the image registry when accessing HTTP endpoints. +| http defines the proxy to be used by the image registry when +accessing HTTP endpoints. 
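+// For illustration, a minimal spec.proxy sketch; the proxy hostname and port
+// are placeholder assumptions, not values taken from this reference:
+//
+//   apiVersion: imageregistry.operator.openshift.io/v1
+//   kind: Config
+//   metadata:
+//     name: cluster
+//   spec:
+//     proxy:
+//       http: http://proxy.example.com:3128
+//       https: http://proxy.example.com:3128
+//       noProxy: localhost,.cluster.local,.svc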
| `https` | `string` -| https defines the proxy to be used by the image registry when accessing HTTPS endpoints. +| https defines the proxy to be used by the image registry when +accessing HTTPS endpoints. | `noProxy` | `string` -| noProxy defines a comma-separated list of host names that shouldn't go through any proxy. +| noProxy defines a comma-separated list of host names that shouldn't +go through any proxy. |=== === .spec.requests Description:: + -- -requests controls how many parallel requests a given registry instance will handle before queuing additional requests. +requests controls how many parallel requests a given registry instance +will handle before queuing additional requests. -- Type:: @@ -1608,7 +1969,8 @@ Type:: | `maxWaitInQueue` | `string` -| maxWaitInQueue sets the maximum time a request can wait in the queue before being rejected. +| maxWaitInQueue sets the maximum time a request can wait in the queue +before being rejected. |=== === .spec.requests.write @@ -1638,7 +2000,8 @@ Type:: | `maxWaitInQueue` | `string` -| maxWaitInQueue sets the maximum time a request can wait in the queue before being rejected. +| maxWaitInQueue sets the maximum time a request can wait in the queue +before being rejected. |=== === .spec.resources @@ -1660,9 +2023,13 @@ Type:: | `claims` | `array` -| Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. +| Claims lists the names of resources, defined in spec.resourceClaims, +that are used by this container. + +This is an alpha field and requires enabling the +DynamicResourceAllocation feature gate. + +This field is immutable. It can only be set for containers. | `claims[]` | `object` @@ -1670,20 +2037,28 @@ Type:: | `limits` | `integer-or-string` -| Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ +| Limits describes the maximum amount of compute resources allowed. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | `requests` | `integer-or-string` -| Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ +| Requests describes the minimum amount of compute resources required. +If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, +otherwise to an implementation-defined value. Requests cannot exceed Limits. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |=== === .spec.resources.claims Description:: + -- -Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. +Claims lists the names of resources, defined in spec.resourceClaims, +that are used by this container. + +This is an alpha field and requires enabling the +DynamicResourceAllocation feature gate. + +This field is immutable. It can only be set for containers. 
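+// For illustration, a minimal spec.resources sketch; the sizes are placeholder
+// assumptions (claims is omitted, since it is alpha and feature-gated as
+// described above):
+//
+//   spec:
+//     resources:
+//       requests:
+//         cpu: 100m
+//         memory: 256Mi
+//       limits:
+//         memory: 512Mi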
-- Type:: @@ -1713,14 +2088,23 @@ Required:: | `name` | `string` -| Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. +| Name must match the name of one entry in pod.spec.resourceClaims of +the Pod where this field is used. It makes that resource available +inside a container. + +| `request` +| `string` +| Request is the name chosen for a request in the referenced claim. +If empty, everything from the claim is made available, otherwise +only the result of this request. |=== === .spec.routes Description:: + -- -routes defines additional external facing routes which should be created for the registry. +routes defines additional external facing routes which should be +created for the registry. -- Type:: @@ -1733,7 +2117,8 @@ Type:: Description:: + -- -ImageRegistryConfigRoute holds information on external route access to image registry. +ImageRegistryConfigRoute holds information on external route access to image +registry. -- Type:: @@ -1758,14 +2143,16 @@ Required:: | `secretName` | `string` -| secretName points to secret containing the certificates to be used by the route. +| secretName points to secret containing the certificates to be used +by the route. |=== === .spec.storage Description:: + -- -storage details for configuring registry storage, e.g. S3 bucket coordinates. +storage details for configuring registry storage, e.g. S3 bucket +coordinates. -- Type:: @@ -1784,7 +2171,10 @@ Type:: | `emptyDir` | `object` -| emptyDir represents ephemeral storage on the pod's host node. WARNING: this storage cannot be used with more than 1 replica and is not suitable for production use. When the pod is removed from a node for any reason, the data in the emptyDir is deleted forever. +| emptyDir represents ephemeral storage on the pod's host node. +WARNING: this storage cannot be used with more than 1 replica and +is not suitable for production use. When the pod is removed from a +node for any reason, the data in the emptyDir is deleted forever. | `gcs` | `object` @@ -1796,7 +2186,9 @@ Type:: | `managementState` | `string` -| managementState indicates if the operator manages the underlying storage unit. If Managed the operator will remove the storage when this operator gets Removed. +| managementState indicates if the operator manages the underlying +storage unit. If Managed the operator will remove the storage when +this operator gets Removed. | `oss` | `object` @@ -1838,7 +2230,9 @@ Type:: | `cloudName` | `string` -| cloudName is the name of the Azure cloud environment to be used by the registry. If empty, the operator will set it based on the infrastructure object. +| cloudName is the name of the Azure cloud environment to be used by the +registry. If empty, the operator will set it based on the infrastructure +object. | `container` | `string` @@ -1846,14 +2240,16 @@ Type:: | `networkAccess` | `object` -| networkAccess defines the network access properties for the storage account. Defaults to type: External. +| networkAccess defines the network access properties for the storage account. +Defaults to type: External. |=== === .spec.storage.azure.networkAccess Description:: + -- -networkAccess defines the network access properties for the storage account. Defaults to type: External. +networkAccess defines the network access properties for the storage account. +Defaults to type: External. 
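+// For illustration, a spec.storage.azure sketch that makes the storage account
+// private; the vnet and subnet names are placeholder assumptions:
+//
+//   spec:
+//     storage:
+//       azure:
+//         networkAccess:
+//           type: Internal
+//           internal:
+//             vnetName: example-vnet
+//             subnetName: example-subnet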
--

Type::
  `object`



[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `internal`
| `object`
-| internal defines the vnet and subnet names to configure a private endpoint and connect it to the storage account in order to make it private. when type: Internal and internal is unset, the image registry operator will discover vnet and subnet names, and generate a private endpoint name.
+| internal defines the vnet and subnet names to configure a private
+endpoint and connect it to the storage account in order to make it
+private.
+when type: Internal and internal is unset, the image registry operator
+will discover vnet and subnet names, and generate a private endpoint
+name.

| `type`
| `string`
-| type is the network access level to be used for the storage account. type: Internal means the storage account will be private, type: External means the storage account will be publicly accessible. Internal storage accounts are only exposed within the cluster's vnet. External storage accounts are publicly exposed on the internet. When type: Internal is used, a vnetName, subNetName and privateEndpointName may optionally be specified. If unspecificed, the image registry operator will discover vnet and subnet names, and generate a privateEndpointName. Defaults to "External".
+| type is the network access level to be used for the storage account.
+type: Internal means the storage account will be private, type: External
+means the storage account will be publicly accessible.
+Internal storage accounts are only exposed within the cluster's vnet.
+External storage accounts are publicly exposed on the internet.
+When type: Internal is used, a vnetName, subnetName and privateEndpointName
+may optionally be specified. If unspecified, the image registry operator
+will discover vnet and subnet names, and generate a privateEndpointName.
+Defaults to "External".
|===

=== .spec.storage.azure.networkAccess.internal
Description::
+
--
-internal defines the vnet and subnet names to configure a private endpoint and connect it to the storage account in order to make it private. when type: Internal and internal is unset, the image registry operator will discover vnet and subnet names, and generate a private endpoint name.
+internal defines the vnet and subnet names to configure a private
+endpoint and connect it to the storage account in order to make it
+private.
+when type: Internal and internal is unset, the image registry operator
+will discover vnet and subnet names, and generate a private endpoint
+name.
--

Type::
  `object`



[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `networkResourceGroupName`
| `string`
-| networkResourceGroupName is the resource group name where the cluster's vnet and subnet are. When omitted, the registry operator will use the cluster resource group (from in the infrastructure status). If you set a networkResourceGroupName on your install-config.yaml, that value will be used automatically (for clusters configured with publish:Internal). Note that both vnet and subnet must be in the same resource group. It must be between 1 and 90 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_), and not end with a period.
+| networkResourceGroupName is the resource group name where the cluster's vnet
+and subnet are. When omitted, the registry operator will use the cluster
+resource group (from the infrastructure status).
+If you set a networkResourceGroupName on your install-config.yaml, that
+value will be used automatically (for clusters configured with publish:Internal).
+Note that both vnet and subnet must be in the same resource group. +It must be between 1 and 90 characters in length and must consist only of +alphanumeric characters, hyphens (-), periods (.) and underscores (_), and +not end with a period. | `privateEndpointName` | `string` -| privateEndpointName is the name of the private endpoint for the registry. When provided, the registry will use it as the name of the private endpoint it will create for the storage account. When omitted, the registry will generate one. It must be between 2 and 64 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). It must start with an alphanumeric character and end with an alphanumeric character or an underscore. +| privateEndpointName is the name of the private endpoint for the registry. +When provided, the registry will use it as the name of the private endpoint +it will create for the storage account. When omitted, the registry will +generate one. +It must be between 2 and 64 characters in length and must consist only of +alphanumeric characters, hyphens (-), periods (.) and underscores (_). +It must start with an alphanumeric character and end with an alphanumeric character or an underscore. | `subnetName` | `string` -| subnetName is the name of the subnet the registry operates in. When omitted, the registry operator will discover and set this by using the `kubernetes.io_cluster.` tag in the vnet resource, then using one of listed subnets. Advanced cluster network configurations that use network security groups to protect subnets should ensure the provided subnetName has access to Azure Storage service. It must be between 1 and 80 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). +| subnetName is the name of the subnet the registry operates in. When omitted, +the registry operator will discover and set this by using the `kubernetes.io_cluster.` +tag in the vnet resource, then using one of listed subnets. +Advanced cluster network configurations that use network security groups +to protect subnets should ensure the provided subnetName has access to +Azure Storage service. +It must be between 1 and 80 characters in length and must consist only of +alphanumeric characters, hyphens (-), periods (.) and underscores (_). | `vnetName` | `string` -| vnetName is the name of the vnet the registry operates in. When omitted, the registry operator will discover and set this by using the `kubernetes.io_cluster.` tag in the vnet resource. This tag is set automatically by the installer. Commonly, this will be the same vnet as the cluster. Advanced cluster network configurations should ensure the provided vnetName is the vnet of the nodes where the image registry pods are running from. It must be between 2 and 64 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). It must start with an alphanumeric character and end with an alphanumeric character or an underscore. +| vnetName is the name of the vnet the registry operates in. When omitted, +the registry operator will discover and set this by using the `kubernetes.io_cluster.` +tag in the vnet resource. This tag is set automatically by the installer. +Commonly, this will be the same vnet as the cluster. +Advanced cluster network configurations should ensure the provided vnetName +is the vnet of the nodes where the image registry pods are running from. 
+It must be between 2 and 64 characters in length and must consist only of
+alphanumeric characters, hyphens (-), periods (.) and underscores (_).
+It must start with an alphanumeric character and end with an alphanumeric character or an underscore.
|===

=== .spec.storage.emptyDir
Description::
+
--
-emptyDir represents ephemeral storage on the pod's host node. WARNING: this storage cannot be used with more than 1 replica and is not suitable for production use. When the pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
+emptyDir represents ephemeral storage on the pod's host node.
+WARNING: this storage cannot be used with more than 1 replica and
+is not suitable for production use. When the pod is removed from a
+node for any reason, the data in the emptyDir is deleted forever.
--

Type::
  `object`



=== .spec.storage.gcs
Description::
+
--

--

Type::
  `object`



[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `bucket`
| `string`
-| bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided.
+| bucket is the bucket name in which you want to store the registry's
+data.
+Optional, will be generated if not provided.

| `keyID`
| `string`
-| keyID is the KMS key ID to use for encryption. Optional, buckets are encrypted by default on GCP. This allows for the use of a custom encryption key.
+| keyID is the KMS key ID to use for encryption.
+Optional, buckets are encrypted by default on GCP.
+This allows for the use of a custom encryption key.

| `projectID`
| `string`
-| projectID is the Project ID of the GCP project that this bucket should be associated with.
+| projectID is the Project ID of the GCP project that this bucket should
+be associated with.

| `region`
| `string`
-| region is the GCS location in which your bucket exists. Optional, will be set based on the installed GCS Region.
+| region is the GCS location in which your bucket exists.
+Optional, will be set based on the installed GCS Region.
|===

=== .spec.storage.ibmcos
Description::
+
--

--

Type::
  `object`



[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `bucket`
| `string`
-| bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided.
+| bucket is the bucket name in which you want to store the registry's
+data.
+Optional, will be generated if not provided.

| `location`
| `string`
-| location is the IBM Cloud location in which your bucket exists. Optional, will be set based on the installed IBM Cloud location.
+| location is the IBM Cloud location in which your bucket exists.
+Optional, will be set based on the installed IBM Cloud location.

| `resourceGroupName`
| `string`
-| resourceGroupName is the name of the IBM Cloud resource group that this bucket and its service instance is associated with. Optional, will be set based on the installed IBM Cloud resource group.
+| resourceGroupName is the name of the IBM Cloud resource group that this
+bucket and its service instance are associated with.
+Optional, will be set based on the installed IBM Cloud resource group.

| `resourceKeyCRN`
| `string`
-| resourceKeyCRN is the CRN of the IBM Cloud resource key that is created for the service instance. Commonly referred as a service credential and must contain HMAC type credentials. Optional, will be computed if not provided.
+| resourceKeyCRN is the CRN of the IBM Cloud resource key that is created
+for the service instance. Commonly referred to as a service credential and
+must contain HMAC type credentials.
+Optional, will be computed if not provided.
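+// For illustration, a minimal spec.storage.ibmcos sketch; the bucket name,
+// location, and resource group are placeholder assumptions, and the CRN
+// fields are left for the operator to compute:
+//
+//   spec:
+//     storage:
+//       ibmcos:
+//         bucket: example-registry-bucket
+//         location: us-south
+//         resourceGroupName: example-resource-group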
| `serviceInstanceCRN`
| `string`
-| serviceInstanceCRN is the CRN of the IBM Cloud Object Storage service instance that this bucket is associated with. Optional, will be computed if not provided.
+| serviceInstanceCRN is the CRN of the IBM Cloud Object Storage service
+instance that this bucket is associated with.
+Optional, will be computed if not provided.
|===

=== .spec.storage.oss
Description::
+
--

--

Type::
  `object`



[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `bucket`
| `string`
-| Bucket is the bucket name in which you want to store the registry's data. About Bucket naming, more details you can look at the [official documentation](https://www.alibabacloud.com/help/doc-detail/257087.htm) Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default will be autogenerated in the form of -image-registry--
+| Bucket is the bucket name in which you want to store the registry's data.
+For more details about bucket naming, you can look at the [official documentation](https://www.alibabacloud.com/help/doc-detail/257087.htm)
+Empty value means no opinion and the platform chooses a default, which is subject to change over time.
+Currently the default will be autogenerated in the form of -image-registry--

| `encryption`
| `object`
-| Encryption specifies whether you would like your data encrypted on the server side. More details, you can look cat the [official documentation](https://www.alibabacloud.com/help/doc-detail/117914.htm)
+| Encryption specifies whether you would like your data encrypted on the server side.
+For more details, you can look at the [official documentation](https://www.alibabacloud.com/help/doc-detail/117914.htm)

| `endpointAccessibility`
| `string`
-| EndpointAccessibility specifies whether the registry use the OSS VPC internal endpoint Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default is `Internal`.
+| EndpointAccessibility specifies whether the registry uses the OSS VPC internal endpoint.
+Empty value means no opinion and the platform chooses a default, which is subject to change over time.
+Currently the default is `Internal`.

| `region`
| `string`
-| Region is the Alibaba Cloud Region in which your bucket exists. For a list of regions, you can look at the [official documentation](https://www.alibabacloud.com/help/doc-detail/31837.html). Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default will be based on the installed Alibaba Cloud Region.
+| Region is the Alibaba Cloud Region in which your bucket exists.
+For a list of regions, you can look at the [official documentation](https://www.alibabacloud.com/help/doc-detail/31837.html).
+Empty value means no opinion and the platform chooses a default, which is subject to change over time.
+Currently the default will be based on the installed Alibaba Cloud Region.
|===

=== .spec.storage.oss.encryption
Description::
+
--
-Encryption specifies whether you would like your data encrypted on the server side. More details, you can look cat the [official documentation](https://www.alibabacloud.com/help/doc-detail/117914.htm)
+Encryption specifies whether you would like your data encrypted on the server side.
+For more details, you can look at the [official documentation](https://www.alibabacloud.com/help/doc-detail/117914.htm)
--

Type::
  `object`



[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `method`
| `string`
-| Method defines the different encrytion modes available Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default is `AES256`.
+| Method defines the different encryption modes available.
+Empty value means no opinion and the platform chooses a default, which is subject to change over time.
+Currently the default is `AES256`.
|===

=== .spec.storage.oss.encryption.kms
Description::
+
--

--

Type::
  `object`



[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `keyID`
| `string`
| KeyID holds the KMS encryption key ID
|===

=== .spec.storage.s3
Description::
+
--

--

Type::
  `object`



[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `bucket`
| `string`
-| bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided.
+| bucket is the bucket name in which you want to store the registry's
+data.
+Optional, will be generated if not provided.

| `chunkSizeMiB`
| `integer`
-| chunkSizeMiB defines the size of the multipart upload chunks of the S3 API. The S3 API requires multipart upload chunks to be at least 5MiB. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default value is 10 MiB. The value is an integer number of MiB. The minimum value is 5 and the maximum value is 5120 (5 GiB).
+| chunkSizeMiB defines the size of the multipart upload chunks of the S3 API.
+The S3 API requires multipart upload chunks to be at least 5MiB.
+When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time.
+The current default value is 10 MiB.
+The value is an integer number of MiB.
+The minimum value is 5 and the maximum value is 5120 (5 GiB).

| `cloudFront`
| `object`
-| cloudFront configures Amazon Cloudfront as the storage middleware in a registry.
+| cloudFront configures Amazon Cloudfront as the storage middleware in a
+registry.

| `encrypt`
| `boolean`
-| encrypt specifies whether the registry stores the image in encrypted format or not. Optional, defaults to false.
+| encrypt specifies whether the registry stores the image in encrypted
+format or not.
+Optional, defaults to false.

| `keyID`
| `string`
-| keyID is the KMS key ID to use for encryption. Optional, Encrypt must be true, or this parameter is ignored.
+| keyID is the KMS key ID to use for encryption.
+Optional, Encrypt must be true, or this parameter is ignored.

| `region`
| `string`
-| region is the AWS region in which your bucket exists. Optional, will be set based on the installed AWS Region.
+| region is the AWS region in which your bucket exists.
+Optional, will be set based on the installed AWS Region.

| `regionEndpoint`
| `string`
-| regionEndpoint is the endpoint for S3 compatible storage services. It should be a valid URL with scheme, e.g. https://s3.example.com. Optional, defaults based on the Region that is provided.
+| regionEndpoint is the endpoint for S3 compatible storage services.
+It should be a valid URL with scheme, e.g. https://s3.example.com.
+Optional, defaults based on the Region that is provided.

| `trustedCA`
| `object`
-| trustedCA is a reference to a config map containing a CA bundle. The image registry and its operator use certificates from this bundle to verify S3 server certificates.
- The namespace for the config map referenced by trustedCA is "openshift-config". The key for the bundle in the config map is "ca-bundle.crt".
+| trustedCA is a reference to a config map containing a CA bundle. The +image registry and its operator use certificates from this bundle to +verify S3 server certificates. + +The namespace for the config map referenced by trustedCA is +"openshift-config". The key for the bundle in the config map is +"ca-bundle.crt". | `virtualHostedStyle` | `boolean` -| virtualHostedStyle enables using S3 virtual hosted style bucket paths with a custom RegionEndpoint Optional, defaults to false. +| virtualHostedStyle enables using S3 virtual hosted style bucket paths with +a custom RegionEndpoint +Optional, defaults to false. |=== === .spec.storage.s3.cloudFront Description:: + -- -cloudFront configures Amazon Cloudfront as the storage middleware in a registry. +cloudFront configures Amazon Cloudfront as the storage middleware in a +registry. -- Type:: @@ -2218,7 +2714,11 @@ Required:: | `name` | `string` -| Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop `kubebuilder:default` when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896. +| Name of the referent. +This field is effectively required, but due to backwards compatibility is +allowed to be empty. Instances of this type with an empty value here are +almost certainly wrong. +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names | `optional` | `boolean` @@ -2229,8 +2729,13 @@ Required:: Description:: + -- -trustedCA is a reference to a config map containing a CA bundle. The image registry and its operator use certificates from this bundle to verify S3 server certificates. - The namespace for the config map referenced by trustedCA is "openshift-config". The key for the bundle in the config map is "ca-bundle.crt". +trustedCA is a reference to a config map containing a CA bundle. The +image registry and its operator use certificates from this bundle to +verify S3 server certificates. + +The namespace for the config map referenced by trustedCA is +"openshift-config". The key for the bundle in the config map is +"ca-bundle.crt". -- Type:: @@ -2245,7 +2750,12 @@ Type:: | `name` | `string` -| name is the metadata.name of the referenced config map. This field must adhere to standard config map naming restrictions. The name must consist solely of alphanumeric characters, hyphens (-) and periods (.). It has a maximum length of 253 characters. If this field is not specified or is empty string, the default trust bundle will be used. +| name is the metadata.name of the referenced config map. +This field must adhere to standard config map naming restrictions. +The name must consist solely of alphanumeric characters, hyphens (-) +and periods (.). It has a maximum length of 253 characters. +If this field is not specified or is empty string, the default trust +bundle will be used. |=== === .spec.storage.swift @@ -2275,7 +2785,8 @@ Type:: | `container` | `string` -| container defines the name of Swift container where to store the registry's data. +| container defines the name of Swift container where to store the +registry's data. 
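+// For illustration, a minimal spec.storage.swift sketch; the container and
+// domain values are placeholder assumptions:
+//
+//   spec:
+//     storage:
+//       swift:
+//         container: example-registry-container
+//         domain: Default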
| `domain` | `string` @@ -2315,7 +2826,8 @@ Type:: Description:: + -- -The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator . +The pod this Toleration is attached to tolerates any taint that matches +the triple using the matching operator . -- Type:: @@ -2330,23 +2842,32 @@ Type:: | `effect` | `string` -| Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. +| Effect indicates the taint effect to match. Empty means match all taint effects. +When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. | `key` | `string` -| Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. +| Key is the taint key that the toleration applies to. Empty means match all taint keys. +If the key is empty, operator must be Exists; this combination means to match all values and all keys. | `operator` | `string` -| Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. +| Operator represents a key's relationship to the value. +Valid operators are Exists and Equal. Defaults to Equal. +Exists is equivalent to wildcard for value, so that a pod can +tolerate all taints of a particular category. | `tolerationSeconds` | `integer` -| TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. +| TolerationSeconds represents the period of time the toleration (which must be +of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, +it is not set, which means tolerate the taint forever (do not evict). Zero and +negative values will be treated as 0 (evict immediately) by the system. | `value` | `string` -| Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. +| Value is the taint value the toleration matches to. +If the operator is Exists, the value should be empty, otherwise just a regular string. |=== === .spec.topologySpreadConstraints @@ -2385,46 +2906,128 @@ Required:: | `labelSelector` | `object` -| LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. +| LabelSelector is used to find matching pods. +Pods that match this label selector are counted to determine the number of pods +in their corresponding topology domain. | `matchLabelKeys` | `array (string)` -| MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. 
A null or empty list means only match against labelSelector. - This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). +| MatchLabelKeys is a set of pod label keys to select the pods over which +spreading will be calculated. The keys are used to lookup values from the +incoming pod labels, those key-value labels are ANDed with labelSelector +to select the group of existing pods over which spreading will be calculated +for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. +MatchLabelKeys cannot be set when LabelSelector isn't set. +Keys that don't exist in the incoming pod labels will +be ignored. A null or empty list means only match against labelSelector. + +This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). | `maxSkew` | `integer` -| MaxSkew describes the degree to which pods may be unevenly distributed. When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. \| zone1 \| zone2 \| zone3 \| \| P P \| P P \| P \| - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. +| MaxSkew describes the degree to which pods may be unevenly distributed. +When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference +between the number of matching pods in the target topology and the global minimum. +The global minimum is the minimum number of matching pods in an eligible domain +or zero if the number of eligible domains is less than MinDomains. +For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same +labelSelector spread as 2/2/1: +In this case, the global minimum is 1. +\| zone1 \| zone2 \| zone3 \| +\| P P \| P P \| P \| +- if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; +scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) +violate MaxSkew(1). +- if MaxSkew is 2, incoming pod can be scheduled onto any zone. +When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence +to topologies that satisfy it. +It's a required field. Default value is 1 and 0 is not allowed. | `minDomains` | `integer` -| MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. 
If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. - For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: \| zone1 \| zone2 \| zone3 \| \| P P \| P P \| P P \| The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. +| MinDomains indicates a minimum number of eligible domains. +When the number of eligible domains with matching topology keys is less than minDomains, +Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. +And when the number of eligible domains with matching topology keys equals or greater than minDomains, +this value has no effect on scheduling. +As a result, when the number of eligible domains is less than minDomains, +scheduler won't schedule more than maxSkew Pods to those domains. +If value is nil, the constraint behaves as if MinDomains is equal to 1. +Valid values are integers greater than 0. +When value is not nil, WhenUnsatisfiable must be DoNotSchedule. + +For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same +labelSelector spread as 2/2/2: +\| zone1 \| zone2 \| zone3 \| +\| P P \| P P \| P P \| +The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. +In this situation, new pod with the same labelSelector cannot be scheduled, +because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, +it will violate MaxSkew. | `nodeAffinityPolicy` | `string` -| NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. - If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. +| NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector +when calculating pod topology spread skew. Options are: +- Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. +- Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. + +If this value is nil, the behavior is equivalent to the Honor policy. +This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. | `nodeTaintsPolicy` | `string` -| NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. - If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. +| NodeTaintsPolicy indicates how we will treat node taints when calculating +pod topology spread skew. 
Options are: +- Honor: nodes without taints, along with tainted nodes for which the incoming pod +has a toleration, are included. +- Ignore: node taints are ignored. All nodes are included. + +If this value is nil, the behavior is equivalent to the Ignore policy. +This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. | `topologyKey` | `string` -| TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. +| TopologyKey is the key of node labels. Nodes that have a label with this key +and identical values are considered to be in the same topology. +We consider each as a "bucket", and try to put balanced number +of pods into each bucket. +We define a domain as a particular instance of a topology. +Also, we define an eligible domain as a domain whose nodes meet the requirements of +nodeAffinityPolicy and nodeTaintsPolicy. +e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. +And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. +It's a required field. | `whenUnsatisfiable` | `string` -| WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: \| zone1 \| zone2 \| zone3 \| \| P P P \| P \| P \| If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it *more* imbalanced. It's a required field. +| WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy +the spread constraint. +- DoNotSchedule (default) tells the scheduler not to schedule it. +- ScheduleAnyway tells the scheduler to schedule the pod in any location, + but giving higher precedence to topologies that would help reduce the + skew. +A constraint is considered "Unsatisfiable" for an incoming pod +if and only if every possible node assignment for that pod would violate +"MaxSkew" on some topology. +For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same +labelSelector spread as 3/1/1: +\| zone1 \| zone2 \| zone3 \| +\| P P P \| P \| P \| +If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled +to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies +MaxSkew(1). 
In other words, the cluster can still be imbalanced, but scheduler +won't make it *more* imbalanced. +It's a required field. |=== === .spec.topologySpreadConstraints[].labelSelector Description:: + -- -LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. +LabelSelector is used to find matching pods. +Pods that match this label selector are counted to determine the number of pods +in their corresponding topology domain. -- Type:: @@ -2443,11 +3046,14 @@ Type:: | `matchExpressions[]` | `object` -| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. | `matchLabels` | `object (string)` -| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. +| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed. |=== === .spec.topologySpreadConstraints[].labelSelector.matchExpressions @@ -2467,7 +3073,8 @@ Type:: Description:: + -- -A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. -- Type:: @@ -2489,11 +3096,15 @@ Required:: | `operator` | `string` -| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. +| operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. | `values` | `array (string)` -| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +| values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch. |=== === .status @@ -2532,6 +3143,10 @@ Required:: | `object` | GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. +| `latestAvailableRevision` +| `integer` +| latestAvailableRevision is the deploymentID of the most recent deployment + | `observedGeneration` | `integer` | observedGeneration is the last generation change you've dealt with @@ -2542,7 +3157,8 @@ Required:: | `storage` | `object` -| storage indicates the current applied storage configuration of the registry. +| storage indicates the current applied storage configuration of the +registry. | `storageManaged` | `boolean` @@ -2577,6 +3193,8 @@ Type:: `object` Required:: + - `lastTransitionTime` + - `status` - `type` @@ -2587,7 +3205,8 @@ Required:: | `lastTransitionTime` | `string` -| +| lastTransitionTime is the last time the condition transitioned from one status to another. 
+This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. | `message` | `string` @@ -2599,11 +3218,11 @@ Required:: | `status` | `string` -| +| status of the condition, one of True, False, Unknown. | `type` | `string` -| +| type of condition in CamelCase or in foo.example.com/CamelCase. |=== === .status.generations @@ -2629,6 +3248,11 @@ GenerationStatus keeps track of the generation for a given resource so that deci Type:: `object` +Required:: + - `group` + - `name` + - `namespace` + - `resource` @@ -2665,7 +3289,8 @@ Type:: Description:: + -- -storage indicates the current applied storage configuration of the registry. +storage indicates the current applied storage configuration of the +registry. -- Type:: @@ -2684,7 +3309,10 @@ Type:: | `emptyDir` | `object` -| emptyDir represents ephemeral storage on the pod's host node. WARNING: this storage cannot be used with more than 1 replica and is not suitable for production use. When the pod is removed from a node for any reason, the data in the emptyDir is deleted forever. +| emptyDir represents ephemeral storage on the pod's host node. +WARNING: this storage cannot be used with more than 1 replica and +is not suitable for production use. When the pod is removed from a +node for any reason, the data in the emptyDir is deleted forever. | `gcs` | `object` @@ -2696,7 +3324,9 @@ Type:: | `managementState` | `string` -| managementState indicates if the operator manages the underlying storage unit. If Managed the operator will remove the storage when this operator gets Removed. +| managementState indicates if the operator manages the underlying +storage unit. If Managed the operator will remove the storage when +this operator gets Removed. | `oss` | `object` @@ -2738,7 +3368,9 @@ Type:: | `cloudName` | `string` -| cloudName is the name of the Azure cloud environment to be used by the registry. If empty, the operator will set it based on the infrastructure object. +| cloudName is the name of the Azure cloud environment to be used by the +registry. If empty, the operator will set it based on the infrastructure +object. | `container` | `string` @@ -2746,14 +3378,16 @@ Type:: | `networkAccess` | `object` -| networkAccess defines the network access properties for the storage account. Defaults to type: External. +| networkAccess defines the network access properties for the storage account. +Defaults to type: External. |=== === .status.storage.azure.networkAccess Description:: + -- -networkAccess defines the network access properties for the storage account. Defaults to type: External. +networkAccess defines the network access properties for the storage account. +Defaults to type: External. -- Type:: @@ -2768,18 +3402,36 @@ Type:: | `internal` | `object` -| internal defines the vnet and subnet names to configure a private endpoint and connect it to the storage account in order to make it private. when type: Internal and internal is unset, the image registry operator will discover vnet and subnet names, and generate a private endpoint name. +| internal defines the vnet and subnet names to configure a private +endpoint and connect it to the storage account in order to make it +private. +when type: Internal and internal is unset, the image registry operator +will discover vnet and subnet names, and generate a private endpoint +name. | `type` | `string` -| type is the network access level to be used for the storage account. 
type: Internal means the storage account will be private, type: External means the storage account will be publicly accessible. Internal storage accounts are only exposed within the cluster's vnet. External storage accounts are publicly exposed on the internet. When type: Internal is used, a vnetName, subNetName and privateEndpointName may optionally be specified. If unspecificed, the image registry operator will discover vnet and subnet names, and generate a privateEndpointName. Defaults to "External".
+| type is the network access level to be used for the storage account.
+type: Internal means the storage account will be private, type: External
+means the storage account will be publicly accessible.
+Internal storage accounts are only exposed within the cluster's vnet.
+External storage accounts are publicly exposed on the internet.
+When type: Internal is used, a vnetName, subnetName and privateEndpointName
+may optionally be specified. If unspecified, the image registry operator
+will discover vnet and subnet names, and generate a privateEndpointName.
+Defaults to "External".
|===

=== .status.storage.azure.networkAccess.internal
Description::
+
--
-internal defines the vnet and subnet names to configure a private endpoint and connect it to the storage account in order to make it private. when type: Internal and internal is unset, the image registry operator will discover vnet and subnet names, and generate a private endpoint name.
+internal defines the vnet and subnet names to configure a private
+endpoint and connect it to the storage account in order to make it
+private.
+when type: Internal and internal is unset, the image registry operator
+will discover vnet and subnet names, and generate a private endpoint
+name.
--

Type::
  `object`



[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `networkResourceGroupName`
| `string`
-| networkResourceGroupName is the resource group name where the cluster's vnet and subnet are. When omitted, the registry operator will use the cluster resource group (from in the infrastructure status). If you set a networkResourceGroupName on your install-config.yaml, that value will be used automatically (for clusters configured with publish:Internal). Note that both vnet and subnet must be in the same resource group. It must be between 1 and 90 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_), and not end with a period.
+| networkResourceGroupName is the resource group name where the cluster's vnet
+and subnet are. When omitted, the registry operator will use the cluster
+resource group (from the infrastructure status).
+If you set a networkResourceGroupName on your install-config.yaml, that
+value will be used automatically (for clusters configured with publish:Internal).
+Note that both vnet and subnet must be in the same resource group.
+It must be between 1 and 90 characters in length and must consist only of
+alphanumeric characters, hyphens (-), periods (.) and underscores (_), and
+not end with a period.

| `privateEndpointName`
| `string`
-| privateEndpointName is the name of the private endpoint for the registry. When provided, the registry will use it as the name of the private endpoint it will create for the storage account. When omitted, the registry will generate one. It must be between 2 and 64 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). It must start with an alphanumeric character and end with an alphanumeric character or an underscore.
+| privateEndpointName is the name of the private endpoint for the registry.
+When provided, the registry will use it as the name of the private endpoint
+it will create for the storage account. When omitted, the registry will
+generate one.
+It must be between 2 and 64 characters in length and must consist only of
+alphanumeric characters, hyphens (-), periods (.) and underscores (_).
+It must start with an alphanumeric character and end with an alphanumeric character or an underscore.

| `subnetName`
| `string`
-| subnetName is the name of the subnet the registry operates in. When omitted, the registry operator will discover and set this by using the `kubernetes.io_cluster.` tag in the vnet resource, then using one of listed subnets. Advanced cluster network configurations that use network security groups to protect subnets should ensure the provided subnetName has access to Azure Storage service. It must be between 1 and 80 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_).
+| subnetName is the name of the subnet the registry operates in. When omitted,
+the registry operator will discover and set this by using the `kubernetes.io_cluster.`
+tag in the vnet resource, then using one of listed subnets.
+Advanced cluster network configurations that use network security groups
+to protect subnets should ensure the provided subnetName has access to
+Azure Storage service.
+It must be between 1 and 80 characters in length and must consist only of
+alphanumeric characters, hyphens (-), periods (.) and underscores (_).

| `vnetName`
| `string`
-| vnetName is the name of the vnet the registry operates in. When omitted, the registry operator will discover and set this by using the `kubernetes.io_cluster.` tag in the vnet resource. This tag is set automatically by the installer. Commonly, this will be the same vnet as the cluster. Advanced cluster network configurations should ensure the provided vnetName is the vnet of the nodes where the image registry pods are running from. It must be between 2 and 64 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). It must start with an alphanumeric character and end with an alphanumeric character or an underscore.
+| vnetName is the name of the vnet the registry operates in. When omitted,
+the registry operator will discover and set this by using the `kubernetes.io_cluster.`
+tag in the vnet resource. This tag is set automatically by the installer.
+Commonly, this will be the same vnet as the cluster.
+Advanced cluster network configurations should ensure the provided vnetName
+is the vnet of the nodes where the image registry pods are running from.
+It must be between 2 and 64 characters in length and must consist only of
+alphanumeric characters, hyphens (-), periods (.) and underscores (_).
+It must start with an alphanumeric character and end with an alphanumeric character or an underscore.
|===

=== .status.storage.emptyDir
Description::
+
--
-emptyDir represents ephemeral storage on the pod's host node. WARNING: this storage cannot be used with more than 1 replica and is not suitable for production use. When the pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
=== .status.storage.emptyDir Description:: + -- -emptyDir represents ephemeral storage on the pod's host node. WARNING: this storage cannot be used with more than 1 replica and is not suitable for production use. When the pod is removed from a node for any reason, the data in the emptyDir is deleted forever. +emptyDir represents ephemeral storage on the pod's host node. + +WARNING: this storage cannot be used with more than 1 replica and +is not suitable for production use. When the pod is removed from a +node for any reason, the data in the emptyDir is deleted forever. -- Type:: @@ -2841,19 +3525,25 @@ Type:: | `bucket` | `string` -| bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. +| bucket is the bucket name in which you want to store the registry's +data. +Optional, will be generated if not provided. | `keyID` | `string` -| keyID is the KMS key ID to use for encryption. Optional, buckets are encrypted by default on GCP. This allows for the use of a custom encryption key. +| keyID is the KMS key ID to use for encryption. +Optional, buckets are encrypted by default on GCP. +This allows for the use of a custom encryption key. | `projectID` | `string` -| projectID is the Project ID of the GCP project that this bucket should be associated with. +| projectID is the Project ID of the GCP project that this bucket should +be associated with. | `region` | `string` -| region is the GCS location in which your bucket exists. Optional, will be set based on the installed GCS Region. +| region is the GCS location in which your bucket exists. +Optional, will be set based on the installed GCS Region. |=== === .status.storage.ibmcos @@ -2875,23 +3565,33 @@ Type:: | `bucket` | `string` -| bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. +| bucket is the bucket name in which you want to store the registry's +data. +Optional, will be generated if not provided. | `location` | `string` -| location is the IBM Cloud location in which your bucket exists. Optional, will be set based on the installed IBM Cloud location. +| location is the IBM Cloud location in which your bucket exists. +Optional, will be set based on the installed IBM Cloud location. | `resourceGroupName` | `string` -| resourceGroupName is the name of the IBM Cloud resource group that this bucket and its service instance is associated with. Optional, will be set based on the installed IBM Cloud resource group. +| resourceGroupName is the name of the IBM Cloud resource group that this +bucket and its service instance is associated with. +Optional, will be set based on the installed IBM Cloud resource group. | `resourceKeyCRN` | `string` -| resourceKeyCRN is the CRN of the IBM Cloud resource key that is created for the service instance. Commonly referred as a service credential and must contain HMAC type credentials. Optional, will be computed if not provided. +| resourceKeyCRN is the CRN of the IBM Cloud resource key that is created +for the service instance. Commonly referred to as a service credential and +must contain HMAC type credentials. +Optional, will be computed if not provided. | `serviceInstanceCRN` | `string` -| serviceInstanceCRN is the CRN of the IBM Cloud Object Storage service instance that this bucket is associated with. Optional, will be computed if not provided. +| serviceInstanceCRN is the CRN of the IBM Cloud Object Storage service +instance that this bucket is associated with. +Optional, will be computed if not provided. |===
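As a sketch of how the `ibmcos` fields above fit together (all names here are hypothetical; every field is optional and is discovered or computed when omitted):

[source,yaml]
----
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  storage:
    ibmcos:
      bucket: example-registry-bucket # generated if not provided
      location: us-south # defaults to the installed IBM Cloud location
      resourceGroupName: example-resource-group
      # resourceKeyCRN and serviceInstanceCRN are computed when omitted
----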
=== .status.storage.oss @@ -2913,26 +3613,36 @@ Type:: | `bucket` | `string` -| Bucket is the bucket name in which you want to store the registry's data. About Bucket naming, more details you can look at the [official documentation](https://www.alibabacloud.com/help/doc-detail/257087.htm) Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default will be autogenerated in the form of -image-registry-- +| Bucket is the bucket name in which you want to store the registry's data. +For more details about bucket naming, see the [official documentation](https://www.alibabacloud.com/help/doc-detail/257087.htm). +Empty value means no opinion and the platform chooses a default, which is subject to change over time. +Currently the default will be autogenerated in the form of -image-registry-- | `encryption` | `object` -| Encryption specifies whether you would like your data encrypted on the server side. More details, you can look cat the [official documentation](https://www.alibabacloud.com/help/doc-detail/117914.htm) +| Encryption specifies whether you would like your data encrypted on the server side. +For more details, see the [official documentation](https://www.alibabacloud.com/help/doc-detail/117914.htm). | `endpointAccessibility` | `string` -| EndpointAccessibility specifies whether the registry use the OSS VPC internal endpoint Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default is `Internal`. +| EndpointAccessibility specifies whether the registry uses the OSS VPC internal endpoint. +Empty value means no opinion and the platform chooses a default, which is subject to change over time. +Currently the default is `Internal`. | `region` | `string` -| Region is the Alibaba Cloud Region in which your bucket exists. For a list of regions, you can look at the [official documentation](https://www.alibabacloud.com/help/doc-detail/31837.html). Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default will be based on the installed Alibaba Cloud Region. +| Region is the Alibaba Cloud Region in which your bucket exists. +For a list of regions, see the [official documentation](https://www.alibabacloud.com/help/doc-detail/31837.html). +Empty value means no opinion and the platform chooses a default, which is subject to change over time. +Currently the default will be based on the installed Alibaba Cloud Region. |=== === .status.storage.oss.encryption Description:: + -- -Encryption specifies whether you would like your data encrypted on the server side. More details, you can look cat the [official documentation](https://www.alibabacloud.com/help/doc-detail/117914.htm) +Encryption specifies whether you would like your data encrypted on the server side. +For more details, see the [official documentation](https://www.alibabacloud.com/help/doc-detail/117914.htm). -- Type:: @@ -2951,7 +3661,9 @@ Type:: | `method` | `string` -| Method defines the different encrytion modes available Empty value means no opinion and the platform chooses the a default, which is subject to change over time. Currently the default is `AES256`. +| Method defines the different encryption modes available. +Empty value means no opinion and the platform chooses a default, which is subject to change over time. +Currently the default is `AES256`. |===
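A hedged sketch of an Alibaba Cloud OSS configuration using the fields above; the bucket name is a placeholder and the other values shown are the documented defaults:

[source,yaml]
----
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  storage:
    oss:
      bucket: example-registry-bucket # autogenerated when omitted
      region: oss-us-west-1 # defaults to the installed Alibaba Cloud region
      endpointAccessibility: Internal # the documented default; uses the OSS VPC internal endpoint
      encryption:
        method: AES256 # the documented default server-side encryption mode
----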
=== .status.storage.oss.encryption.kms @@ -3019,47 +3731,69 @@ Type:: | `bucket` | `string` -| bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. +| bucket is the bucket name in which you want to store the registry's +data. +Optional, will be generated if not provided. | `chunkSizeMiB` | `integer` -| chunkSizeMiB defines the size of the multipart upload chunks of the S3 API. The S3 API requires multipart upload chunks to be at least 5MiB. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default value is 10 MiB. The value is an integer number of MiB. The minimum value is 5 and the maximum value is 5120 (5 GiB). +| chunkSizeMiB defines the size of the multipart upload chunks of the S3 API. +The S3 API requires multipart upload chunks to be at least 5MiB. +When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. +The current default value is 10 MiB. +The value is an integer number of MiB. +The minimum value is 5 and the maximum value is 5120 (5 GiB). | `cloudFront` | `object` -| cloudFront configures Amazon Cloudfront as the storage middleware in a registry. +| cloudFront configures Amazon CloudFront as the storage middleware in a +registry. | `encrypt` | `boolean` -| encrypt specifies whether the registry stores the image in encrypted format or not. Optional, defaults to false. +| encrypt specifies whether the registry stores the image in encrypted +format or not. +Optional, defaults to false. | `keyID` | `string` -| keyID is the KMS key ID to use for encryption. Optional, Encrypt must be true, or this parameter is ignored. +| keyID is the KMS key ID to use for encryption. +Optional, Encrypt must be true, or this parameter is ignored. | `region` | `string` -| region is the AWS region in which your bucket exists. Optional, will be set based on the installed AWS Region. +| region is the AWS region in which your bucket exists. +Optional, will be set based on the installed AWS Region. | `regionEndpoint` | `string` -| regionEndpoint is the endpoint for S3 compatible storage services. It should be a valid URL with scheme, e.g. https://s3.example.com. Optional, defaults based on the Region that is provided. +| regionEndpoint is the endpoint for S3 compatible storage services. +It should be a valid URL with scheme, e.g. https://s3.example.com. +Optional, defaults based on the Region that is provided. | `trustedCA` | `object` -| trustedCA is a reference to a config map containing a CA bundle. The image registry and its operator use certificates from this bundle to verify S3 server certificates. - The namespace for the config map referenced by trustedCA is "openshift-config". The key for the bundle in the config map is "ca-bundle.crt". +| trustedCA is a reference to a config map containing a CA bundle. The +image registry and its operator use certificates from this bundle to +verify S3 server certificates. + +The namespace for the config map referenced by trustedCA is +"openshift-config". The key for the bundle in the config map is +"ca-bundle.crt". | `virtualHostedStyle` | `boolean` -| virtualHostedStyle enables using S3 virtual hosted style bucket paths with a custom RegionEndpoint Optional, defaults to false. +| virtualHostedStyle enables using S3 virtual hosted style bucket paths with +a custom RegionEndpoint. +Optional, defaults to false. |===
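The `s3` fields above compose as in the following sketch (the bucket, key, and config map names are placeholders, not defaults):

[source,yaml]
----
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  storage:
    s3:
      bucket: example-registry-bucket # generated if not provided
      region: us-east-1
      encrypt: true
      keyID: example-kms-key-id # ignored unless encrypt is true
      chunkSizeMiB: 10 # the documented default; must be between 5 and 5120
      trustedCA:
        name: example-s3-ca-bundle # config map in openshift-config; bundle key is ca-bundle.crt
----

=== .status.storage.s3.cloudFront Description:: + -- -cloudFront configures Amazon Cloudfront as the storage middleware in a registry.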
+cloudFront configures Amazon Cloudfront as the storage middleware in a +registry. -- Type:: @@ -3118,7 +3852,11 @@ Required:: | `name` | `string` -| Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop `kubebuilder:default` when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896. +| Name of the referent. +This field is effectively required, but due to backwards compatibility is +allowed to be empty. Instances of this type with an empty value here are +almost certainly wrong. +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names | `optional` | `boolean` @@ -3129,8 +3867,13 @@ Required:: Description:: + -- -trustedCA is a reference to a config map containing a CA bundle. The image registry and its operator use certificates from this bundle to verify S3 server certificates. - The namespace for the config map referenced by trustedCA is "openshift-config". The key for the bundle in the config map is "ca-bundle.crt". +trustedCA is a reference to a config map containing a CA bundle. The +image registry and its operator use certificates from this bundle to +verify S3 server certificates. + +The namespace for the config map referenced by trustedCA is +"openshift-config". The key for the bundle in the config map is +"ca-bundle.crt". -- Type:: @@ -3145,7 +3888,12 @@ Type:: | `name` | `string` -| name is the metadata.name of the referenced config map. This field must adhere to standard config map naming restrictions. The name must consist solely of alphanumeric characters, hyphens (-) and periods (.). It has a maximum length of 253 characters. If this field is not specified or is empty string, the default trust bundle will be used. +| name is the metadata.name of the referenced config map. +This field must adhere to standard config map naming restrictions. +The name must consist solely of alphanumeric characters, hyphens (-) +and periods (.). It has a maximum length of 253 characters. +If this field is not specified or is empty string, the default trust +bundle will be used. |=== === .status.storage.swift @@ -3175,7 +3923,8 @@ Type:: | `container` | `string` -| container defines the name of Swift container where to store the registry's data. +| container defines the name of Swift container where to store the +registry's data. | `domain` | `string` diff --git a/rest_api/operator_apis/imagepruner-imageregistry-operator-openshift-io-v1.adoc b/rest_api/operator_apis/imagepruner-imageregistry-operator-openshift-io-v1.adoc index 76678011fd35..cb3bdf209f27 100644 --- a/rest_api/operator_apis/imagepruner-imageregistry-operator-openshift-io-v1.adoc +++ b/rest_api/operator_apis/imagepruner-imageregistry-operator-openshift-io-v1.adoc @@ -11,8 +11,10 @@ toc::[] Description:: + -- -ImagePruner is the configuration object for an image registry pruner managed by the registry operator. - Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). +ImagePruner is the configuration object for an image registry pruner +managed by the registry operator. + +Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
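To make the pruner fields that follow concrete, here is a hedged sketch of an `ImagePruner` resource; the values shown are the defaults documented below:

[source,yaml]
----
apiVersion: imageregistry.operator.openshift.io/v1
kind: ImagePruner
metadata:
  name: cluster
spec:
  schedule: "0 0 * * *" # standard cron syntax; the documented default
  suspend: false
  keepTagRevisions: 3
  keepYoungerThanDuration: 60m
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
----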
-- Type:: @@ -73,28 +75,35 @@ Type:: | `failedJobsHistoryLimit` | `integer` -| failedJobsHistoryLimit specifies how many failed image pruner jobs to retain. Defaults to 3 if not set. +| failedJobsHistoryLimit specifies how many failed image pruner jobs to retain. +Defaults to 3 if not set. | `ignoreInvalidImageReferences` | `boolean` -| ignoreInvalidImageReferences indicates whether the pruner can ignore errors while parsing image references. +| ignoreInvalidImageReferences indicates whether the pruner can ignore +errors while parsing image references. | `keepTagRevisions` | `integer` -| keepTagRevisions specifies the number of image revisions for a tag in an image stream that will be preserved. Defaults to 3. +| keepTagRevisions specifies the number of image revisions for a tag in an image stream that will be preserved. +Defaults to 3. | `keepYoungerThan` | `integer` -| keepYoungerThan specifies the minimum age in nanoseconds of an image and its referrers for it to be considered a candidate for pruning. DEPRECATED: This field is deprecated in favor of keepYoungerThanDuration. If both are set, this field is ignored and keepYoungerThanDuration takes precedence. +| keepYoungerThan specifies the minimum age in nanoseconds of an image and its referrers for it to be considered a candidate for pruning. +DEPRECATED: This field is deprecated in favor of keepYoungerThanDuration. If both are set, this field is ignored and keepYoungerThanDuration takes precedence. | `keepYoungerThanDuration` | `string` -| keepYoungerThanDuration specifies the minimum age of an image and its referrers for it to be considered a candidate for pruning. Defaults to 60m (60 minutes). +| keepYoungerThanDuration specifies the minimum age of an image and its referrers for it to be considered a candidate for pruning. +Defaults to 60m (60 minutes). | `logLevel` | `string` -| logLevel sets the level of log output for the pruner job. - Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". +| logLevel sets the level of log output for the pruner job. + +Valid values are: "Normal", "Debug", "Trace", "TraceAll". +Defaults to "Normal". | `nodeSelector` | `object (string)` | | `resources` | `object` | | `schedule` | `string` -| schedule specifies when to execute the job using standard cronjob syntax: https://wikipedia.org/wiki/Cron. Defaults to `0 0 * * *`. +| schedule specifies when to execute the job using standard cronjob syntax: https://wikipedia.org/wiki/Cron. +Defaults to `0 0 * * *`. | `successfulJobsHistoryLimit` | `integer` -| successfulJobsHistoryLimit specifies how many successful image pruner jobs to retain. Defaults to 3 if not set. +| successfulJobsHistoryLimit specifies how many successful image pruner jobs to retain. +Defaults to 3 if not set. | `suspend` | `boolean` -| suspend specifies whether or not to suspend subsequent executions of this cronjob. Defaults to false. +| suspend specifies whether or not to suspend subsequent executions of this cronjob. +Defaults to false. | `tolerations` | `array` | | `tolerations[]` | `object` -| The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator . +| The pod this Toleration is attached to tolerates any taint that matches +the triple <key,value,effect> using the matching operator <operator>.
|=== === .spec.affinity @@ -174,22 +187,43 @@ Type:: | `preferredDuringSchedulingIgnoredDuringExecution` | `array` -| The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. +| The scheduler will prefer to schedule pods to nodes that satisfy +the affinity expressions specified by this field, but it may choose +a node that violates one or more of the expressions. The node that is +most preferred is the one with the greatest sum of weights, i.e. +for each node that meets all of the scheduling requirements (resource +request, requiredDuringScheduling affinity expressions, etc.), +compute a sum by iterating through the elements of this field and adding +"weight" to the sum if the node matches the corresponding matchExpressions; the +node(s) with the highest sum are the most preferred. | `preferredDuringSchedulingIgnoredDuringExecution[]` | `object` -| An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). +| An empty preferred scheduling term matches all objects with implicit weight 0 +(i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). | `requiredDuringSchedulingIgnoredDuringExecution` | `object` -| If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. +| If the affinity requirements specified by this field are not met at +scheduling time, the pod will not be scheduled onto the node. +If the affinity requirements specified by this field cease to be met +at some point during pod execution (e.g. due to an update), the system +may or may not try to eventually evict the pod from its node. |=== === .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description:: + -- -The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. +The scheduler will prefer to schedule pods to nodes that satisfy +the affinity expressions specified by this field, but it may choose +a node that violates one or more of the expressions. The node that is +most preferred is the one with the greatest sum of weights, i.e. 
+for each node that meets all of the scheduling requirements (resource +request, requiredDuringScheduling affinity expressions, etc.), +compute a sum by iterating through the elements of this field and adding +"weight" to the sum if the node matches the corresponding matchExpressions; the +node(s) with the highest sum are the most preferred. -- Type:: @@ -202,7 +236,8 @@ Type:: Description:: + -- -An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). +An empty preferred scheduling term matches all objects with implicit weight 0 +(i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). -- Type:: @@ -250,7 +285,8 @@ Type:: | `matchExpressions[]` | `object` -| A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A node selector requirement is a selector that contains values, a key, and an operator +that relates the key and values. | `matchFields` | `array` | `matchFields[]` | `object` -| A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A node selector requirement is a selector that contains values, a key, and an operator +that relates the key and values. |=== === .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions @@ -278,7 +315,8 @@ Type:: Description:: + -- -A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A node selector requirement is a selector that contains values, a key, and an operator +that relates the key and values. -- Type:: @@ -300,11 +338,16 @@ Required:: | `operator` | `string` -| Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. +| Represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. | `values` | `array (string)` -| An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. +| An array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. If the operator is Gt or Lt, the values +array must have a single element, which will be interpreted as an integer. +This array is replaced during a strategic merge patch. |=== === .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields @@ -324,7 +367,8 @@ Type:: Description:: + -- -A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A node selector requirement is a selector that contains values, a key, and an operator +that relates the key and values. -- Type:: @@ -346,18 +390,27 @@ Required:: | `operator` | `string` -| Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. +| Represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
| `values` | `array (string)` -| An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. +| An array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. If the operator is Gt or Lt, the values +array must have a single element, which will be interpreted as an integer. +This array is replaced during a strategic merge patch. |=== === .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description:: + -- -If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. +If the affinity requirements specified by this field are not met at +scheduling time, the pod will not be scheduled onto the node. +If the affinity requirements specified by this field cease to be met +at some point during pod execution (e.g. due to an update), the system +may or may not try to eventually evict the pod from its node. -- Type:: @@ -378,7 +431,9 @@ Required:: | `nodeSelectorTerms[]` | `object` -| A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. +| A null or empty node selector term matches no objects. The requirements of +them are ANDed. +The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. |=== === .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms @@ -398,7 +453,9 @@ Type:: Description:: + -- -A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. +A null or empty node selector term matches no objects. The requirements of +them are ANDed. +The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. -- Type:: @@ -417,7 +474,8 @@ Type:: | `matchExpressions[]` | `object` -| A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A node selector requirement is a selector that contains values, a key, and an operator +that relates the key and values. | `matchFields` | `array` @@ -425,7 +483,8 @@ Type:: | `matchFields[]` | `object` -| A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A node selector requirement is a selector that contains values, a key, and an operator +that relates the key and values. |=== === .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions @@ -445,7 +504,8 @@ Type:: Description:: + -- -A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A node selector requirement is a selector that contains values, a key, and an operator +that relates the key and values. 
-- Type:: @@ -467,11 +527,16 @@ Required:: | `operator` | `string` -| Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. +| Represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. | `values` | `array (string)` -| An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. +| An array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. If the operator is Gt or Lt, the values +array must have a single element, which will be interpreted as an integer. +This array is replaced during a strategic merge patch. |=== === .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields @@ -491,7 +556,8 @@ Type:: Description:: + -- -A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A node selector requirement is a selector that contains values, a key, and an operator +that relates the key and values. -- Type:: @@ -513,11 +579,16 @@ Required:: | `operator` | `string` -| Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. +| Represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. | `values` | `array (string)` -| An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. +| An array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. If the operator is Gt or Lt, the values +array must have a single element, which will be interpreted as an integer. +This array is replaced during a strategic merge patch. |===
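As a concrete illustration of the node affinity fields above, the following hedged sketch schedules the pruner job only on amd64 nodes and prefers nodes carrying a hypothetical label:

[source,yaml]
----
apiVersion: imageregistry.operator.openshift.io/v1
kind: ImagePruner
metadata:
  name: cluster
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions: # requirements within a term are ANDed
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50 # 1-100; summed per node across matching terms
        preference:
          matchExpressions:
          - key: example.com/prune-here # hypothetical label
            operator: Exists
----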
=== .spec.affinity.podAffinity @@ -539,7 +610,15 @@ Type:: | `preferredDuringSchedulingIgnoredDuringExecution` | `array` -| The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. +| The scheduler will prefer to schedule pods to nodes that satisfy +the affinity expressions specified by this field, but it may choose +a node that violates one or more of the expressions. The node that is +most preferred is the one with the greatest sum of weights, i.e. +for each node that meets all of the scheduling requirements (resource +request, requiredDuringScheduling affinity expressions, etc.), +compute a sum by iterating through the elements of this field and adding +"weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the +node(s) with the highest sum are the most preferred. | `preferredDuringSchedulingIgnoredDuringExecution[]` | `object` | | `requiredDuringSchedulingIgnoredDuringExecution` | `array` -| If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +| If the affinity requirements specified by this field are not met at +scheduling time, the pod will not be scheduled onto the node. +If the affinity requirements specified by this field cease to be met +at some point during pod execution (e.g. due to a pod label update), the +system may or may not try to eventually evict the pod from its node. +When there are multiple elements, the lists of nodes corresponding to each +podAffinityTerm are intersected, i.e. all terms must be satisfied. | `requiredDuringSchedulingIgnoredDuringExecution[]` | `object` -| Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running +| Defines a set of pods (namely those matching the labelSelector +relative to the given namespace(s)) that this pod should be +co-located (affinity) or not co-located (anti-affinity) with, +where co-located is defined as running on a node whose value of +the label with key <topologyKey> matches that of any node on which +a pod of the set of pods is running |=== === .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description:: + -- -The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e.
+for each node that meets all of the scheduling requirements (resource +request, requiredDuringScheduling affinity expressions, etc.), +compute a sum by iterating through the elements of this field and adding +"weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the +node(s) with the highest sum are the most preferred. -- Type:: @@ -593,7 +691,8 @@ Required:: | `weight` | `integer` -| weight associated with matching the corresponding podAffinityTerm, in the range 1-100. +| weight associated with matching the corresponding podAffinityTerm, +in the range 1-100. |=== === .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm @@ -617,34 +716,63 @@ Required:: | `labelSelector` | `object` -| A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. +| A label query over a set of resources, in this case pods. +If it's null, this PodAffinityTerm matches with no Pods. | `matchLabelKeys` | `array (string)` -| MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. +| MatchLabelKeys is a set of pod label keys to select which pods will +be taken into consideration. The keys are used to lookup values from the +incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` +to select the group of existing pods which pods will be taken into consideration +for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming +pod labels will be ignored. The default value is empty. +The same key is forbidden to exist in both matchLabelKeys and labelSelector. +Also, matchLabelKeys cannot be set when labelSelector isn't set. +This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). | `mismatchLabelKeys` | `array (string)` -| MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. +| MismatchLabelKeys is a set of pod label keys to select which pods will +be taken into consideration. 
The keys are used to lookup values from the +incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` +to select the group of existing pods which pods will be taken into consideration +for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming +pod labels will be ignored. The default value is empty. +The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. +Also, mismatchLabelKeys cannot be set when labelSelector isn't set. +This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). | `namespaceSelector` | `object` -| A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. +| A label query over the set of namespaces that the term applies to. +The term is applied to the union of the namespaces selected by this field +and the ones listed in the namespaces field. +null selector and null or empty namespaces list means "this pod's namespace". +An empty selector ({}) matches all namespaces. | `namespaces` | `array (string)` -| namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". +| namespaces specifies a static list of namespace names that the term applies to. +The term is applied to the union of the namespaces listed in this field +and the ones selected by namespaceSelector. +null or empty namespaces list and null namespaceSelector means "this pod's namespace". | `topologyKey` | `string` -| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. +| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching +the labelSelector in the specified namespaces, where co-located is defined as running on a node +whose value of the label with key topologyKey matches that of any node on which any of the +selected pods is running. +Empty topologyKey is not allowed. |=== === .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description:: + -- -A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. +A label query over a set of resources, in this case pods. +If it's null, this PodAffinityTerm matches with no Pods. -- Type:: @@ -663,11 +791,14 @@ Type:: | `matchExpressions[]` | `object` -| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. | `matchLabels` | `object (string)` -| matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. +| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed. |=== === .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions @@ -687,7 +818,8 @@ Type:: Description:: + -- -A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. -- Type:: @@ -709,18 +841,26 @@ Required:: | `operator` | `string` -| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. +| operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. | `values` | `array (string)` -| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +| values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch. |=== === .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description:: + -- -A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. +A label query over the set of namespaces that the term applies to. +The term is applied to the union of the namespaces selected by this field +and the ones listed in the namespaces field. +null selector and null or empty namespaces list means "this pod's namespace". +An empty selector ({}) matches all namespaces. -- Type:: @@ -739,11 +879,14 @@ Type:: | `matchExpressions[]` | `object` -| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. | `matchLabels` | `object (string)` -| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. +| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed. 
|=== === .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions @@ -763,7 +906,8 @@ Type:: Description:: + -- -A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. -- Type:: @@ -785,18 +929,28 @@ Required:: | `operator` | `string` -| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. +| operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. | `values` | `array (string)` -| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +| values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch. |=== === .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description:: + -- -If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +If the affinity requirements specified by this field are not met at +scheduling time, the pod will not be scheduled onto the node. +If the affinity requirements specified by this field cease to be met +at some point during pod execution (e.g. due to a pod label update), the +system may or may not try to eventually evict the pod from its node. +When there are multiple elements, the lists of nodes corresponding to each +podAffinityTerm are intersected, i.e. all terms must be satisfied. -- Type:: @@ -809,7 +963,12 @@ Type:: Description:: + -- -Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running +Defines a set of pods (namely those matching the labelSelector +relative to the given namespace(s)) that this pod should be +co-located (affinity) or not co-located (anti-affinity) with, +where co-located is defined as running on a node whose value of +the label with key matches that of any node on which +a pod of the set of pods is running -- Type:: @@ -826,34 +985,63 @@ Required:: | `labelSelector` | `object` -| A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. +| A label query over a set of resources, in this case pods. +If it's null, this PodAffinityTerm matches with no Pods. | `matchLabelKeys` | `array (string)` -| MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. 
The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. +| MatchLabelKeys is a set of pod label keys to select which pods will +be taken into consideration. The keys are used to lookup values from the +incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` +to select the group of existing pods which pods will be taken into consideration +for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming +pod labels will be ignored. The default value is empty. +The same key is forbidden to exist in both matchLabelKeys and labelSelector. +Also, matchLabelKeys cannot be set when labelSelector isn't set. +This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). | `mismatchLabelKeys` | `array (string)` -| MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. +| MismatchLabelKeys is a set of pod label keys to select which pods will +be taken into consideration. The keys are used to lookup values from the +incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` +to select the group of existing pods which pods will be taken into consideration +for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming +pod labels will be ignored. The default value is empty. +The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. +Also, mismatchLabelKeys cannot be set when labelSelector isn't set. +This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). | `namespaceSelector` | `object` -| A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. +| A label query over the set of namespaces that the term applies to. +The term is applied to the union of the namespaces selected by this field +and the ones listed in the namespaces field. +null selector and null or empty namespaces list means "this pod's namespace". +An empty selector ({}) matches all namespaces. | `namespaces` | `array (string)` -| namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". +| namespaces specifies a static list of namespace names that the term applies to. +The term is applied to the union of the namespaces listed in this field +and the ones selected by namespaceSelector. +null or empty namespaces list and null namespaceSelector means "this pod's namespace". | `topologyKey` | `string` -| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. +| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching +the labelSelector in the specified namespaces, where co-located is defined as running on a node +whose value of the label with key topologyKey matches that of any node on which any of the +selected pods is running. +Empty topologyKey is not allowed. |=== === .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description:: + -- -A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. +A label query over a set of resources, in this case pods. +If it's null, this PodAffinityTerm matches with no Pods. -- Type:: @@ -872,11 +1060,14 @@ Type:: | `matchExpressions[]` | `object` -| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. | `matchLabels` | `object (string)` -| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. +| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed. |=== === .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions @@ -896,7 +1087,8 @@ Type:: Description:: + -- -A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. -- Type:: @@ -918,18 +1110,26 @@ Required:: | `operator` | `string` -| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. +| operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. | `values` | `array (string)` -| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +| values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. 
If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch. |=== === .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description:: + -- -A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. +A label query over the set of namespaces that the term applies to. +The term is applied to the union of the namespaces selected by this field +and the ones listed in the namespaces field. +null selector and null or empty namespaces list means "this pod's namespace". +An empty selector ({}) matches all namespaces. -- Type:: @@ -948,11 +1148,14 @@ Type:: | `matchExpressions[]` | `object` -| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. | `matchLabels` | `object (string)` -| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. +| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed. |=== === .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions @@ -972,7 +1175,8 @@ Type:: Description:: + -- -A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. -- Type:: @@ -994,11 +1198,15 @@ Required:: | `operator` | `string` -| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. +| operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. | `values` | `array (string)` -| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +| values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch. |=== === .spec.affinity.podAntiAffinity @@ -1020,7 +1228,15 @@ Type:: | `preferredDuringSchedulingIgnoredDuringExecution` | `array` -| The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. +| The scheduler will prefer to schedule pods to nodes that satisfy +the anti-affinity expressions specified by this field, but it may choose +a node that violates one or more of the expressions. The node that is +most preferred is the one with the greatest sum of weights, i.e. +for each node that meets all of the scheduling requirements (resource +request, requiredDuringScheduling anti-affinity expressions, etc.), +compute a sum by iterating through the elements of this field and adding +"weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the +node(s) with the highest sum are the most preferred. | `preferredDuringSchedulingIgnoredDuringExecution[]` | `object` @@ -1028,18 +1244,37 @@ Type:: | `requiredDuringSchedulingIgnoredDuringExecution` | `array` -| If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +| If the anti-affinity requirements specified by this field are not met at +scheduling time, the pod will not be scheduled onto the node. +If the anti-affinity requirements specified by this field cease to be met +at some point during pod execution (e.g. due to a pod label update), the +system may or may not try to eventually evict the pod from its node. +When there are multiple elements, the lists of nodes corresponding to each +podAffinityTerm are intersected, i.e. all terms must be satisfied. | `requiredDuringSchedulingIgnoredDuringExecution[]` | `object` -| Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running +| Defines a set of pods (namely those matching the labelSelector +relative to the given namespace(s)) that this pod should be +co-located (affinity) or not co-located (anti-affinity) with, +where co-located is defined as running on a node whose value of +the label with key matches that of any node on which +a pod of the set of pods is running |=== === .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description:: + -- -The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. +The scheduler will prefer to schedule pods to nodes that satisfy +the anti-affinity expressions specified by this field, but it may choose +a node that violates one or more of the expressions. The node that is +most preferred is the one with the greatest sum of weights, i.e. +for each node that meets all of the scheduling requirements (resource +request, requiredDuringScheduling anti-affinity expressions, etc.), +compute a sum by iterating through the elements of this field and adding +"weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the +node(s) with the highest sum are the most preferred. -- Type:: @@ -1074,7 +1309,8 @@ Required:: | `weight` | `integer` -| weight associated with matching the corresponding podAffinityTerm, in the range 1-100. +| weight associated with matching the corresponding podAffinityTerm, +in the range 1-100. |=== === .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm @@ -1098,34 +1334,63 @@ Required:: | `labelSelector` | `object` -| A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. +| A label query over a set of resources, in this case pods. +If it's null, this PodAffinityTerm matches with no Pods. | `matchLabelKeys` | `array (string)` -| MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. +| MatchLabelKeys is a set of pod label keys to select which pods will +be taken into consideration. The keys are used to lookup values from the +incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` +to select the group of existing pods which pods will be taken into consideration +for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming +pod labels will be ignored. The default value is empty. +The same key is forbidden to exist in both matchLabelKeys and labelSelector. +Also, matchLabelKeys cannot be set when labelSelector isn't set. +This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). | `mismatchLabelKeys` | `array (string)` -| MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. 
Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. +| MismatchLabelKeys is a set of pod label keys to select which pods will +be taken into consideration. The keys are used to lookup values from the +incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` +to select the group of existing pods which pods will be taken into consideration +for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming +pod labels will be ignored. The default value is empty. +The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. +Also, mismatchLabelKeys cannot be set when labelSelector isn't set. +This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). | `namespaceSelector` | `object` -| A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. +| A label query over the set of namespaces that the term applies to. +The term is applied to the union of the namespaces selected by this field +and the ones listed in the namespaces field. +null selector and null or empty namespaces list means "this pod's namespace". +An empty selector ({}) matches all namespaces. | `namespaces` | `array (string)` -| namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". +| namespaces specifies a static list of namespace names that the term applies to. +The term is applied to the union of the namespaces listed in this field +and the ones selected by namespaceSelector. +null or empty namespaces list and null namespaceSelector means "this pod's namespace". | `topologyKey` | `string` -| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. +| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching +the labelSelector in the specified namespaces, where co-located is defined as running on a node +whose value of the label with key topologyKey matches that of any node on which any of the +selected pods is running. +Empty topologyKey is not allowed. |=== === .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description:: + -- -A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. +A label query over a set of resources, in this case pods. +If it's null, this PodAffinityTerm matches with no Pods. 
-- Type:: @@ -1144,11 +1409,14 @@ Type:: | `matchExpressions[]` | `object` -| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. | `matchLabels` | `object (string)` -| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. +| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed. |=== === .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions @@ -1168,7 +1436,8 @@ Type:: Description:: + -- -A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. -- Type:: @@ -1190,18 +1459,26 @@ Required:: | `operator` | `string` -| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. +| operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. | `values` | `array (string)` -| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +| values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch. |=== === .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description:: + -- -A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. +A label query over the set of namespaces that the term applies to. +The term is applied to the union of the namespaces selected by this field +and the ones listed in the namespaces field. +null selector and null or empty namespaces list means "this pod's namespace". +An empty selector ({}) matches all namespaces. -- Type:: @@ -1220,11 +1497,14 @@ Type:: | `matchExpressions[]` | `object` -| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. | `matchLabels` | `object (string)` -| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 
+| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed. |=== === .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions @@ -1244,7 +1524,8 @@ Type:: Description:: + -- -A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. -- Type:: @@ -1266,18 +1547,28 @@ Required:: | `operator` | `string` -| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. +| operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. | `values` | `array (string)` -| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +| values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch. |=== === .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description:: + -- -If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +If the anti-affinity requirements specified by this field are not met at +scheduling time, the pod will not be scheduled onto the node. +If the anti-affinity requirements specified by this field cease to be met +at some point during pod execution (e.g. due to a pod label update), the +system may or may not try to eventually evict the pod from its node. +When there are multiple elements, the lists of nodes corresponding to each +podAffinityTerm are intersected, i.e. all terms must be satisfied. -- Type:: @@ -1290,7 +1581,12 @@ Type:: Description:: + -- -Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running +Defines a set of pods (namely those matching the labelSelector +relative to the given namespace(s)) that this pod should be +co-located (affinity) or not co-located (anti-affinity) with, +where co-located is defined as running on a node whose value of +the label with key matches that of any node on which +a pod of the set of pods is running -- Type:: @@ -1307,34 +1603,63 @@ Required:: | `labelSelector` | `object` -| A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. 
+| A label query over a set of resources, in this case pods. +If it's null, this PodAffinityTerm matches with no Pods. | `matchLabelKeys` | `array (string)` -| MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. +| MatchLabelKeys is a set of pod label keys to select which pods will +be taken into consideration. The keys are used to lookup values from the +incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` +to select the group of existing pods which pods will be taken into consideration +for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming +pod labels will be ignored. The default value is empty. +The same key is forbidden to exist in both matchLabelKeys and labelSelector. +Also, matchLabelKeys cannot be set when labelSelector isn't set. +This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). | `mismatchLabelKeys` | `array (string)` -| MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. +| MismatchLabelKeys is a set of pod label keys to select which pods will +be taken into consideration. The keys are used to lookup values from the +incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` +to select the group of existing pods which pods will be taken into consideration +for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming +pod labels will be ignored. The default value is empty. +The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. +Also, mismatchLabelKeys cannot be set when labelSelector isn't set. +This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). | `namespaceSelector` | `object` -| A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. +| A label query over the set of namespaces that the term applies to. 
+The term is applied to the union of the namespaces selected by this field +and the ones listed in the namespaces field. +null selector and null or empty namespaces list means "this pod's namespace". +An empty selector ({}) matches all namespaces. | `namespaces` | `array (string)` -| namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". +| namespaces specifies a static list of namespace names that the term applies to. +The term is applied to the union of the namespaces listed in this field +and the ones selected by namespaceSelector. +null or empty namespaces list and null namespaceSelector means "this pod's namespace". | `topologyKey` | `string` -| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. +| This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching +the labelSelector in the specified namespaces, where co-located is defined as running on a node +whose value of the label with key topologyKey matches that of any node on which any of the +selected pods is running. +Empty topologyKey is not allowed. |=== === .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description:: + -- -A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. +A label query over a set of resources, in this case pods. +If it's null, this PodAffinityTerm matches with no Pods. -- Type:: @@ -1353,11 +1678,14 @@ Type:: | `matchExpressions[]` | `object` -| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. | `matchLabels` | `object (string)` -| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. +| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed. |=== === .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions @@ -1377,7 +1705,8 @@ Type:: Description:: + -- -A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. -- Type:: @@ -1399,18 +1728,26 @@ Required:: | `operator` | `string` -| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. +| operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. 
| `values` | `array (string)` -| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +| values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch. |=== === .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description:: + -- -A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. +A label query over the set of namespaces that the term applies to. +The term is applied to the union of the namespaces selected by this field +and the ones listed in the namespaces field. +null selector and null or empty namespaces list means "this pod's namespace". +An empty selector ({}) matches all namespaces. -- Type:: @@ -1429,11 +1766,14 @@ Type:: | `matchExpressions[]` | `object` -| A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +| A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. | `matchLabels` | `object (string)` -| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. +| matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels +map is equivalent to an element of matchExpressions, whose key field is "key", the +operator is "In", and the values array contains only "value". The requirements are ANDed. |=== === .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions @@ -1453,7 +1793,8 @@ Type:: Description:: + -- -A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. +A label selector requirement is a selector that contains values, a key, and an operator that +relates the key and values. -- Type:: @@ -1475,11 +1816,15 @@ Required:: | `operator` | `string` -| operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. +| operator represents a key's relationship to a set of values. +Valid operators are In, NotIn, Exists and DoesNotExist. | `values` | `array (string)` -| values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. +| values is an array of string values. If the operator is In or NotIn, +the values array must be non-empty. If the operator is Exists or DoesNotExist, +the values array must be empty. This array is replaced during a strategic +merge patch. 
|=== === .spec.resources @@ -1501,9 +1846,13 @@ Type:: | `claims` | `array` -| Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. +| Claims lists the names of resources, defined in spec.resourceClaims, +that are used by this container. + +This is an alpha field and requires enabling the +DynamicResourceAllocation feature gate. + +This field is immutable. It can only be set for containers. | `claims[]` | `object` @@ -1511,20 +1860,28 @@ Type:: | `limits` | `integer-or-string` -| Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ +| Limits describes the maximum amount of compute resources allowed. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | `requests` | `integer-or-string` -| Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ +| Requests describes the minimum amount of compute resources required. +If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, +otherwise to an implementation-defined value. Requests cannot exceed Limits. +More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |=== === .spec.resources.claims Description:: + -- -Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. +Claims lists the names of resources, defined in spec.resourceClaims, +that are used by this container. + +This is an alpha field and requires enabling the +DynamicResourceAllocation feature gate. + +This field is immutable. It can only be set for containers. -- Type:: @@ -1554,7 +1911,15 @@ Required:: | `name` | `string` -| Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. +| Name must match the name of one entry in pod.spec.resourceClaims of +the Pod where this field is used. It makes that resource available +inside a container. + +| `request` +| `string` +| Request is the name chosen for a request in the referenced claim. +If empty, everything from the claim is made available, otherwise +only the result of this request. |=== === .spec.tolerations @@ -1574,7 +1939,8 @@ Type:: Description:: + -- -The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator . +The pod this Toleration is attached to tolerates any taint that matches +the triple using the matching operator . -- Type:: @@ -1589,23 +1955,32 @@ Type:: | `effect` | `string` -| Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. +| Effect indicates the taint effect to match. Empty means match all taint effects. +When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. 
| `key` | `string` -| Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. +| Key is the taint key that the toleration applies to. Empty means match all taint keys. +If the key is empty, operator must be Exists; this combination means to match all values and all keys. | `operator` | `string` -| Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. +| Operator represents a key's relationship to the value. +Valid operators are Exists and Equal. Defaults to Equal. +Exists is equivalent to wildcard for value, so that a pod can +tolerate all taints of a particular category. | `tolerationSeconds` | `integer` -| TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. +| TolerationSeconds represents the period of time the toleration (which must be +of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, +it is not set, which means tolerate the taint forever (do not evict). Zero and +negative values will be treated as 0 (evict immediately) by the system. | `value` | `string` -| Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. +| Value is the taint value the toleration matches to. +If the operator is Exists, the value should be empty, otherwise just a regular string. |=== === .status @@ -1662,6 +2037,8 @@ Type:: `object` Required:: + - `lastTransitionTime` + - `status` - `type` @@ -1672,7 +2049,8 @@ Required:: | `lastTransitionTime` | `string` -| +| lastTransitionTime is the last time the condition transitioned from one status to another. +This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. | `message` | `string` @@ -1684,11 +2062,11 @@ Required:: | `status` | `string` -| +| status of the condition, one of True, False, Unknown. | `type` | `string` -| +| type of condition in CamelCase or in foo.example.com/CamelCase. |=== diff --git a/rest_api/operator_apis/kubestorageversionmigrator-operator-openshift-io-v1.adoc b/rest_api/operator_apis/kubestorageversionmigrator-operator-openshift-io-v1.adoc index 3d404c15ed63..38f50a1b52e0 100644 --- a/rest_api/operator_apis/kubestorageversionmigrator-operator-openshift-io-v1.adoc +++ b/rest_api/operator_apis/kubestorageversionmigrator-operator-openshift-io-v1.adoc @@ -11,8 +11,9 @@ toc::[] Description:: + -- -KubeStorageVersionMigrator provides information to configure an operator to manage kube-storage-version-migrator. - Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). +KubeStorageVersionMigrator provides information to configure an operator to manage kube-storage-version-migrator. + +Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). -- Type:: @@ -68,8 +69,11 @@ Type:: | `logLevel` | `string` -| logLevel is an intent based logging for an overall component. 
It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. - Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". +| logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a +simple way to manage coarse grained logging choices that operators have to interpret for their operands. + +Valid values are: "Normal", "Debug", "Trace", "TraceAll". +Defaults to "Normal". | `managementState` | `string` @@ -77,16 +81,24 @@ Type:: | `observedConfig` | `` -| observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator +| observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because +it is an input to the level for the operator | `operatorLogLevel` | `string` -| operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. - Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". +| operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a +simple way to manage coarse grained logging choices that operators have to interpret for themselves. + +Valid values are: "Normal", "Debug", "Trace", "TraceAll". +Defaults to "Normal". | `unsupportedConfigOverrides` | `` -| unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. +| unsupportedConfigOverrides overrides the final configuration that was computed by the operator. +Red Hat does not support the use of this field. +Misuse of this field could lead to unexpected behavior or conflict with other configuration options. +Seek guidance from the Red Hat support before using this field. +Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. |=== === .status @@ -122,6 +134,10 @@ Type:: | `object` | GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. +| `latestAvailableRevision` +| `integer` +| latestAvailableRevision is the deploymentID of the most recent deployment + | `observedGeneration` | `integer` | observedGeneration is the last generation change you've dealt with @@ -159,6 +175,8 @@ Type:: `object` Required:: + - `lastTransitionTime` + - `status` - `type` @@ -169,7 +187,8 @@ Required:: | `lastTransitionTime` | `string` -| +| lastTransitionTime is the last time the condition transitioned from one status to another. +This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. | `message` | `string` @@ -181,11 +200,11 @@ Required:: | `status` | `string` -| +| status of the condition, one of True, False, Unknown. | `type` | `string` -| +| type of condition in CamelCase or in foo.example.com/CamelCase. 
|=== === .status.generations @@ -211,6 +230,11 @@ GenerationStatus keeps track of the generation for a given resource so that deci Type:: `object` +Required:: + - `group` + - `name` + - `namespace` + - `resource` diff --git a/rest_api/operator_apis/operator-apis-index.adoc b/rest_api/operator_apis/operator-apis-index.adoc index d0e33659784c..56fb6dc6b0ca 100644 --- a/rest_api/operator_apis/operator-apis-index.adoc +++ b/rest_api/operator_apis/operator-apis-index.adoc @@ -79,8 +79,10 @@ Type:: Description:: + -- -Config is the configuration object for a registry instance managed by the registry operator - Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). +Config is the configuration object for a registry instance managed by +the registry operator + +Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). -- Type:: @@ -177,8 +179,10 @@ Type:: Description:: + -- -ImagePruner is the configuration object for an image registry pruner managed by the registry operator. - Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). +ImagePruner is the configuration object for an image registry pruner +managed by the registry operator. + +Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). -- Type:: @@ -265,8 +269,9 @@ Type:: Description:: + -- -KubeStorageVersionMigrator provides information to configure an operator to manage kube-storage-version-migrator. - Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). +KubeStorageVersionMigrator provides information to configure an operator to manage kube-storage-version-migrator. + +Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
--

Type::

From 631d832e870e20576b30d13f3d503518ba4ad6df Mon Sep 17 00:00:00 2001
From: Lisa Pettyjohn
Date: Tue, 17 Dec 2024 10:20:47 -0500
Subject: [PATCH 247/669] OSDOCS-12890 and 12891#GCP PD support for C3 and N4
 instance types

---
 ...sistent-storage-csi-drivers-supported.adoc |   9 +-
 ...storage-csi-gcp-hyperdisk-limitations.adoc |  25 ++
 ...-gcp-hyperdisk-storage-pools-overview.adoc |   9 +
 ...gcp-hyperdisk-storage-pools-procedure.adoc | 249 ++++++++++++++++++
 .../persistent-storage-csi-gcp-pd.adoc        |  22 +-
 5 files changed, 311 insertions(+), 3 deletions(-)
 create mode 100644 modules/persistent-storage-csi-gcp-hyperdisk-limitations.adoc
 create mode 100644 modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-overview.adoc
 create mode 100644 modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure.adoc

diff --git a/modules/persistent-storage-csi-drivers-supported.adoc b/modules/persistent-storage-csi-drivers-supported.adoc
index 5cc5c20ad96f..f055d8383518 100644
--- a/modules/persistent-storage-csi-drivers-supported.adoc
+++ b/modules/persistent-storage-csi-drivers-supported.adoc
@@ -41,7 +41,7 @@ endif::openshift-rosa,openshift-aro[]
 |AWS EBS | ✅ | | ✅|
 |AWS EFS |  |  | |
 ifndef::openshift-rosa[]
-|Google Compute Platform (GCP) persistent disk (PD)| ✅| ✅ | ✅|
+|Google Cloud Platform (GCP) persistent disk (PD)| ✅| ✅^[5]^ | ✅|
 |GCP Filestore | ✅ | | ✅|
 endif::openshift-rosa[]
 ifndef::openshift-dedicated,openshift-rosa[]
@@ -85,6 +85,11 @@ ifndef::openshift-dedicated,openshift-rosa[]
 * Azure File cloning and snapshot are Technology Preview features:

 :FeatureName: Azure File CSI cloning and snapshot
-include::snippets/technology-preview.adoc[leveloffset=+1]
+include::snippets/technology-preview.adoc[leveloffset=+2]
+
+5.
+
+* Cloning is not supported on hyperdisk-balanced disks with storage pools.
+
 --
 endif::openshift-dedicated,openshift-rosa[]
\ No newline at end of file
diff --git a/modules/persistent-storage-csi-gcp-hyperdisk-limitations.adoc b/modules/persistent-storage-csi-gcp-hyperdisk-limitations.adoc
new file mode 100644
index 000000000000..269d9bcd3ad9
--- /dev/null
+++ b/modules/persistent-storage-csi-gcp-hyperdisk-limitations.adoc
@@ -0,0 +1,25 @@
+// Module included in the following assemblies:
+//
+// * storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="persistent-storage-csi-gcp-hyperdisk-limitations_{context}"]
+= C3 and N4 instance type limitations
+The GCP PD CSI driver support for the C3 instance type for bare metal and the N4 machine series has the following limitations:
+
+* Cloning volumes is not supported when using storage pools.
+
+* For cloning or resizing, the original volume size of hyperdisk-balanced disks must be 6Gi or greater.
+
+* The default storage class is standard-csi.
++
+[IMPORTANT]
+====
+You need to manually create a storage class.
+
+For information about creating the storage class, see step 2 in the _Setting up hyperdisk-balanced disks_ section.
+====
+
+* Clusters with mixed virtual machines (VMs) that use different storage types, for example, N2 and N4, are not supported. This is because hyperdisk-balanced disks are not usable on most legacy VMs. Similarly, regular persistent disks are not usable on N4/C3 VMs.
+
+* A GCP cluster with c3-standard-2, c3-standard-4, n4-standard-2, and n4-standard-4 nodes can erroneously exceed the maximum number of attachable disks, which should be 16 (link:https://issues.redhat.com/browse/OCPBUGS-39258[OCPBUGS-39258]).
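+
+Because mixed machine series are not supported, you might first check which instance types the cluster nodes use. Assuming the well-known `node.kubernetes.io/instance-type` node label is set, which is typical on GCP nodes, you can list the label for every node:
+
+[source,terminal]
+----
+$ oc get nodes -L node.kubernetes.io/instance-type
+----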
\ No newline at end of file
diff --git a/modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-overview.adoc b/modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-overview.adoc
new file mode 100644
index 000000000000..e8f632449beb
--- /dev/null
+++ b/modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-overview.adoc
@@ -0,0 +1,9 @@
+// Module included in the following assemblies:
+//
+// * storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="persistent-storage-csi-gcp-hyperdisk-storage-pools-overview_{context}"]
+= Storage pools for hyperdisk-balanced disks overview
+
+Hyperdisk storage pools can be used with Compute Engine for large-scale storage. A hyperdisk storage pool is a purchased collection of capacity, throughput, and IOPS, which you can then provision for your applications as needed. You can use hyperdisk storage pools to create and manage disks in pools and use the disks across multiple workloads. By managing disks in aggregate, you can save costs while achieving expected capacity and performance growth. By using only the storage that you need in hyperdisk storage pools, you reduce the complexity of forecasting capacity and reduce management by going from managing hundreds of disks to managing a single storage pool.
\ No newline at end of file
diff --git a/modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure.adoc b/modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure.adoc
new file mode 100644
index 000000000000..9d9e983a01e4
--- /dev/null
+++ b/modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure.adoc
@@ -0,0 +1,249 @@
+// Module included in the following assemblies:
+//
+// * storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure_{context}"]
+= Setting up hyperdisk-balanced disks
+
+.Prerequisites
+* Access to the cluster with administrative privileges
+
+.Procedure
+To set up hyperdisk-balanced disks:
+
+ifndef::openshift-dedicated[]
+. Create a GCP cluster with attached disks provisioned with hyperdisk-balanced disks.
+endif::openshift-dedicated[]
+
+ifndef::openshift-dedicated[]
+. Create a storage class specifying the hyperdisk-balanced disks during installation:
+endif::openshift-dedicated[]
+
+ifndef::openshift-dedicated[]
+.. Follow the procedure in the _Installing a cluster on GCP with customizations_ section.
++
+For your install-config.yaml file, use the following example file:
++
+.Example install-config YAML file
+[source, yaml]
+----
+apiVersion: v1
+metadata:
+  name: ci-op-9976b7t2-8aa6b
+
+sshKey: |
+  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+baseDomain: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+platform:
+  gcp:
+    projectID: XXXXXXXXXXXXXXXXXXXXXX
+    region: us-central1
+controlPlane:
+  architecture: amd64
+  name: master
+  platform:
+    gcp:
+      type: n4-standard-4 <1>
+      osDisk:
+        diskType: hyperdisk-balanced <2>
+        diskSizeGB: 200
+  replicas: 3
+compute:
+- architecture: amd64
+  name: worker
+  replicas: 3
+  platform:
+    gcp:
+      type: n4-standard-4 <1>
+      osDisk:
+        diskType: hyperdisk-balanced <2>
+----
+<1> Specifies the node type as n4-standard-4.
+<2> Specifies that the node's root disk is backed by the hyperdisk-balanced disk type. All nodes in the cluster should use the same disk type, either hyperdisk-balanced or pd-*.
++
+[NOTE]
+====
+All nodes in the cluster must support hyperdisk-balanced volumes.
Clusters with mixed nodes are not supported, for example, N2 and N4 nodes using hyperdisk-balanced disks.
+====
+endif::openshift-dedicated[]
+
+ifndef::openshift-dedicated[]
+.. After step 3 in the _Incorporating the Cloud Credential Operator utility manifests_ section, copy the following manifests into the manifests directory created by the installation program:
++
+* cluster_csi_driver.yaml - specifies opting out of the default storage class creation
+* storageclass.yaml - creates a hyperdisk-specific storage class
++
+.Example cluster CSI driver YAML file
+[source, yaml]
+----
+apiVersion: operator.openshift.io/v1
+kind: "ClusterCSIDriver"
+metadata:
+  name: "pd.csi.storage.gke.io"
+spec:
+  logLevel: Normal
+  managementState: Managed
+  operatorLogLevel: Normal
+  storageClassState: Unmanaged <1>
+----
+<1> Disables creation of the default {product-title} storage classes.
++
+.Example storage class YAML file
+[source, yaml]
+----
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hyperdisk-sc <1>
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: pd.csi.storage.gke.io <2>
+volumeBindingMode: WaitForFirstConsumer
+allowVolumeExpansion: true
+reclaimPolicy: Delete
+parameters:
+  type: hyperdisk-balanced <3>
+  replication-type: none
+  provisioned-throughput-on-create: "140Mi" <4>
+  provisioned-iops-on-create: "3000" <5>
+  storage-pools: projects/my-project/zones/us-east4-c/storagePools/pool-us-east4-c <6>
+allowedTopologies: <7>
+- matchLabelExpressions:
+  - key: topology.kubernetes.io/zone
+    values:
+    - us-east4-c
+...
+----
+<1> Specify the name for your storage class. In this example, it is `hyperdisk-sc`.
+<2> `pd.csi.storage.gke.io` specifies the GCP CSI provisioner.
+<3> Specifies using hyperdisk-balanced disks.
+<4> Specifies the throughput value in MiBps using the "Mi" qualifier. For example, if your required throughput is 250 MiBps, specify "250Mi". If you do not specify a value, the capacity is based upon the disk type default.
+<5> Specifies the IOPS value without any qualifiers. For example, if you require 7,000 IOPS, specify "7000". If you do not specify a value, the capacity is based upon the disk type default.
+<6> If using storage pools, specify a list of specific storage pools that you want to use in the format: projects/PROJECT_ID/zones/ZONE/storagePools/STORAGE_POOL_NAME.
+<7> If using storage pools, set `allowedTopologies` to restrict the topology of provisioned volumes to where the storage pool exists. In this example, `us-east4-c`.
+endif::openshift-dedicated[]
+
+. Create a persistent volume claim (PVC) that uses the hyperdisk-specific storage class by using the following example YAML file:
++
+.Example PVC YAML file
+[source, yaml]
+----
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-pvc
+spec:
+  storageClassName: hyperdisk-sc <1>
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 2048Gi <2>
+----
+<1> The PVC references the storage pool-specific storage class. In this example, `hyperdisk-sc`.
+<2> Target storage capacity of the hyperdisk-balanced volume. In this example, `2048Gi`.
+
+. Create a deployment that uses the PVC that you just created. Using a deployment helps ensure that your application has access to the persistent storage even after pod restarts and rescheduling:
+
+.. Ensure a node pool with the specified machine series is up and running before creating the deployment. Otherwise, the pod fails to schedule.
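++
+One way to confirm this, assuming the default `node.kubernetes.io/instance-type` node label and the `n4-standard-4` machine type from this example, is to select the matching nodes:
++
+[source,terminal]
+----
+$ oc get nodes -l node.kubernetes.io/instance-type=n4-standard-4
+----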
+.. Use the following example YAML file to create the deployment:
++
+.Example deployment YAML file
+[source, yaml]
+----
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: postgres
+spec:
+  selector:
+    matchLabels:
+      app: postgres
+  template:
+    metadata:
+      labels:
+        app: postgres
+    spec:
+      nodeSelector:
+        cloud.google.com/machine-family: n4 <1>
+      containers:
+      - name: postgres
+        image: postgres:14-alpine
+        args: [ "sleep", "3600" ]
+        volumeMounts:
+        - name: sdk-volume
+          mountPath: /usr/share/data/
+      volumes:
+      - name: sdk-volume
+        persistentVolumeClaim:
+          claimName: my-pvc <2>
+----
+<1> Specifies the machine family. In this example, it is `n4`.
+<2> Specifies the name of the PVC created in the preceding step. In this example, it is `my-pvc`.
+
+.. Confirm that the deployment was successfully created by running the following command:
++
+[source, terminal]
+----
+$ oc get deployment
+----
++
+.Example output
+[source, terminal]
+----
+NAME       READY   UP-TO-DATE   AVAILABLE   AGE
+postgres   0/1     1            0           42s
+----
++
+It might take a few minutes for hyperdisk instances to complete provisioning and display a READY status.
+
+.. Confirm that PVC `my-pvc` has been successfully bound to a persistent volume (PV) by running the following command:
++
+[source, terminal]
+----
+$ oc get pvc my-pvc
+----
++
+.Example output
++
+[source, terminal]
+----
+NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
+my-pvc   Bound    pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6   2Ti        RWO            hyperdisk-sc                           2m24s
+----
+
+.. Confirm the expected configuration of your hyperdisk-balanced disk:
++
+[source, terminal]
+----
+$ gcloud compute disks list
+----
++
+.Example output
++
+[source, terminal]
+----
+NAME                                     LOCATION       LOCATION_SCOPE  SIZE_GB  TYPE                STATUS
+instance-20240914-173145-boot            us-central1-a  zone            150      pd-standard         READY
+instance-20240914-173145-data-workspace  us-central1-a  zone            100      pd-balanced         READY
+c4a-rhel-vm                              us-central1-a  zone            50       hyperdisk-balanced  READY <1>
+----
+<1> Hyperdisk-balanced disk.
+
+.. If using storage pools, check that the volume is provisioned as specified in your storage class and PVC by running the following command:
++
+[source, terminal]
+----
+$ gcloud compute storage-pools list-disks pool-us-east4-c --zone=us-east4-c
+----
++
+.Example output
++
+[source, terminal]
+----
+NAME                                      STATUS  PROVISIONED_IOPS  PROVISIONED_THROUGHPUT  SIZE_GB
+pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6  READY   3000              140                     2048
+----
+
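+Because the example storage class sets `allowVolumeExpansion: true`, you can grow a bound volume later by patching the PVC storage request. The following is only a sketch, using the `my-pvc` claim from this procedure; note that resizing hyperdisk-balanced disks requires an original volume size of 6Gi or greater:
+
+[source,terminal]
+----
+$ oc patch pvc my-pvc --type merge -p '{"spec":{"resources":{"requests":{"storage":"3Ti"}}}}'
+----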
diff --git a/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc b/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
index 60670a5edd0c..0de2013920bd 100644
--- a/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
@@ -18,7 +18,9 @@ To create CSI-provisioned persistent volumes (PVs) that mount to GCP PD storage

 * *GCP PD CSI Driver Operator*: By default, the Operator provides a storage class that you can use to create PVCs. You can disable this default storage class if desired (see xref:../../storage/container_storage_interface/persistent-storage-csi-sc-manage.adoc#persistent-storage-csi-sc-manage[Managing the default storage class]). You also have the option to create the GCP PD storage class as described in xref:../../storage/persistent_storage/persistent-storage-gce.adoc#persistent-storage-using-gce[Persistent storage using GCE Persistent Disk].

-* *GCP PD driver*: The driver enables you to create and mount GCP PD PVs.
+* *GCP PD driver*: The driver enables you to create and mount GCP PD PVs.
++
+The GCP PD CSI driver supports the C3 instance type for bare metal and the N4 machine series. Both the C3 instance type and the N4 machine series support hyperdisk-balanced disks.

 ifndef::openshift-dedicated[]
 [NOTE]
 ====
 ...
 ====
 endif::openshift-dedicated[]

+== C3 instance type for bare metal and N4 machine series
+
+include::modules/persistent-storage-csi-gcp-hyperdisk-limitations.adoc[leveloffset=+2]
+
+include::modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-overview.adoc[leveloffset=+2]
+
+To set up storage pools, see xref:../../storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc#persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure_persistent-storage-csi-gcp-pd[Setting up hyperdisk-balanced disks].
+
+include::modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure.adoc[leveloffset=+2]
+
+ifndef::openshift-dedicated[]
+[id="resources-for-gcp-c3-n4-instances"]
+[role="_additional-resources"]
+=== Additional resources
+* xref:../../installing/installing_gcp/installing-gcp-customizations.adoc#installing-gcp-customizations[Installing a cluster on GCP with customizations]
+endif::openshift-dedicated[]
+
 include::modules/persistent-storage-csi-about.adoc[leveloffset=+1]

 include::modules/persistent-storage-csi-gcp-pd-storage-class-ref.adoc[leveloffset=+1]
@@ -39,6 +58,7 @@ include::modules/persistent-storage-byok.adoc[leveloffset=+1]
 For information about installing with user-managed encryption for GCP PD, see xref:../../installing/installing_gcp/installing-gcp-customizations.adoc#installation-configuration-parameters_installing-gcp-customizations[Installation configuration parameters].
 endif::openshift-rosa,openshift-dedicated[]

+[id="resources-for-gcp"]
 [role="_additional-resources"]
 == Additional resources
 * xref:../../storage/persistent_storage/persistent-storage-gce.adoc#persistent-storage-using-gce[Persistent storage using GCE Persistent Disk]

From a3e603f8b92b11154a79fa2482ac227dce3b8875 Mon Sep 17 00:00:00 2001
From: Kevin Owen
Date: Mon, 10 Feb 2025 12:13:33 -0500
Subject: [PATCH 248/669] OSDOCS#12641: Reorganize and add docs for etcd
 recovery

---
 _topic_maps/_topic_map.yml                    |   2 +
 .../about-disaster-recovery.adoc              |  20 +-
 .../disaster_recovery/quorum-restoration.adoc |  12 +
 .../scenario-2-restoring-cluster-state.adoc   |   5 +-
 modules/dr-restoring-cluster-state-about.adoc |   2 +-
 modules/dr-restoring-cluster-state-sno.adoc   |  45 ++
 modules/dr-restoring-cluster-state.adoc       | 633 +-----------------
 modules/dr-restoring-etcd-quorum-ha.adoc      |  47 ++
 modules/dr-testing-restore-procedures.adoc    |  78 +++
 9 files changed, 226 insertions(+), 618 deletions(-)
 create mode 100644 backup_and_restore/control_plane_backup_and_restore/disaster_recovery/quorum-restoration.adoc
 create mode 100644 modules/dr-restoring-cluster-state-sno.adoc
 create mode 100644 modules/dr-restoring-etcd-quorum-ha.adoc
 create mode 100644 modules/dr-testing-restore-procedures.adoc

diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml
index fb950d1f5e64..b1100a47b611 100644
--- a/_topic_maps/_topic_map.yml
+++ b/_topic_maps/_topic_map.yml
@@ -3670,6 +3670,8 @@ Topics:
     Topics:
     - Name: About disaster recovery
      File: about-disaster-recovery
+    - Name: Quorum restoration
+      File: quorum-restoration
     - Name: Restoring to a previous cluster state
      File: scenario-2-restoring-cluster-state
     - Name: Recovering from expired control plane certificates
      File: scenario-3-expired-certs
diff --git
a/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/about-disaster-recovery.adoc b/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/about-disaster-recovery.adoc index 28136968142d..326c9ecee553 100644 --- a/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/about-disaster-recovery.adoc +++ b/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/about-disaster-recovery.adoc @@ -17,10 +17,17 @@ state. Disaster recovery requires you to have at least one healthy control plane host. ==== +xref:../../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/quorum-restoration.adoc#dr-quorum-restoration[Quorum restoration]:: This solution handles situations where you have lost the majority of your control plane hosts, leading to etcd quorum loss and the cluster going offline. This solution does not require an etcd backup. ++ +[NOTE] +==== +If you have a majority of your control plane nodes still available and have an etcd quorum, then xref:../../../backup_and_restore/control_plane_backup_and_restore/replacing-unhealthy-etcd-member.adoc#replacing-unhealthy-etcd-member[replace a single unhealthy etcd member]. +==== + xref:../../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[Restoring to a previous cluster state]:: This solution handles situations where you want to restore your cluster to a previous state, for example, if an administrator deletes something critical. -This also includes situations where you have lost the majority of your control plane hosts, leading to etcd quorum loss and the cluster going offline. As long as you have taken an etcd backup, you can follow this procedure to restore your cluster to a previous state. +If you have taken an etcd backup, you can restore your cluster to a previous state. + If applicable, you might also need to xref:../../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-3-expired-certs.adoc#dr-recovering-expired-certs[recover from expired control plane certificates]. + @@ -30,11 +37,6 @@ Restoring to a previous cluster state is a destructive and destablizing action t Prior to performing a restore, see xref:../../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-scenario-2-restoring-cluster-state-about_dr-restoring-cluster-state[About restoring cluster state] for more information on the impact to the cluster. ==== -+ -[NOTE] -==== -If you have a majority of your masters still available and have an etcd quorum, then follow the procedure to xref:../../../backup_and_restore/control_plane_backup_and_restore/replacing-unhealthy-etcd-member.adoc#replacing-unhealthy-etcd-member[replace a single unhealthy etcd member]. -==== xref:../../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-3-expired-certs.adoc#dr-recovering-expired-certs[Recovering from expired control plane certificates]:: This solution handles situations where your control plane certificates have @@ -42,3 +44,9 @@ expired. For example, if you shut down your cluster before the first certificate rotation, which occurs 24 hours after installation, your certificates will not be rotated and will expire. You can follow this procedure to recover from expired control plane certificates. 
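When deciding between quorum restoration and a full restore, assuming the API server still responds, a quick look at control plane and etcd health can help. A minimal sketch using commands that appear elsewhere in these procedures:

[source,terminal]
----
$ oc get nodes -l node-role.kubernetes.io/master <1>
$ oc -n openshift-etcd get pods -l k8s-app=etcd <2>
----
<1> Shows how many control plane nodes are in a `Ready` state.
<2> Shows how many etcd pods are running. Fewer than a majority of running members suggests quorum loss, pointing to the quorum restoration path rather than a backup-based restore.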
+ +// Testing restore procedures +include::modules/dr-testing-restore-procedures.adoc[leveloffset=+1] +[role="_additional-resources"] +.Additional resources +* xref:../../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[Restoring to a previous cluster state] \ No newline at end of file diff --git a/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/quorum-restoration.adoc b/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/quorum-restoration.adoc new file mode 100644 index 000000000000..2f8c6ef95073 --- /dev/null +++ b/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/quorum-restoration.adoc @@ -0,0 +1,12 @@ +:_mod-docs-content-type: ASSEMBLY +[id="dr-quorum-restoration"] += Quorum restoration +include::_attributes/common-attributes.adoc[] +:context: dr-quorum-restoration + +toc::[] + +You can use the `quorum-restore.sh` script to restore etcd quorum on clusters that are offline due to quorum loss. + +// Restoring etcd quorum for high availability clusters +include::modules/dr-restoring-etcd-quorum-ha.adoc[leveloffset=+1] \ No newline at end of file diff --git a/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc b/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc index 3230a8f43a05..a3189163ffc5 100644 --- a/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc +++ b/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc @@ -11,6 +11,9 @@ To restore the cluster to a previous state, you must have previously xref:../../ // About restoring to a previous cluster state include::modules/dr-restoring-cluster-state-about.adoc[leveloffset=+1] +// Restoring to a previous cluster state for a single node +include::modules/dr-restoring-cluster-state-sno.adoc[leveloffset=+1] + // Restoring to a previous cluster state include::modules/dr-restoring-cluster-state.adoc[leveloffset=+1] @@ -23,5 +26,3 @@ include::modules/dr-restoring-cluster-state.adoc[leveloffset=+1] * xref:../../../installing/installing_bare_metal/ipi/ipi-install-expanding-the-cluster.adoc#replacing-a-bare-metal-control-plane-node_ipi-install-expanding[Replacing a bare-metal control plane node] include::modules/dr-scenario-cluster-state-issues.adoc[leveloffset=+1] - - diff --git a/modules/dr-restoring-cluster-state-about.adoc b/modules/dr-restoring-cluster-state-about.adoc index dc29df659949..0390ba4c3f41 100644 --- a/modules/dr-restoring-cluster-state-about.adoc +++ b/modules/dr-restoring-cluster-state-about.adoc @@ -18,7 +18,7 @@ Restoring to a previous cluster state is a destructive and destablizing action t If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup. ==== -Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, persistent volume controllers, and OpenShift operators, including the network operator. +Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. 
This can impact the behavior of watching components like kubelets, Kubernetes controller managers, persistent volume controllers, and {product-title} Operators, including the network Operator.

 It can cause Operator churn when the content in etcd does not match the actual content on disk, causing Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd to get stuck when files on disk conflict with content in etcd. This can require manual actions to resolve the issues.

diff --git a/modules/dr-restoring-cluster-state-sno.adoc b/modules/dr-restoring-cluster-state-sno.adoc
new file mode 100644
index 000000000000..ae59b40ab827
--- /dev/null
+++ b/modules/dr-restoring-cluster-state-sno.adoc
@@ -0,0 +1,45 @@
+// Module included in the following assemblies:
+//
+// * disaster_recovery/scenario-2-restoring-cluster-state.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="dr-restoring-cluster-state-sno_{context}"]
+= Restoring to a previous cluster state for a single node
+
+You can use a saved etcd backup to restore a previous cluster state on a single node.
+
+[IMPORTANT]
+====
+When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an {product-title} {product-version}.2 cluster must use an etcd backup that was taken from {product-version}.2.
+====
+
+.Prerequisites
+
+* Access to the cluster as a user with the `cluster-admin` role through a certificate-based `kubeconfig` file, like the one that was used during installation.
+* You have SSH access to control plane hosts.
+* A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: `snapshot_.db` and `static_kuberesources_.tar.gz`.
+
+.Procedure
+
+. Use SSH to connect to the single node and copy the etcd backup to the `/home/core` directory by running the following command:
++
+[source,terminal]
+----
+$ cp /home/core
+----
+
+. Run the following command on the single node to restore the cluster from a previous backup:
++
+[source,terminal]
+----
+$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/
+----
+
+. Exit the SSH session.
+
+. Monitor the recovery progress of the control plane by running the following command:
++
+[source,terminal]
+----
+$ oc adm wait-for-stable-cluster
+----
\ No newline at end of file
diff --git a/modules/dr-restoring-cluster-state.adoc b/modules/dr-restoring-cluster-state.adoc
index 226d74e14234..d9b869c284e8 100644
--- a/modules/dr-restoring-cluster-state.adoc
+++ b/modules/dr-restoring-cluster-state.adoc
@@ -3,6 +3,8 @@
 // * disaster_recovery/scenario-2-restoring-cluster-state.adoc
 // * post_installation_configuration/cluster-tasks.adoc

+// Contributors: The documentation for this section changed drastically for 4.18+.
+
 // Contributors: Some changes for the `etcd` restore procedure are only valid for 4.14+.
 // In the 4.14+ documentation, OVN-K requires different steps because there is no centralized OVN
 // control plane to be converted. For more information, see PR #64939.
@@ -14,23 +16,25 @@
 [id="dr-scenario-2-restoring-cluster-state_{context}"]
 = Restoring to a previous cluster state

-You can use a saved `etcd` backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.
+You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.
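Before running a restore, it can help to confirm that the backup directory actually contains both required files. A minimal check; the path follows the `/home/core/assets/backup` convention used later in this module, and the timestamps in the sample output are illustrative only:

[source,terminal]
----
$ ls -1 /home/core/assets/backup/
snapshot_2025-01-22_120000.db
static_kuberesources_2025-01-22_120000.tar.gz
----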
+
+For high availability (HA) clusters, a three-node HA cluster requires you to shut down etcd on two hosts to avoid a cluster split. Quorum requires a simple majority of nodes. The minimum number of nodes required for quorum on a three-node HA cluster is two. If you start a new cluster from backup on your recovery host, the other etcd members might still be able to form quorum and continue service.

 [NOTE]
 ====
-If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a more simple `etcd` recovery procedure.
+If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a simpler etcd recovery procedure. For {product-title} on a single node, see "Restoring to a previous cluster state for a single node".
 ====

 [IMPORTANT]
 ====
-When you restore your cluster, you must use an `etcd` backup that was taken from the same z-stream release. For example, an {product-title} 4.7.2 cluster must use an `etcd` backup that was taken from 4.7.2.
+When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an {product-title} {product-version}.2 cluster must use an etcd backup that was taken from {product-version}.2.
 ====

 .Prerequisites

 * Access to the cluster as a user with the `cluster-admin` role through a certificate-based `kubeconfig` file, like the one that was used during installation.
 * A healthy control plane host to use as the recovery host.
-* SSH access to control plane hosts.
+* You have SSH access to control plane hosts.
 * A backup directory containing both the `etcd` snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: `snapshot_.db` and `static_kuberesources_.tar.gz`.

 [IMPORTANT]
 ====
@@ -40,7 +44,7 @@ For non-recovery control plane nodes, it is not required to establish SSH connec

 .Procedure

-. Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.
+. Select a control plane host to use as the recovery host. This is the host that you run the restore operation on.

 . Establish SSH connectivity to each of the control plane nodes, including the recovery host.
+
@@ -51,641 +55,52 @@ If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.
 ====

-. Copy the `etcd` backup directory to the recovery control plane host.
-+
-This procedure assumes that you copied the `backup` directory containing the `etcd` snapshot and the resources for the static pods to the `/home/core/` directory of your recovery control plane host.

-. Stop the static pods on any other control plane nodes.
-+
-[NOTE]
-====
-You do not need to stop the static pods on the recovery host.
-====

-.. Access a control plane host that is not the recovery host.

-.. Move the existing etcd pod file out of the kubelet manifest directory by running:
-+
-[source,terminal]
-----
-$ sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp
-----

-.. Verify that the `etcd` pods are stopped by using:
-+
-[source,terminal]
-----
-$ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard"
-----
-+
-If the output of this command is not empty, wait a few minutes and check again.

-..
Move the existing `kube-apiserver` file out of the kubelet manifest directory by running: -+ -[source,terminal] ----- -$ sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp ----- - -.. Verify that the `kube-apiserver` containers are stopped by running: -+ -[source,terminal] ----- -$ sudo crictl ps | grep kube-apiserver | egrep -v "operator|guard" ----- -+ -If the output of this command is not empty, wait a few minutes and check again. - -.. Move the existing `kube-controller-manager` file out of the kubelet manifest directory by using: -+ -[source,terminal] ----- -$ sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp ----- - -.. Verify that the `kube-controller-manager` containers are stopped by running: -+ -[source,terminal] ----- -$ sudo crictl ps | grep kube-controller-manager | egrep -v "operator|guard" ----- -If the output of this command is not empty, wait a few minutes and check again. - -.. Move the existing `kube-scheduler` file out of the kubelet manifest directory by using: -+ -[source,terminal] ----- -$ sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp ----- - -.. Verify that the `kube-scheduler` containers are stopped by using: -+ -[source,terminal] ----- -$ sudo crictl ps | grep kube-scheduler | egrep -v "operator|guard" ----- -If the output of this command is not empty, wait a few minutes and check again. - -.. Move the `etcd` data directory to a different location with the following example: -+ -[source,terminal] ----- -$ sudo mv -v /var/lib/etcd/ /tmp ----- - -.. If the `/etc/kubernetes/manifests/keepalived.yaml` file exists and the node is deleted, follow these steps: - -... Move the `/etc/kubernetes/manifests/keepalived.yaml` file out of the kubelet manifest directory: -+ -[source,terminal] ----- -$ sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp ----- - -... Verify that any containers managed by the `keepalived` daemon are stopped: -+ -[source,terminal] ----- -$ sudo crictl ps --name keepalived ----- -+ -The output of this command should be empty. If it is not empty, wait a few minutes and check again. - -... Check if the control plane has any Virtual IPs (VIPs) assigned to it: -+ -[source,terminal] ----- -$ ip -o address | egrep '|' ----- - -... For each reported VIP, run the following command to remove it: -+ -[source,terminal] ----- -$ sudo ip address del dev ----- - -.. Repeat this step on each of the other control plane hosts that is not the recovery host. - -. Access the recovery control plane host. - -. If the `keepalived` daemon is in use, verify that the recovery control plane node owns the VIP: -+ -[source,terminal] ----- -$ ip -o address | grep ----- -+ -The address of the VIP is highlighted in the output if it exists. This command returns an empty string if the VIP is not set or configured incorrectly. - -. If the cluster-wide proxy is enabled, be sure that you have exported the `NO_PROXY`, `HTTP_PROXY`, and `HTTPS_PROXY` environment variables. -+ -[TIP] -==== -You can check whether the proxy is enabled by reviewing the output of `oc get proxy cluster -o yaml`. The proxy is enabled if the `httpProxy`, `httpsProxy`, and `noProxy` fields have values set. -==== - -. 
Run the restore script on the recovery control plane host and pass in the path to the `etcd` backup directory: -+ -[source,terminal] ----- -$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup ----- -+ -.Example script output -[source,terminal] ----- -...stopping kube-scheduler-pod.yaml -...stopping kube-controller-manager-pod.yaml -...stopping etcd-pod.yaml -...stopping kube-apiserver-pod.yaml -Waiting for container etcd to stop -.complete -Waiting for container etcdctl to stop -.............................complete -Waiting for container etcd-metrics to stop -complete -Waiting for container kube-controller-manager to stop -complete -Waiting for container kube-apiserver to stop -..........................................................................................complete -Waiting for container kube-scheduler to stop -complete -Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup -starting restore-etcd static pod -starting kube-apiserver-pod.yaml -static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml -starting kube-controller-manager-pod.yaml -static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml -starting kube-scheduler-pod.yaml -static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml ----- -+ - -The cluster-restore.sh script must show that `etcd`, `kube-apiserver`, `kube-controller-manager`, and `kube-scheduler` pods are stopped and then started at the end of the restore process. -+ -[NOTE] -==== -The restore process can cause nodes to enter the `NotReady` state if the node certificates were updated after the last `etcd` backup. -==== - -. Check the nodes to ensure they are in the `Ready` state. -.. Run the following command: -+ -[source,terminal] ----- -$ oc get nodes -w ----- -+ -.Sample output -[source,terminal] ----- -NAME STATUS ROLES AGE VERSION -host-172-25-75-28 Ready master 3d20h v1.31.3 -host-172-25-75-38 Ready infra,worker 3d20h v1.31.3 -host-172-25-75-40 Ready master 3d20h v1.31.3 -host-172-25-75-65 Ready master 3d20h v1.31.3 -host-172-25-75-74 Ready infra,worker 3d20h v1.31.3 -host-172-25-75-79 Ready worker 3d20h v1.31.3 -host-172-25-75-86 Ready worker 3d20h v1.31.3 -host-172-25-75-98 Ready infra,worker 3d20h v1.31.3 ----- -+ -It can take several minutes for all nodes to report their state. - -.. If any nodes are in the `NotReady` state, log in to the nodes and remove all of the PEM files from the `/var/lib/kubelet/pki` directory on each node. You can SSH into the nodes or use the terminal window in the web console. -+ -[source,terminal] ----- -$ ssh -i core@ ----- -+ -.Sample `pki` directory -[source,terminal] ----- -sh-4.4# pwd -/var/lib/kubelet/pki -sh-4.4# ls -kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem -kubelet-client-current.pem kubelet-server-current.pem ----- - -. Restart the kubelet service on all control plane hosts. - -.. From the recovery host, run: -+ -[source,terminal] ----- -$ sudo systemctl restart kubelet.service ----- - -.. Repeat this step on all other control plane hosts. - -. Approve the pending Certificate Signing Requests (CSRs): -+ -[NOTE] -==== -Clusters with no worker nodes, such as single-node clusters or clusters consisting of three schedulable control plane nodes, will not have any pending CSRs to approve. You can skip all the commands listed in this step. -==== - -.. 
Get the list of current CSRs by running: -+ -[source,terminal] ----- -$ oc get csr ----- -+ -.Example output ----- -NAME AGE SIGNERNAME REQUESTOR CONDITION -csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node: Pending <1> -csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node: Pending <1> -csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending <2> -csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending <2> -... ----- -<1> A pending kubelet serving CSR, requested by the node for the kubelet serving endpoint. -<2> A pending kubelet client CSR, requested with the `node-bootstrapper` node bootstrap credentials. - -.. Review the details of a CSR to verify that it is valid by running: -+ -[source,terminal] ----- -$ oc describe csr <1> ----- -<1> `` is the name of a CSR from the list of current CSRs. - -.. Approve each valid `node-bootstrapper` CSR by running: -+ -[source,terminal] ----- -$ oc adm certificate approve ----- - -.. For user-provisioned installations, approve each valid kubelet service CSR by running: -+ -[source,terminal] ----- -$ oc adm certificate approve ----- - -. Verify that the single member control plane has started successfully. - -.. From the recovery host, verify that the `etcd` container is running by using: -+ -[source,terminal] ----- -$ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard" ----- -+ -.Example output -[source,terminal] ----- -3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0 ----- - -.. From the recovery host, verify that the `etcd` pod is running by using: -+ -[source,terminal] ----- -$ oc -n openshift-etcd get pods -l k8s-app=etcd ----- -+ -.Example output -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s ----- -+ -If the status is `Pending`, or the output lists more than one running `etcd` pod, wait a few minutes and check again. - -. If you are using the `OVNKubernetes` network plugin, you must restart `ovnkube-controlplane` pods. -.. Delete all of the `ovnkube-controlplane` pods by running: -+ -[source,terminal] ----- -$ oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane ----- -.. Verify that all of the `ovnkube-controlplane` pods were redeployed by using: -+ -[source,terminal] ----- -$ oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane ----- - -. If you are using the OVN-Kubernetes network plugin, restart the Open Virtual Network (OVN) Kubernetes pods on all the nodes one by one. Use the following steps to restart OVN-Kubernetes pods on each node: -+ -[IMPORTANT] -==== -.Restart OVN-Kubernetes pods in the following order -. The recovery control plane host -. The other control plane hosts (if available) -. The other nodes -==== -+ -[NOTE] -==== -Validating and mutating admission webhooks can reject pods. If you add any additional webhooks with the `failurePolicy` set to `Fail`, then they can reject pods and the restoration process can fail. You can avoid this by saving and deleting webhooks while restoring the cluster state. After the cluster state is restored successfully, you can enable the webhooks again. - -Alternatively, you can temporarily set the `failurePolicy` to `Ignore` while restoring the cluster state. 
After the cluster state is restored successfully, you can set the `failurePolicy` to `Fail`. -==== - -.. Remove the northbound database (nbdb) and southbound database (sbdb). Access the recovery host and the remaining control plane nodes by using Secure Shell (SSH) and run: -+ -[source,terminal] ----- -$ sudo rm -f /var/lib/ovn-ic/etc/*.db ----- - -.. Restart the OpenVSwitch services. Access the node by using Secure Shell (SSH) and run the following command: -+ -[source,terminal] ----- -$ sudo systemctl restart ovs-vswitchd ovsdb-server ----- - -.. Delete the `ovnkube-node` pod on the node by running the following command, replacing `` with the name of the node that you are restarting: -+ -[source,terminal] ----- -$ oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName== ----- -+ - -.. Verify that the `ovnkube-node` pod is running again with: -+ -[source,terminal] ----- -$ oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName== ----- -+ -[NOTE] -==== -It might take several minutes for the pods to restart. -==== - -. Delete and re-create other non-recovery, control plane machines, one by one. After the machines are re-created, a new revision is forced and `etcd` automatically scales up. -+ -** If you use a user-provisioned bare metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal". -+ -[WARNING] -==== -Do not delete and re-create the machine for the recovery host. -==== -+ -** If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps: -+ -[WARNING] -==== -Do not delete and re-create the machine for the recovery host. - -For bare metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node". -==== -.. Obtain the machine for one of the lost control plane hosts. -+ -In a terminal that has access to the cluster as a cluster-admin user, run the following command: +. 
Using SSH, connect to each control plane node and run the following command to disable etcd: + [source,terminal] ---- -$ oc get machines -n openshift-machine-api -o wide ----- -+ -Example output: -+ -[source,terminal] +$ sudo -E /usr/local/bin/disable-etcd.sh ---- -NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE -clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped <1> -clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running -clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running -clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running -clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running -clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running ----- -<1> This is the control plane machine for the lost control plane host, `ip-10-0-131-183.ec2.internal`. -.. Delete the machine of the lost control plane host by running: -+ -[source,terminal] ----- -$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 <1> ----- -<1> Specify the name of the control plane machine for the lost control plane host. +. Copy the etcd backup directory to the recovery control plane host. + -A new machine is automatically provisioned after deleting the machine of the lost control plane host. +This procedure assumes that you copied the `backup` directory containing the etcd snapshot and the resources for the static pods to the `/home/core/` directory of your recovery control plane host. -.. Verify that a new machine has been created by running: +. Use SSH to connect to the recovery host and restore the cluster from a previous backup by running the following command: + [source,terminal] ---- -$ oc get machines -n openshift-machine-api -o wide +$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/ ---- -+ -Example output: -+ -[source,terminal] ----- -NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE -clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running -clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running -clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running <1> -clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running -clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running -clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running ----- -<1> The new machine, `clustername-8qw5l-master-3` is being created and is ready after the phase changes from `Provisioning` to `Running`. 
-+
-It might take a few minutes for the new machine to be created. The `etcd` cluster Operator will automatically sync when the machine or node returns to a healthy state.
-.. Repeat these steps for each lost control plane host that is not the recovery host.

+. Exit the SSH session.

-. Turn off the quorum guard by entering:
+. Once the API responds, turn off the etcd Operator quorum guard by running the following command:
+
[source,terminal]
----
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'
----
-+
-This command ensures that you can successfully re-create secrets and roll out the static pods.

-. In a separate terminal window within the recovery host, export the recovery `kubeconfig` file by running:
-+
-[source,terminal]
-----
-$ export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig
-----

-. Force `etcd` redeployment.
-+
-In the same terminal window where you exported the recovery `kubeconfig` file, run:
-+
-[source,terminal]
-----
-$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge <1>
-----
-<1> The `forceRedeploymentReason` value must be unique, which is why a timestamp is appended.
-+
-The `etcd` redeployment starts.
-+
-When the `etcd` cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.

-. Turn the quorum guard back on by entering:
-+
-[source,terminal]
-----
-$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
-----

-. You can verify that the `unsupportedConfigOverrides` section is removed from the object by running:
-+
-[source,terminal]
-----
-$ oc get etcd/cluster -oyaml
-----

-. Verify all nodes are updated to the latest revision.
-+
-In a terminal that has access to the cluster as a `cluster-admin` user, run:
-+
-[source,terminal]
-----
-$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
-----
-+
-Review the `NodeInstallerProgressing` status condition for `etcd` to verify that all nodes are at the latest revision. The output shows `AllNodesAtLatestRevision` upon successful update:
-+
-[source,terminal]
-----
-AllNodesAtLatestRevision
-3 nodes are at revision 7 <1>
-----
-<1> In this example, the latest revision number is `7`.
-+
-If the output includes multiple revision numbers, such as `2 nodes are at revision 6; 1 nodes are at revision 7`, this means that the update is still in progress. Wait a few minutes and try again.

-. After `etcd` is redeployed, force new rollouts for the control plane. `kube-apiserver` will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.
-+
-In a terminal that has access to the cluster as a `cluster-admin` user, run:

-.. Force a new rollout for `kube-apiserver`:
-+
-[source,terminal]
-----
-$ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
-----
-+
-Verify all nodes are updated to the latest revision.
-+ -[source,terminal] ----- -$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' ----- -+ -Review the `NodeInstallerProgressing` status condition to verify that all nodes are at the latest revision. The output shows `AllNodesAtLatestRevision` upon successful update: -+ -[source,terminal] ----- -AllNodesAtLatestRevision -3 nodes are at revision 7 <1> ----- -<1> In this example, the latest revision number is `7`. -+ -If the output includes multiple revision numbers, such as `2 nodes are at revision 6; 1 nodes are at revision 7`, this means that the update is still in progress. Wait a few minutes and try again. - -.. Force a new rollout for the Kubernetes controller manager by running the following command: -+ -[source,terminal] ----- -$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge ----- -+ -Verify all nodes are updated to the latest revision by running: -+ -[source,terminal] ----- -$ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' ----- -+ -Review the `NodeInstallerProgressing` status condition to verify that all nodes are at the latest revision. The output shows `AllNodesAtLatestRevision` upon successful update: -+ -[source,terminal] ----- -AllNodesAtLatestRevision -3 nodes are at revision 7 <1> ----- -<1> In this example, the latest revision number is `7`. -+ -If the output includes multiple revision numbers, such as `2 nodes are at revision 6; 1 nodes are at revision 7`, this means that the update is still in progress. Wait a few minutes and try again. - -.. Force a new rollout for the `kube-scheduler` by running: -+ -[source,terminal] ----- -$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge ----- -+ -Verify all nodes are updated to the latest revision by using: -+ -[source,terminal] ----- -$ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' ----- -+ -Review the `NodeInstallerProgressing` status condition to verify that all nodes are at the latest revision. The output shows `AllNodesAtLatestRevision` upon successful update: -+ -[source,terminal] ----- -AllNodesAtLatestRevision -3 nodes are at revision 7 <1> ----- -<1> In this example, the latest revision number is `7`. -+ -If the output includes multiple revision numbers, such as `2 nodes are at revision 6; 1 nodes are at revision 7`, this means that the update is still in progress. Wait a few minutes and try again. -. Monitor the platform Operators by running: +. Monitor the recovery progress of the control plane by running the following command: + [source,terminal] ---- $ oc adm wait-for-stable-cluster ---- -+ -This process can take up to 15 minutes. -. Verify that all control plane hosts have started and joined the cluster. -+ -In a terminal that has access to the cluster as a `cluster-admin` user, run the following command: -+ -[source,terminal] ----- -$ oc -n openshift-etcd get pods -l k8s-app=etcd ----- +. 
Once recovered, enable the quorum guard by running the following command:
++
+[source,terminal]
+----
-etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h
-etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h
-etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h
+$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
+----

-To ensure that all workloads return to normal operation following a recovery procedure, restart all control plane nodes.

-[NOTE]
-====
-On completion of the previous procedural steps, you might need to wait a few minutes for all services to return to their restored state. For example, authentication by using `oc login` might not immediately work until the OAuth server pods are restarted.

-Consider using the `system:admin` `kubeconfig` file for immediate authentication. This method basis its authentication on SSL/TLS client certificates as against OAuth tokens. You can authenticate with this file by issuing the following command:

[source,terminal]
----
-$ export KUBECONFIG=/auth/kubeconfig
----

-Issue the following command to display your authenticated user name:

-[source,terminal]
-----
-$ oc whoami
-----
-====

+.Troubleshooting

+If you see no progress rolling out the etcd static pods, you can force redeployment from the `cluster-etcd-operator` by running the following command:

+[source,terminal]
+----
+$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$(date --rfc-3339=ns )"'"}}' --type=merge
+----
\ No newline at end of file
diff --git a/modules/dr-restoring-etcd-quorum-ha.adoc b/modules/dr-restoring-etcd-quorum-ha.adoc
new file mode 100644
index 000000000000..89e17e6ff5e6
--- /dev/null
+++ b/modules/dr-restoring-etcd-quorum-ha.adoc
@@ -0,0 +1,47 @@
+// Module included in the following assemblies:
+//
+// * disaster_recovery/quorum-restoration.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="dr-restoring-etcd-quorum-ha_{context}"]
+= Restoring etcd quorum for high availability clusters
+
+You can use the `quorum-restore.sh` script to instantly bring back a new single-member etcd cluster based on its local data directory and mark all other members as invalid by retiring the previous cluster identifier. No prior backup is required to restore the control plane.
+
+[WARNING]
+====
+You might experience data loss if the host that runs the restoration does not have all data replicated to it.
+====
+
+.Prerequisites
+
+* You have SSH access to the node used to restore quorum.
+
+.Procedure
+
+. Select a control plane host to use as the recovery host. You run the restore operation on this host.
+
+. Using SSH, connect to the chosen recovery node and run the following command to restore etcd quorum:
++
+[source,terminal]
+----
+$ sudo -E /usr/local/bin/quorum-restore.sh
+----
+
+. Exit the SSH session.
+. Wait until the control plane recovers by running the following command:
++
+[source,terminal]
+----
+$ oc adm wait-for-stable-cluster
+----
+
+.Troubleshooting
+
+If you see no progress rolling out the etcd static pods, you can force redeployment from the `cluster-etcd-operator` pod by running the following command:
+
+[source,terminal]
+----
+$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$(date --rfc-3339=ns )"'"}}' --type=merge
+----
\ No newline at end of file
diff --git a/modules/dr-testing-restore-procedures.adoc b/modules/dr-testing-restore-procedures.adoc
new file mode 100644
index 000000000000..c93a101acebc
--- /dev/null
+++ b/modules/dr-testing-restore-procedures.adoc
@@ -0,0 +1,78 @@
+// Module included in the following assemblies:
+//
+// * disaster_recovery/about-disaster-recovery.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="dr-testing-restore-procedures_{context}"]
+= Testing restore procedures
+
+Testing the restore procedure is important to ensure that your automation and workload handle the new cluster state gracefully. Because etcd quorum is complex and the etcd Operator attempts to repair issues automatically, it is often difficult to bring your cluster into a sufficiently broken state that it can be restored.
+
+[WARNING]
+====
+You **must** have SSH access to the cluster. Your cluster might be entirely lost without SSH access.
+====
+
+.Prerequisites
+
+* You have SSH access to control plane hosts.
+* You have installed the {oc-first}.
+
+.Procedure
+
+. Use SSH to connect to each of your nonrecovery nodes and run the following commands to disable etcd and the `kubelet` service:
+
+.. Disable etcd by running the following command:
++
+[source,terminal]
+----
+$ sudo /usr/local/bin/disable-etcd.sh
+----
+
+.. Delete variable data for etcd by running the following command:
++
+[source,terminal]
+----
+$ sudo rm -rf /var/lib/etcd
+----
+
+.. Disable the `kubelet` service by running the following command:
++
+[source,terminal]
+----
+$ sudo systemctl disable kubelet.service
+----
+
+. Exit every SSH session.
+
+. Run the following command to ensure that your nonrecovery nodes are in a `NOT READY` state:
++
+[source,terminal]
+----
+$ oc get nodes
+----
+
+. Follow the steps in "Restoring to a previous cluster state" to restore your cluster.
+
+. After you restore the cluster and the API responds, use SSH to connect to each nonrecovery node and enable the `kubelet` service:
++
+[source,terminal]
+----
+$ sudo systemctl enable kubelet.service
+----
+
+. Exit every SSH session.
+
+. Run the following command to observe your nodes coming back into the `READY` state:
++
+[source,terminal]
+----
+$ oc get nodes
+----
+
+. Run the following command to verify that etcd is available:
++
+[source,terminal]
+----
+$ oc get pods -n openshift-etcd
+----
\ No newline at end of file

From 9ac0f03e16698bbea9fa431f7a9086ed88910acc Mon Sep 17 00:00:00 2001
From: Laura Bailey
Date: Wed, 8 Jan 2025 16:54:39 +1000
Subject: [PATCH 249/669] OSDOCS-12809 Monitoring config for managed OpenShift

Adding condition to remove unsupported monitoring config instruction from managed OpenShift.
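The config map conditionalized in the patch below is the standard `cluster-monitoring-config` object in the `openshift-monitoring` namespace. A minimal sketch of its shape; the `retention` value shown is illustrative only:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Settings for platform monitoring components go here; for example,
    # an illustrative retention period for the platform Prometheus:
    prometheusK8s:
      retention: 24h
----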
--- ...nfig-map-reference-for-the-cluster-monitoring-operator.adoc | 3 +++ 1 file changed, 3 insertions(+) diff --git a/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc b/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc index b10eac377c1c..8bb5ab53fdf5 100644 --- a/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc +++ b/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc @@ -18,8 +18,11 @@ toc::[] Parts of {product-title} cluster monitoring are configurable. The API is accessible by setting parameters defined in various config maps. +ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] * To configure monitoring components, edit the `ConfigMap` object named `cluster-monitoring-config` in the `openshift-monitoring` namespace. These configurations are defined by link:#clustermonitoringconfiguration[ClusterMonitoringConfiguration]. +endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] + * To configure monitoring components that monitor user-defined projects, edit the `ConfigMap` object named `user-workload-monitoring-config` in the `openshift-user-workload-monitoring` namespace. These configurations are defined by link:#userworkloadconfiguration[UserWorkloadConfiguration]. From c341ba679c9d53fe955d40e9853710183ec455aa Mon Sep 17 00:00:00 2001 From: Gabriel McGoldrick Date: Tue, 3 Dec 2024 14:48:48 +0000 Subject: [PATCH 250/669] OBSDOCS-1521 COO GA 1.0 Release notes --- ...-observability-operator-release-notes.adoc | 73 +++++++++++++++++-- 1 file changed, 66 insertions(+), 7 deletions(-) diff --git a/observability/cluster_observability_operator/cluster-observability-operator-release-notes.adoc b/observability/cluster_observability_operator/cluster-observability-operator-release-notes.adoc index 19f661ec9678..5b4018acd6b9 100644 --- a/observability/cluster_observability_operator/cluster-observability-operator-release-notes.adoc +++ b/observability/cluster_observability_operator/cluster-observability-operator-release-notes.adoc @@ -7,24 +7,83 @@ include::_attributes/common-attributes.adoc[] toc::[] -:FeatureName: The Cluster Observability Operator -include::snippets/technology-preview.adoc[leveloffset=+2] - The {coo-first} is an optional {product-title} Operator that enables administrators to create standalone monitoring stacks that are independently configurable for use by different services and users. The {coo-short} complements the built-in monitoring capabilities of {product-title}. You can deploy it in parallel with the default platform and user workload monitoring stacks managed by the {cmo-first}. These release notes track the development of the {coo-full} in {product-title}. +[id="cluster-observability-operator-release-notes-1-0_{context}"] +== {coo-full} 1.0 + +// Need to check if there is an advisory generated now that the build system has moved to Konflux +// The following advisory is available for {coo-full} 1.0: +// +// * link:https://access.redhat.com/errata/RHSA-2024:????[RHEA-2024:??? {coo-full} 1.0] + + +[id="cluster-observability-operator-1-0-new-features-enhancements_{context}"] +=== New features and enhancements + +* {coo-short} is now enabled for {product-title} platform monitoring. (link:https://issues.redhat.com/browse/COO-476[*COO-476*]) +** Implements HTTPS support for {coo-short} web server. (link:https://issues.redhat.com/browse/COO-480[*COO-480*]) +** Implements authn/authz for {coo-short} web server. 
(link:https://issues.redhat.com/browse/COO-481[*COO-481*])
+** Configures ServiceMonitor resource to collect metrics from {coo-short}. (link:https://issues.redhat.com/browse/COO-482[*COO-482*])
+** Adds `operatorframework.io/cluster-monitoring=true` annotation to the OLM bundle. (link:https://issues.redhat.com/browse/COO-483[*COO-483*])
+** Defines the alerting strategy for {coo-short}. (link:https://issues.redhat.com/browse/COO-484[*COO-484*])
+** Configures PrometheusRule for alerting. (link:https://issues.redhat.com/browse/COO-485[*COO-485*])
+
+* Support level annotations have been added to the `UIPlugin` CR when created. The support level is based on the plugin type, with values of `DevPreview`, `TechPreview`, or `GeneralAvailability`. (link:https://issues.redhat.com/browse/COO-318[*COO-318*])
+
+// must-gather postponed to 1.1
+//* You can now gather debugging information about {coo-short} by using the `oc adm must-gather` CLI command. (link:https://issues.redhat.com/browse/COO-194[*COO-194*])
+
+* You can now configure the Alertmanager `scheme` and `tlsConfig` fields in the Prometheus CR. (link:https://issues.redhat.com/browse/COO-219[*COO-219*])
+
+// Dev preview so cannot document
+//* You can now install the Monitoring UI plugin using {coo-short}. (link:https://issues.redhat.com/browse/COO-262[*COO-262*])
+
+* The extended Technical Preview for the troubleshooting panel adds support for correlating traces with Kubernetes resources and directly with other observable signals including logs, alerts, metrics, and network events. (link:https://issues.redhat.com/browse/COO-450[*COO-450*])
+** You can select a Tempo instance and tenant when you navigate to the tracing page by clicking *Observe -> Tracing* in the web console. The preview troubleshooting panel only works with the `openshift-tracing / platform` instance and the `platform` tenant.
+** The troubleshooting panel works best in the *Administrator* perspective. It has limited functionality in the Developer perspective due to authorization issues with some back ends, most notably Prometheus for metrics and alerts. This will be addressed in a future release.

 The following table provides information about which features are available depending on the version of {coo-full} and {product-title}:

[cols="1,1,1,1,1", options="header"]
|===
| COO Version | OCP Versions | Distributed Tracing | Logging | Troubleshooting Panel

| 1.0+ | 4.12 - 4.15 | ✔ | ✔ | ✘
| 1.0+ | 4.16+ | ✔ | ✔ | ✔
|===

+[id="cluster-observability-operator-1-0-CVEs"]
+=== CVEs
+
+* link:https://access.redhat.com/security/cve/CVE-2023-26159[CVE-2023-26159]
+* link:https://access.redhat.com/security/cve/CVE-2024-28849[CVE-2024-28849]
+* link:https://access.redhat.com/security/cve/CVE-2024-45338[CVE-2024-45338]
+
+[id="cluster-observability-operator-1-0-bug-fixes_{context}"]
+=== Bug fixes
+
+* Previously, the default namespace for the {coo-short} installation was `openshift-operators`. With this release, the default namespace changes to `openshift-cluster-observability-operator`. (link:https://issues.redhat.com/browse/COO-32[*COO-32*])
+
+* Previously, `korrel8r` was only able to parse time series selector expressions. With this release, `korrel8r` can parse any valid PromQL expression to extract the time series selectors that it uses for correlation.
(link:https://issues.redhat.com/browse/COO-558[*COO-558*])
+
+* Previously, when viewing a Tempo instance from the Distributed Tracing UI plugin, the scatter plot graph showing the traces duration was not rendered correctly. The bubble size was too large and overlapped the x and y axis. With this release, the graph is rendered correctly. (link:https://issues.redhat.com/browse/COO-319[*COO-319*])
+
+== Features available on older, Technology Preview releases
+
+The following table provides information about which features are available depending on older versions of {coo-full} and {product-title}:
+
 [cols="1,1,1,1,1,1", options="header"]
 |===
-| COO Version | OCP Versions | Dashboards | Distributed Tracing | Logging | Troubleshooting Panel
+| COO Version | OCP Versions | Dashboards | Distributed Tracing | Logging | Troubleshooting Panel

-| 0.2.0 | 4.11 | ✔ | ✘ | ✘ | ✘
-| 0.3.0+ | 4.11 - 4.15 | ✔ | ✔ | ✔ | ✘
-| 0.3.0+ | 4.16+ | ✔ | ✔ | ✔ | ✔
+| 0.2.0 | 4.11 | ✔ | ✘ | ✘ | ✘
+| 0.3.0+, 0.4.0+ | 4.11 - 4.15 | ✔ | ✔ | ✔ | ✘
+| 0.3.0+, 0.4.0+ | 4.16+ | ✔ | ✔ | ✔ | ✔
 |===

 [id="cluster-observability-operator-release-notes-0-4-1_{context}"]

From 5865d96fbda04f75dbb30c8477e845b3cb867824 Mon Sep 17 00:00:00 2001
From: Gabriel McGoldrick
Date: Thu, 28 Nov 2024 12:15:39 +0000
Subject: [PATCH 252/669] OBSDOCS-1523 remove TP notice for COO, but keep for troubleshooting korrel8r

---
 _topic_maps/_topic_map.yml                    |  8 ++--
 ...uster-observability-operator-overview.adoc |  2 -
 ...ability-operator-to-monitor-a-service.adoc |  3 --
 ...ng-the-cluster-observability-operator.adoc |  3 --
 .../ui_plugins/dashboard-ui-plugin.adoc       |  3 --
 .../distributed-tracing-ui-plugin.adoc        |  2 +-
 .../observability-ui-plugins-overview.adoc    | 39 ++++++++++++-------
 .../ui_plugins/troubleshooting-ui-plugin.adoc |  2 +-
 8 files changed, 30 insertions(+), 32 deletions(-)

diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml
index b1100a47b611..a04d57d9d822 100644
--- a/_topic_maps/_topic_map.yml
+++ b/_topic_maps/_topic_map.yml
@@ -3237,14 +3237,14 @@ Topics:
   Topics:
   - Name: Observability UI plugins overview
     File: observability-ui-plugins-overview
-  - Name: Dashboard UI plugin
-    File: dashboard-ui-plugin
+  - Name: Logging UI plugin
+    File: logging-ui-plugin
   - Name: Distributed tracing UI plugin
     File: distributed-tracing-ui-plugin
   - Name: Troubleshooting UI plugin
     File: troubleshooting-ui-plugin
-  - Name: Logging UI plugin
-    File: logging-ui-plugin
+#  - Name: Dashboard UI plugin
+#    File: dashboard-ui-plugin
 ---
 Name: Scalability and performance
 Dir: scalability_and_performance

diff --git a/observability/cluster_observability_operator/cluster-observability-operator-overview.adoc b/observability/cluster_observability_operator/cluster-observability-operator-overview.adoc
index 5fa4d3f062fa..4715e2dc49ff 100644
--- a/observability/cluster_observability_operator/cluster-observability-operator-overview.adoc
+++ b/observability/cluster_observability_operator/cluster-observability-operator-overview.adoc
@@ -6,8 +6,6 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

-:FeatureName: The Cluster Observability Operator
-include::snippets/technology-preview.adoc[leveloffset=+2]

 The {coo-first} is an optional component of the {product-title} designed for creating and managing highly customizable monitoring stacks. It enables cluster administrators to automate configuration and management of monitoring needs extensively, offering a more tailored and detailed view of each namespace compared to the default {product-title} monitoring system.
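The customizable monitoring stacks that this overview describes are expressed through the Operator's `MonitoringStack` resource. A minimal sketch, assuming the `monitoring.rhobs/v1alpha1` API served by the Operator; the names, namespace, and label selector are illustrative:

[source,yaml]
----
apiVersion: monitoring.rhobs/v1alpha1
kind: MonitoringStack
metadata:
  name: example-stack
  namespace: my-namespace        # illustrative namespace
spec:
  retention: 1d                  # illustrative retention period
  resourceSelector:              # which ServiceMonitor/PodMonitor resources to pick up
    matchLabels:
      app: example
----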
diff --git a/observability/cluster_observability_operator/configuring-the-cluster-observability-operator-to-monitor-a-service.adoc b/observability/cluster_observability_operator/configuring-the-cluster-observability-operator-to-monitor-a-service.adoc
index a1683f75824b..d7751c3586bb 100644
--- a/observability/cluster_observability_operator/configuring-the-cluster-observability-operator-to-monitor-a-service.adoc
+++ b/observability/cluster_observability_operator/configuring-the-cluster-observability-operator-to-monitor-a-service.adoc
@@ -6,9 +6,6 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

-:FeatureName: The Cluster Observability Operator
-include::snippets/technology-preview.adoc[leveloffset=+2]
-
 You can monitor metrics for a service by configuring monitoring stacks managed by the {coo-first}.

 To test monitoring a service, follow these steps:

diff --git a/observability/cluster_observability_operator/installing-the-cluster-observability-operator.adoc b/observability/cluster_observability_operator/installing-the-cluster-observability-operator.adoc
index 51fcae69e440..1aab682cec82 100644
--- a/observability/cluster_observability_operator/installing-the-cluster-observability-operator.adoc
+++ b/observability/cluster_observability_operator/installing-the-cluster-observability-operator.adoc
@@ -6,9 +6,6 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

-:FeatureName: The Cluster Observability Operator
-include::snippets/technology-preview.adoc[leveloffset=+2]
-
 As a cluster administrator, you can install or remove the {coo-first} from OperatorHub by using the {product-title} web console. OperatorHub is a user interface that works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.

diff --git a/observability/cluster_observability_operator/ui_plugins/dashboard-ui-plugin.adoc b/observability/cluster_observability_operator/ui_plugins/dashboard-ui-plugin.adoc
index 68fc00d8d947..6a6144982867 100644
--- a/observability/cluster_observability_operator/ui_plugins/dashboard-ui-plugin.adoc
+++ b/observability/cluster_observability_operator/ui_plugins/dashboard-ui-plugin.adoc
@@ -6,9 +6,6 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

-:FeatureName: The Cluster Observability Operator
-include::snippets/technology-preview.adoc[leveloffset=+2]
-
 The dashboard UI plugin supports enhanced dashboards in the OpenShift web console at *Observe* -> *Dashboards*. You can add other Prometheus datasources from the cluster to the default dashboards, in addition to the in-cluster datasource. This results in a unified observability experience across different data sources.

 The plugin searches for datasources from `ConfigMap` resources in the `openshift-config-managed` namespace that have the label `console.openshift.io/dashboard-datasource: 'true'`.
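As a sketch of the datasource discovery just described, the following shows the shape such a `ConfigMap` might take. Only the namespace and the label come from the text above; the name and `data` key are illustrative, and the payload's exact schema is plugin-specific and not shown in this patch:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-datasource            # illustrative name
  namespace: openshift-config-managed # namespace the plugin searches
  labels:
    console.openshift.io/dashboard-datasource: 'true' # label the plugin matches on
data:
  # The datasource definition itself goes here; its schema is defined
  # by the dashboard plugin and is not part of this patch.
  dashboard-datasource.yaml: |
    {}
----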
diff --git a/observability/cluster_observability_operator/ui_plugins/distributed-tracing-ui-plugin.adoc b/observability/cluster_observability_operator/ui_plugins/distributed-tracing-ui-plugin.adoc
index f9804102e5f2..dd57008021ad 100644
--- a/observability/cluster_observability_operator/ui_plugins/distributed-tracing-ui-plugin.adoc
+++ b/observability/cluster_observability_operator/ui_plugins/distributed-tracing-ui-plugin.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

-:FeatureName: The Cluster Observability Operator
+:FeatureName: The {coo-full} distributed tracing UI plugin
 include::snippets/technology-preview.adoc[leveloffset=+2]

 The distributed tracing UI plugin adds tracing-related features to the Administrator perspective of the OpenShift web console at **Observe** → **Traces**. You can follow requests through the front end and into the backend of microservices, helping you identify code errors and performance bottlenecks in distributed systems.

diff --git a/observability/cluster_observability_operator/ui_plugins/observability-ui-plugins-overview.adoc b/observability/cluster_observability_operator/ui_plugins/observability-ui-plugins-overview.adoc
index 706cc34cdf6c..dadd25a44763 100644
--- a/observability/cluster_observability_operator/ui_plugins/observability-ui-plugins-overview.adoc
+++ b/observability/cluster_observability_operator/ui_plugins/observability-ui-plugins-overview.adoc
@@ -6,24 +6,26 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

-:FeatureName: The {coo-full}
-include::snippets/technology-preview.adoc[leveloffset=+2]
-
 You can use the {coo-first} to install and manage UI plugins to enhance the observability capabilities of the {product-title} web console.
-The plugins extend the default functionality, providing new UI features for monitoring, troubleshooting, distributed tracing, and cluster logging.
+The plugins extend the default functionality, providing new UI features for troubleshooting, distributed tracing, and cluster logging.

-[id="dashboards_{context}"]
-== Dashboards
-The dashboard UI plugin supports enhanced dashboards in the {product-title} web console at *Observe* -> *Dashboards*.
-You can add other Prometheus data sources from the cluster to the default dashboards, in addition to the in-cluster data source.
-This results in a unified observability experience across different data sources.
-For more information, see the xref:../../../observability/cluster_observability_operator/ui_plugins/dashboard-ui-plugin.adoc#dashboard-ui-plugin[dashboard UI plugin] page.
+[id="cluster-logging_{context}"]
+== Cluster logging
+
+The logging UI plugin surfaces logging data in the web console on the *Observe* -> *Logs* page.
+You can specify filters, queries, time ranges and refresh rates. The results display a list of collapsed logs, which can then be expanded to show more detailed information for each log.
+
+For more information, see the xref:../../../observability/cluster_observability_operator/ui_plugins/logging-ui-plugin.adoc#logging-ui-plugin[logging UI plugin] page.
+
 [id="troubleshooting_{context}"]
 == Troubleshooting

+:FeatureName: The {coo-full} troubleshooting panel UI plugin
+include::snippets/technology-preview.adoc[leveloffset=+2]
+
 The troubleshooting panel UI plugin for {product-title} version 4.16+ provides observability signal correlation, powered by the open source Korrel8r project.
You can use the troubleshooting panel available from the *Observe* -> *Alerting* page to easily correlate metrics, logs, alerts, netflows, and additional observability signals and resources, across different data stores. Users of {product-title} version 4.17+ can also access the troubleshooting UI panel from the Application Launcher {launch}. @@ -35,6 +37,9 @@ For more information, see the xref:../../../observability/cluster_observability_ [id="distributed-tracing_{context}"] == Distributed tracing +:FeatureName: The {coo-full} distributed tracing UI plugin +include::snippets/technology-preview.adoc[leveloffset=+2] + The distributed tracing UI plugin adds tracing-related features to the web console on the *Observe* -> *Traces* page. You can follow requests through the front end and into the backend of microservices, helping you identify code errors and performance bottlenecks in distributed systems. You can select a supported `TempoStack` or `TempoMonolithic` multi-tenant instance running in the cluster and set a time range and query to view the trace data. @@ -42,10 +47,14 @@ You can select a supported `TempoStack` or `TempoMonolithic` multi-tenant instan For more information, see the xref:../../../observability/cluster_observability_operator/ui_plugins/distributed-tracing-ui-plugin.adoc#distributed-tracing-ui-plugin[distributed tracing UI plugin] page. -[id="cluster-logging_{context}"] -== Cluster logging +//// +[id="dashboards_{context}"] +== Dashboards -The logging UI plugin surfaces logging data in the web console on the *Observe* -> *Logs* page. -You can specify filters, queries, time ranges and refresh rates. The results displayed a list of collapsed logs, which can then be expanded to show more detailed information for each log. +The dashboard UI plugin supports enhanced dashboards in the {product-title} web console at *Observe* -> *Dashboards*. +You can add other Prometheus data sources from the cluster to the default dashboards, in addition to the in-cluster data source. +This results in a unified observability experience across different data sources. -For more information, see the xref:../../../observability/cluster_observability_operator/ui_plugins/logging-ui-plugin.adoc#logging-ui-plugin[logging UI plugin] page. +For more information, see the xref:../../../observability/cluster_observability_operator/ui_plugins/dashboard-ui-plugin.adoc#dashboard-ui-plugin[dashboard UI plugin] page. + +//// \ No newline at end of file diff --git a/observability/cluster_observability_operator/ui_plugins/troubleshooting-ui-plugin.adoc b/observability/cluster_observability_operator/ui_plugins/troubleshooting-ui-plugin.adoc index 7905602a4acc..12706dbc78a7 100644 --- a/observability/cluster_observability_operator/ui_plugins/troubleshooting-ui-plugin.adoc +++ b/observability/cluster_observability_operator/ui_plugins/troubleshooting-ui-plugin.adoc @@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[] toc::[] -:FeatureName: The Cluster Observability Operator +:FeatureName: The {coo-full} troubleshooting panel UI plugin include::snippets/technology-preview.adoc[leveloffset=+2] The troubleshooting UI plugin for {product-title} version 4.16+ provides observability signal correlation, powered by the open source Korrel8r project.
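Both Technology Preview plugins renamed in these hunks are enabled by creating a `UIPlugin` resource for the {coo-full} to reconcile. As a minimal sketch, assuming the `observability.openshift.io/v1alpha1` API version and the `TroubleshootingPanel` plugin type value that the Operator serves, enabling the troubleshooting panel could look like the following; confirm the API group and type names with `oc explain uiplugin` before relying on them:

[source,yaml]
----
apiVersion: observability.openshift.io/v1alpha1 # assumed API version
kind: UIPlugin
metadata:
  name: troubleshooting-panel # assumed required name for this plugin type
spec:
  type: TroubleshootingPanel # DistributedTracing would enable the tracing plugin instead
----

The same pattern, with a different `metadata.name` and `spec.type`, would apply to the distributed tracing plugin described above.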
From b3f19bbe10ad2232ba84a819bfc34896d6c79cfc Mon Sep 17 00:00:00 2001 From: Gabriel McGoldrick Date: Thu, 28 Nov 2024 11:27:07 +0000 Subject: [PATCH 252/669] OBSDOCS-1526 change default namespace for COO --- ...bservability-operator-using-the-web-console.adoc | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/modules/monitoring-installing-cluster-observability-operator-using-the-web-console.adoc b/modules/monitoring-installing-cluster-observability-operator-using-the-web-console.adoc index f7a27719df78..362a575cead7 100644 --- a/modules/monitoring-installing-cluster-observability-operator-using-the-web-console.adoc +++ b/modules/monitoring-installing-cluster-observability-operator-using-the-web-console.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: -// * observability/monitoring/cluster_observability_operator/installing-the-cluster-observability-operator.adoc +// * observability/cluster_observability_operator/installing-the-cluster-observability-operator.adoc :_mod-docs-content-type: PROCEDURE [id="installing-the-cluster-observability-operator-in-the-web-console-_{context}"] @@ -17,15 +17,16 @@ Install the {coo-first} from OperatorHub by using the {product-title} web consol . In the {product-title} web console, click *Operators* -> *OperatorHub*. . Type `cluster observability operator` in the *Filter by keyword* box. . Click *{coo-full}* in the list of results. -. Read the information about the Operator, and review the following default installation settings: +. Read the information about the Operator, and configure the following installation settings: + -* *Update channel* -> *development* -* *Version* -> +* *Update channel* -> *stable* +* *Version* -> *1.0.0* or later * *Installation mode* -> *All namespaces on the cluster (default)* -* *Installed Namespace* -> *openshift-operators* +* *Installed Namespace* -> *Operator recommended Namespace: openshift-cluster-observability-operator* +* Select *Enable Operator recommended cluster monitoring on this Namespace* * *Update approval* -> *Automatic* -. Optional: Change default installation settings to suit your requirements. +. Optional: You can change the installation settings to suit your requirements. For example, you can subscribe to a different update channel, install an older released version of the Operator, or require manual approval for updates to new versions of the Operator. . Click *Install*.
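The web console procedure above maps onto standard OLM resources, so the same installation can be sketched for the CLI. The package name `cluster-observability-operator` and the `redhat-operators` catalog source below are assumptions; confirm them with `oc get packagemanifests -n openshift-marketplace` before applying:

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cluster-observability-operator
  labels:
    openshift.io/cluster-monitoring: "true" # mirrors "Enable Operator recommended cluster monitoring on this Namespace"
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-observability-operator
  namespace: openshift-cluster-observability-operator
# omitting spec.targetNamespaces mirrors "All namespaces on the cluster (default)"
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-observability-operator
  namespace: openshift-cluster-observability-operator
spec:
  channel: stable # mirrors the Update channel setting
  name: cluster-observability-operator # assumed package name
  source: redhat-operators # assumed catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic # mirrors the Update approval setting
----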
From 198234af1b3baae2eb1aea9143112d6a1c928988 Mon Sep 17 00:00:00 2001 From: Gabriel McGoldrick Date: Thu, 13 Feb 2025 20:28:36 +0000 Subject: [PATCH 253/669] OBSDOCS-1392 Remove logging UI warning --- .../ui_plugins/logging-ui-plugin.adoc | 5 ----- 1 file changed, 5 deletions(-) diff --git a/observability/cluster_observability_operator/ui_plugins/logging-ui-plugin.adoc b/observability/cluster_observability_operator/ui_plugins/logging-ui-plugin.adoc index eea95c04e0f8..5e169c3dc702 100644 --- a/observability/cluster_observability_operator/ui_plugins/logging-ui-plugin.adoc +++ b/observability/cluster_observability_operator/ui_plugins/logging-ui-plugin.adoc @@ -6,11 +6,6 @@ include::_attributes/common-attributes.adoc[] toc::[] -[IMPORTANT] -==== -Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview] (TP), Red{nbsp}Hat provides support to customers who are using Logging 6.0 or later with the COO for the logging UI plugin on {product-title} 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the logging UI plugin is ready for GA. -==== - The logging UI plugin surfaces logging data in the {product-title} web console on the *Observe* -> *Logs* page. You can specify filters, queries, time ranges and refresh rates, with the results displayed as a list of collapsed logs, which can then be expanded to show more detailed information for each log. From 46b82951e91225ce61c2b5fdd62f79ec1ef62631 Mon Sep 17 00:00:00 2001 From: Eliska Romanova Date: Mon, 14 Oct 2024 14:22:58 +0200 Subject: [PATCH 254/669] Monitoring docs restructure integration --- _topic_maps/_topic_map.yml | 80 ++++++-- ...n-metrics-using-developer-perspective.adoc | 5 + .../troubleshooting.adoc | 4 +- ...ing-argo-cd-custom-resource-workloads.adoc | 4 +- ...ting-applications-with-cicd-pipelines.adoc | 2 +- .../observability/telco-observability.adoc | 13 +- getting_started/openshift-overview.adoc | 8 +- ...ing-vsphere-problem-detector-operator.adoc | 2 +- installing/overview/installing-preparing.adoc | 4 +- .../validating-an-installation.adoc | 4 +- .../machine-config-daemon-metrics.adoc | 2 +- .../creating-infrastructure-machinesets.adoc | 2 +- migrating_from_ocp_3_to_4/index.adoc | 2 +- .../planning-migration-3-4.adoc | 2 +- .../accessing-metrics-outside-cluster.adoc | 1 - ...accessing-monitoring-web-service-apis.adoc | 2 +- modules/monitoring-about-managing-alerts.adoc | 18 ++ ...onitoring-about-monitoring-dashboards.adoc | 28 +++ .../monitoring-about-querying-metrics.adoc | 19 -- ...nd-requests-for-monitoring-components.adoc | 16 +- .../monitoring-accessing-the-alerting-ui.adoc | 40 +++- ...ret-to-the-alertmanager-configuration.adoc | 124 +++++------- ...-tolerations-to-monitoring-components.adoc | 114 +++++------ ...labels-to-your-time-series-and-alerts.adoc | 122 +++++------- ...choosing-a-metrics-collection-profile.adoc | 3 + modules/monitoring-common-terms.adoc | 2 +- ...ng-configurable-monitoring-components.adoc | 93 +++++---- ...configuring-a-persistent-volume-claim.adoc | 137 ++++++-------- ...ng-configuring-alert-routing-console.adoc} | 11 +- ...lert-routing-default-platform-alerts.adoc} | 8 +- ...rt-routing-for-user-defined-projects.adoc} | 13 +- ...t-routing-user-defined-alerts-secret.adoc} | 16 +- ...ng-configuring-external-alertmanagers.adoc | 161 +++++++--------- 
...nfiguring-metrics-collection-profiles.adoc | 16 +- ...uring-pod-topology-spread-constraints.adoc | 148 ++++++--------- ...ring-configuring-remote-write-storage.adoc | 177 +++++------------- ...-attributes-in-user-defined-projects.adoc} | 0 ...rting-rules-for-user-defined-projects.adoc | 2 +- ...reating-cluster-id-labels-for-metrics.adoc | 138 +++++--------- ...creating-cluster-monitoring-configmap.adoc | 2 +- modules/monitoring-editing-silences.adoc | 33 +++- ...ert-routing-for-user-defined-projects.adoc | 22 +++ ...-remote-write-authentication-settings.adoc | 38 ++-- ...mple-remote-write-queue-configuration.adoc | 28 +-- modules/monitoring-expiring-silences.adoc | 36 ++-- ...g-detailed-information-about-a-target.adoc | 33 ++-- ...ut-alerts-silences-and-alerting-rules.adoc | 38 ++-- ...ert-routing-for-user-defined-projects.adoc | 1 - ...sion-to-monitor-user-defined-projects.adoc | 2 +- ...-monitoring-for-user-defined-projects.adoc | 11 ++ ...les-for-all-projects-in-a-single-view.adoc | 2 +- .../monitoring-maintenance-and-support.adoc | 2 +- ...rting-rules-for-user-defined-projects.adoc | 2 - ...managing-core-platform-alerting-rules.adoc | 2 +- ...-and-size-for-prometheus-metrics-data.adoc | 168 ++++++----------- ...ring-monitoring-stack-in-ha-clusters.adoc} | 4 +- ...itoring-components-to-different-nodes.adoc | 118 ++++++------ ...ng-alerting-for-user-defined-projects.adoc | 2 +- ...-for-all-projects-with-mon-dashboard.adoc} | 10 +- ...-defined-projects-with-mon-dashboard.adoc} | 12 +- ...rting-rules-for-user-defined-projects.adoc | 2 +- ...nitoring-resizing-a-persistent-volume.adoc | 142 +++++++------- ...e-for-the-cluster-monitoring-operator.adoc | 8 +- ...-and-size-for-prometheus-metrics-data.adoc | 33 ++++ ...reviewing-monitoring-dashboards-admin.adoc | 8 +- ...ewing-monitoring-dashboards-developer.adoc | 13 +- ...ng-alerts-silences-and-alerting-rules.adoc | 8 +- ...-log-levels-for-monitoring-components.adoc | 124 ++++++------ ...setting-query-log-file-for-prometheus.adoc | 142 +++++++------- modules/monitoring-silencing-alerts.adoc | 64 +++++-- ...nd-requests-for-monitoring-components.adoc | 84 ++++++--- ...-remote-write-authentication-settings.adoc | 2 +- ...ert-routing-for-user-defined-projects.adoc | 4 +- ...ng-understanding-the-monitoring-stack.adoc | 6 +- ...lectors-to-move-monitoring-components.adoc | 7 +- ...ogy-spread-constraints-for-monitoring.adoc | 16 +- .../metallb/metallb-troubleshoot-support.adoc | 2 +- .../ingress-operator.adoc | 2 +- .../configuring-sriov-operator.adoc | 5 +- ...loud-events-consumer-dev-reference-v2.adoc | 2 +- ...p-cloud-events-consumer-dev-reference.adoc | 2 +- .../distr-tracing-tempo-configuring.adoc | 2 +- .../logging_alerts/custom-logging-alerts.adoc | 6 +- .../default-logging-alerts.adoc | 5 + .../troubleshooting-logging-alerts.adoc | 5 + .../about-ocp-monitoring/_attributes | 1 + .../about-ocp-monitoring.adoc | 26 +++ .../monitoring/about-ocp-monitoring/images | 1 + .../about-ocp-monitoring/key-concepts.adoc | 131 +++++++++++++ .../monitoring/about-ocp-monitoring/modules | 1 + .../monitoring-stack-architecture.adoc | 54 ++++++ .../monitoring/about-ocp-monitoring/snippets | 1 + .../monitoring/accessing-metrics/_attributes | 1 + .../accessing-metrics-as-a-developer.adoc | 37 ++++ ...accessing-metrics-as-an-administrator.adoc | 37 ++++ ...sing-monitoring-apis-by-using-the-cli.adoc | 52 +++++ .../monitoring/accessing-metrics/images | 1 + .../monitoring/accessing-metrics/modules | 1 + .../monitoring/accessing-metrics/snippets | 1 + 
...accessing-third-party-monitoring-apis.adoc | 11 +- ...on-monitoring-configuration-scenarios.adoc | 14 +- ...e-for-the-cluster-monitoring-operator.adoc | 9 +- .../_attributes | 1 + .../configuring-alerts-and-notifications.adoc | 59 ++++++ .../configuring-metrics.adoc | 43 +++++ ...nfiguring-performance-and-scalability.adoc | 93 +++++++++ .../images | 1 + .../modules | 1 + ...ing-to-configure-the-monitoring-stack.adoc | 39 ++++ .../snippets | 1 + .../storing-and-recording-data.adoc | 60 ++++++ .../configuring-the-monitoring-stack.adoc | 100 ++++++---- .../_attributes | 1 + ...figuring-alerts-and-notifications-uwm.adoc | 60 ++++++ .../configuring-metrics-uwm.adoc | 60 ++++++ ...uring-performance-and-scalability-uwm.adoc | 98 ++++++++++ .../images | 1 + .../modules | 1 + ...to-configure-the-monitoring-stack-uwm.adoc | 76 ++++++++ .../snippets | 1 + .../storing-and-recording-data-uwm.adoc | 58 ++++++ ...ert-routing-for-user-defined-projects.adoc | 12 +- ...-monitoring-for-user-defined-projects.adoc | 16 +- .../monitoring/getting-started/_attributes | 1 + .../core-platform-monitoring-first-steps.adoc | 58 ++++++ ...developer-and-non-administrator-steps.adoc | 16 ++ .../monitoring/getting-started/images | 1 + ...aintenance-and-support-for-monitoring.adoc | 28 +++ .../monitoring/getting-started/modules | 1 + .../monitoring/getting-started/snippets | 1 + .../user-workload-monitoring-first-steps.adoc | 20 ++ observability/monitoring/managing-alerts.adoc | 32 ++-- .../monitoring/managing-alerts/_attributes | 1 + .../monitoring/managing-alerts/images | 1 + .../managing-alerts-as-a-developer.adoc | 79 ++++++++ .../managing-alerts-as-an-administrator.adoc | 113 +++++++++++ .../monitoring/managing-alerts/modules | 1 + .../monitoring/managing-alerts/snippets | 1 + .../monitoring/managing-metrics.adoc | 15 +- .../monitoring/monitoring-overview.adoc | 12 +- .../reviewing-monitoring-dashboards.adoc | 31 +-- .../troubleshooting-monitoring-issues.adoc | 16 +- .../metrics-alerts-dashboards.adoc | 2 +- ...ork-observability-operator-monitoring.adoc | 2 +- ...figuring-metrics-for-monitoring-stack.adoc | 12 +- .../otel-configuring-otelcol-metrics.adoc | 2 +- observability/overview/index.adoc | 6 + .../visualizing-power-monitoring-metrics.adoc | 2 +- .../cluster-tasks.adoc | 2 +- .../configuring-alert-notifications.adoc | 4 +- rosa_architecture/index.adoc | 6 +- .../learn_more_about_openshift.adoc | 4 +- .../telco-core-ref-design-components.adoc | 2 +- .../cert-manager-monitoring.adoc | 2 +- .../serverless-admin-metrics.adoc | 2 +- .../serverless-developer-metrics.adoc | 6 +- service_mesh/v2x/ossm-observability.adoc | 2 +- .../persistent-storage-local.adoc | 2 +- .../about-remote-health-monitoring.adoc | 12 +- .../investigating-monitoring-issues.adoc | 15 +- .../virt-exposing-custom-metrics-for-vms.adoc | 8 +- virt/monitoring/virt-monitoring-overview.adoc | 5 + virt/monitoring/virt-prometheus-queries.adoc | 12 +- virt/monitoring/virt-runbooks.adoc | 9 +- virt/support/virt-collecting-virt-data.adoc | 17 +- welcome/learn_more_about_openshift.adoc | 7 +- 166 files changed, 2900 insertions(+), 1635 deletions(-) create mode 100644 modules/monitoring-about-managing-alerts.adoc create mode 100644 modules/monitoring-about-monitoring-dashboards.adoc delete mode 100644 modules/monitoring-about-querying-metrics.adoc rename modules/{monitoring-configuring-alert-receivers.adoc => monitoring-configuring-alert-routing-console.adoc} (83%) rename modules/{monitoring-configuring-notifications-for-default-platform-alerts.adoc => 
monitoring-configuring-alert-routing-default-platform-alerts.adoc} (91%) rename modules/{monitoring-creating-alert-routing-for-user-defined-projects.adoc => monitoring-configuring-alert-routing-for-user-defined-projects.adoc} (78%) rename modules/{monitoring-configuring-notifications-for-user-defined-alerts.adoc => monitoring-configuring-alert-routing-user-defined-alerts-secret.adoc} (74%) rename modules/{monitoring-limiting-scrape-samples-in-user-defined-projects.adoc => monitoring-controlling-the-impact-of-unbound-attributes-in-user-defined-projects.adoc} (100%) create mode 100644 modules/monitoring-enabling-alert-routing-for-user-defined-projects.adoc create mode 100644 modules/monitoring-intro-enabling-monitoring-for-user-defined-projects.adoc rename modules/{monitoring-understanding-monitoring-stack-in-ha-clusters.adoc => monitoring-monitoring-stack-in-ha-clusters.adoc} (90%) rename modules/{monitoring-querying-metrics-for-all-projects-as-an-administrator.adoc => monitoring-querying-metrics-for-all-projects-with-mon-dashboard.adoc} (86%) rename modules/{monitoring-querying-metrics-for-user-defined-projects-as-a-developer.adoc => monitoring-querying-metrics-for-user-defined-projects-with-mon-dashboard.adoc} (84%) create mode 100644 modules/monitoring-retention-time-and-size-for-prometheus-metrics-data.adoc create mode 120000 observability/monitoring/about-ocp-monitoring/_attributes create mode 100644 observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc create mode 120000 observability/monitoring/about-ocp-monitoring/images create mode 100644 observability/monitoring/about-ocp-monitoring/key-concepts.adoc create mode 120000 observability/monitoring/about-ocp-monitoring/modules create mode 100644 observability/monitoring/about-ocp-monitoring/monitoring-stack-architecture.adoc create mode 120000 observability/monitoring/about-ocp-monitoring/snippets create mode 120000 observability/monitoring/accessing-metrics/_attributes create mode 100644 observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc create mode 100644 observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc create mode 100644 observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.adoc create mode 120000 observability/monitoring/accessing-metrics/images create mode 120000 observability/monitoring/accessing-metrics/modules create mode 120000 observability/monitoring/accessing-metrics/snippets create mode 120000 observability/monitoring/configuring-core-platform-monitoring/_attributes create mode 100644 observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.adoc create mode 100644 observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.adoc create mode 100644 observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.adoc create mode 120000 observability/monitoring/configuring-core-platform-monitoring/images create mode 120000 observability/monitoring/configuring-core-platform-monitoring/modules create mode 100644 observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.adoc create mode 120000 observability/monitoring/configuring-core-platform-monitoring/snippets create mode 100644 observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.adoc create mode 120000 observability/monitoring/configuring-user-workload-monitoring/_attributes 
create mode 100644 observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.adoc create mode 100644 observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.adoc create mode 100644 observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.adoc create mode 120000 observability/monitoring/configuring-user-workload-monitoring/images create mode 120000 observability/monitoring/configuring-user-workload-monitoring/modules create mode 100644 observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc create mode 120000 observability/monitoring/configuring-user-workload-monitoring/snippets create mode 100644 observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.adoc create mode 120000 observability/monitoring/getting-started/_attributes create mode 100644 observability/monitoring/getting-started/core-platform-monitoring-first-steps.adoc create mode 100644 observability/monitoring/getting-started/developer-and-non-administrator-steps.adoc create mode 120000 observability/monitoring/getting-started/images create mode 100644 observability/monitoring/getting-started/maintenance-and-support-for-monitoring.adoc create mode 120000 observability/monitoring/getting-started/modules create mode 120000 observability/monitoring/getting-started/snippets create mode 100644 observability/monitoring/getting-started/user-workload-monitoring-first-steps.adoc create mode 120000 observability/monitoring/managing-alerts/_attributes create mode 120000 observability/monitoring/managing-alerts/images create mode 100644 observability/monitoring/managing-alerts/managing-alerts-as-a-developer.adoc create mode 100644 observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.adoc create mode 120000 observability/monitoring/managing-alerts/modules create mode 120000 observability/monitoring/managing-alerts/snippets diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index a04d57d9d822..c7e6ec215ef1 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2905,26 +2905,68 @@ Topics: Dir: monitoring Distros: openshift-enterprise,openshift-origin Topics: - - Name: Monitoring overview - File: monitoring-overview - - Name: Common monitoring configuration scenarios - File: common-monitoring-configuration-scenarios - - Name: Configuring the monitoring stack - File: configuring-the-monitoring-stack - - Name: Enabling monitoring for user-defined projects - File: enabling-monitoring-for-user-defined-projects - - Name: Enabling alert routing for user-defined projects - File: enabling-alert-routing-for-user-defined-projects - - Name: Managing metrics - File: managing-metrics + - Name: About OpenShift Container Platform monitoring + Dir: about-ocp-monitoring + Topics: + - Name: About OpenShift Container Platform monitoring + File: about-ocp-monitoring + - Name: Monitoring stack architecture + File: monitoring-stack-architecture + - Name: Key concepts + File: key-concepts + - Name: Getting started + Dir: getting-started + Topics: + - Name: Maintenance and support for monitoring + File: maintenance-and-support-for-monitoring + - Name: Core platform monitoring first steps + File: core-platform-monitoring-first-steps + - Name: User workload monitoring first steps + File: user-workload-monitoring-first-steps + - Name: Developer and non-administrator steps + File: 
developer-and-non-administrator-steps + - Name: Configuring core platform monitoring + Dir: configuring-core-platform-monitoring + Topics: + - Name: Preparing to configure the monitoring stack + File: preparing-to-configure-the-monitoring-stack + - Name: Configuring performance and scalability + File: configuring-performance-and-scalability + - Name: Storing and recording data + File: storing-and-recording-data + - Name: Configuring metrics + File: configuring-metrics + - Name: Configuring alerts and notifications + File: configuring-alerts-and-notifications + - Name: Configuring user workload monitoring + Dir: configuring-user-workload-monitoring + Topics: + - Name: Preparing to configure the monitoring stack + File: preparing-to-configure-the-monitoring-stack-uwm + - Name: Configuring performance and scalability + File: configuring-performance-and-scalability-uwm + - Name: Storing and recording data + File: storing-and-recording-data-uwm + - Name: Configuring metrics + File: configuring-metrics-uwm + - Name: Configuring alerts and notifications + File: configuring-alerts-and-notifications-uwm + - Name: Accessing metrics + Dir: accessing-metrics + Topics: + - Name: Accessing metrics as an administrator + File: accessing-metrics-as-an-administrator + - Name: Accessing metrics as a developer + File: accessing-metrics-as-a-developer + - Name: Accessing monitoring APIs by using the CLI + File: accessing-monitoring-apis-by-using-the-cli - Name: Managing alerts - File: managing-alerts - - Name: Reviewing monitoring dashboards - File: reviewing-monitoring-dashboards - - Name: Monitoring clusters that run on RHOSO - File: shiftstack-prometheus-configuration - - Name: Accessing monitoring APIs by using the CLI - File: accessing-third-party-monitoring-apis + Dir: managing-alerts + Topics: + - Name: Managing alerts as an administrator + File: managing-alerts-as-an-administrator + - Name: Managing alerts as a developer + File: managing-alerts-as-a-developer - Name: Troubleshooting monitoring issues File: troubleshooting-monitoring-issues - Name: Config map reference for the Cluster Monitoring Operator diff --git a/applications/odc-monitoring-project-and-application-metrics-using-developer-perspective.adoc b/applications/odc-monitoring-project-and-application-metrics-using-developer-perspective.adoc index b6a72d2720ec..9c143803f648 100644 --- a/applications/odc-monitoring-project-and-application-metrics-using-developer-perspective.adoc +++ b/applications/odc-monitoring-project-and-application-metrics-using-developer-perspective.adoc @@ -31,4 +31,9 @@ include::modules/odc-monitoring-your-app-vulnerabilities.adoc[leveloffset=+1] [role="_additional-resources"] [id="additional-resources-odc-monitoring-project-and-application-metrics-using-developer-perspective"] == Additional resources +ifdef::openshift-rosa,openshift-dedicated[] * xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview] +endif::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-dedicated[] +* xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] +endif::openshift-rosa,openshift-dedicated[] \ No newline at end of file diff --git a/backup_and_restore/application_backup_and_restore/troubleshooting.adoc b/backup_and_restore/application_backup_and_restore/troubleshooting.adoc index 3e00d83af4ce..3112e19ca3d5 100644 --- a/backup_and_restore/application_backup_and_restore/troubleshooting.adoc +++ 
b/backup_and_restore/application_backup_and_restore/troubleshooting.adoc @@ -145,14 +145,14 @@ include::modules/migration-combining-must-gather.adoc[leveloffset=+2] include::modules/oadp-monitoring.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../observability/monitoring/monitoring-overview.adoc#about-openshift-monitoring[Monitoring stack] +* xref:../../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] include::modules/oadp-monitoring-setup.adoc[leveloffset=+2] include::modules/oadp-creating-service-monitor.adoc[leveloffset=+2] include::modules/oadp-creating-alerting-rule.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources -* xref:../../observability/monitoring/managing-alerts.adoc#managing-alerts[Managing alerts] +* xref:../../observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.adoc#managing-alerts-as-an-administrator[Managing alerts as an administrator] include::modules/oadp-list-of-metrics.adoc[leveloffset=+2] include::modules/oadp-viewing-metrics-ui.adoc[leveloffset=+2] diff --git a/cicd/gitops/monitoring-argo-cd-custom-resource-workloads.adoc b/cicd/gitops/monitoring-argo-cd-custom-resource-workloads.adoc index 0e2f7bf94666..43941ea9b6c0 100644 --- a/cicd/gitops/monitoring-argo-cd-custom-resource-workloads.adoc +++ b/cicd/gitops/monitoring-argo-cd-custom-resource-workloads.adoc @@ -19,7 +19,7 @@ You can enable and disable the setting for monitoring Argo CD custom resource wo * {gitops-title} is installed in your cluster. * The monitoring stack is configured in your cluster in the `openshift-monitoring` project. In addition, the Argo CD instance is in a namespace that you can monitor through Prometheus. * The `kube-state-metrics` service is running in your cluster. -* Optional: If you are enabling monitoring for an Argo CD instance already present in a user-defined project, ensure that the monitoring is xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects_enabling-monitoring-for-user-defined-projects[enabled for user-defined projects] in your cluster. +* Optional: If you are enabling monitoring for an Argo CD instance already present in a user-defined project, ensure that monitoring is xref:../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[enabled for user-defined projects] in your cluster.
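The user-defined project monitoring that this prerequisite and the retargeted xrefs point to is switched on through a single field in the `cluster-monitoring-config` config map, as the linked procedure describes. A minimal sketch using the documented `enableUserWorkload` option:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true # deploys the stack that monitors user-defined projects in openshift-user-workload-monitoring
----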
+ [NOTE] ==== @@ -35,4 +35,4 @@ include::modules/gitops-disabling-monitoring-for-argo-cd-custom-resource-workloa [role="_additional-resources"] [id="additional-resources_monitoring-argo-cd-custom-resource-workloads"] == Additional resources -* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects] +* xref:../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] diff --git a/cicd/pipelines/creating-applications-with-cicd-pipelines.adoc b/cicd/pipelines/creating-applications-with-cicd-pipelines.adoc index 28e698bd313b..817f094f93b6 100644 --- a/cicd/pipelines/creating-applications-with-cicd-pipelines.adoc +++ b/cicd/pipelines/creating-applications-with-cicd-pipelines.adoc @@ -68,7 +68,7 @@ include::modules/op-enabling-monitoring-of-event-listeners-for-triggers-for-user [role="_additional-resources"] .Additional resources -* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects] +* xref:../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] include::modules/op-configuring-pull-request-capabilities-in-GitHub-interceptor.adoc[leveloffset=+1] diff --git a/edge_computing/day_2_core_cnf_clusters/observability/telco-observability.adoc b/edge_computing/day_2_core_cnf_clusters/observability/telco-observability.adoc index 30b84c6480f2..b9a55c776948 100644 --- a/edge_computing/day_2_core_cnf_clusters/observability/telco-observability.adoc +++ b/edge_computing/day_2_core_cnf_clusters/observability/telco-observability.adoc @@ -18,16 +18,16 @@ include::modules/telco-observability-monitoring-stack.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../../observability/monitoring/monitoring-overview.adoc#understanding-the-monitoring-stack_monitoring-overview[Understanding the monitoring stack] +* xref:../../../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] -* xref:../../../observability/monitoring/configuring-the-monitoring-stack.adoc#configuring-the-monitoring-stack[Configuring the monitoring stack] +* xref:../../../observability/monitoring/getting-started/core-platform-monitoring-first-steps.adoc#core-platform-monitoring-first-steps[Core platform monitoring first steps] include::modules/telco-observability-key-performance-metrics.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../../observability/monitoring/managing-metrics.adoc#managing-metrics[Managing metrics] +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#accessing-metrics-as-an-administrator[Accessing metrics as an administrator] * xref:../../../storage/persistent_storage/persistent_storage_local/persistent-storage-local.adoc#local-storage-install_persistent-storage-local[Persistent storage using local volumes] * 
xref:../../../scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#cluster-tuning-crs_ran-ref-design-crs[Cluster tuning reference CRs] @@ -38,7 +38,7 @@ include::modules/telco-observability-alerting.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../../observability/monitoring/managing-alerts.adoc#managing-alerts[Managing alerts] +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#about-managing-alerts_key-concepts[Managing alerts] include::modules/telco-observability-workload-monitoring.adoc[leveloffset=+1] @@ -47,6 +47,7 @@ include::modules/telco-observability-workload-monitoring.adoc[leveloffset=+1] * xref:../../../rest_api/monitoring_apis/servicemonitor-monitoring-coreos-com-v1.adoc#servicemonitor-monitoring-coreos-com-v1[ServiceMonitor[monitoring.coreos.com/v1]] -* xref:../../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] + +* xref:../../../observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.adoc#managing-alerting-rules-for-user-defined-projects_managing-alerts-as-an-administrator[Managing alerting rules for user-defined projects] -* xref:../../../observability/monitoring/managing-alerts.adoc#managing-alerting-rules-for-user-defined-projects_managing-alerts[Managing alerting rules for user-defined projects] diff --git a/getting_started/openshift-overview.adoc b/getting_started/openshift-overview.adoc index 8072f829faab..7646eb7a7251 100644 --- a/getting_started/openshift-overview.adoc +++ b/getting_started/openshift-overview.adoc @@ -106,10 +106,10 @@ be reviewed by cluster administrators and xref:../operators/admin/olm-adding-ope * **xref:../scalability_and_performance/recommended-performance-scale-practices/recommended-infrastructure-practices.adoc#scaling-cluster-monitoring-operator[Scale] and xref:../scalability_and_performance/using-node-tuning-operator.adoc#using-node-tuning-operator[tune] clusters**: Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment. -* **xref:../disconnected/updating/disconnected-update-osus.adoc#update-service-overview_updating-disconnected-cluster-osus[Using the OpenShift Update Service in a disconnected environement]**: Learn about installing and managing a local OpenShift Update Service for recommending {product-title} updates in disconnected environments. +* **xref:../disconnected/updating/disconnected-update-osus.adoc#update-service-overview_updating-disconnected-cluster-osus[Using the OpenShift Update Service in a disconnected environment]**: Learn about installing and managing a local OpenShift Update Service for recommending {product-title} updates in disconnected environments. -* **xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitor clusters]**: -Learn to xref:../observability/monitoring/configuring-the-monitoring-stack.adoc#configuring-the-monitoring-stack[configure the monitoring stack]. 
-After configuring monitoring, use the web console to access xref:../observability/monitoring/reviewing-monitoring-dashboards.adoc#reviewing-monitoring-dashboards[monitoring dashboards]. In addition to infrastructure metrics, you can also scrape and view metrics for your own services. +* **xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[Monitor clusters]**: +Learn to xref:../observability/monitoring/getting-started/core-platform-monitoring-first-steps.adoc#core-platform-monitoring-first-steps[configure the monitoring stack]. +After configuring monitoring, use the web console to access xref:../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#reviewing-monitoring-dashboards-admin_accessing-metrics-as-an-administrator[monitoring dashboards]. In addition to infrastructure metrics, you can also scrape and view metrics for your own services. * **xref:../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring_about-remote-health-monitoring[Remote health monitoring]**: {product-title} collects anonymized aggregated information about your cluster. Using Telemetry and the Insights Operator, this data is received by Red Hat and used to improve {product-title}. You can view the xref:../support/remote_health_monitoring/showing-data-collected-by-remote-health-monitoring.adoc#showing-data-collected-by-remote-health-monitoring_showing-data-collected-by-remote-health-monitoring[data collected by remote health monitoring]. diff --git a/installing/installing_vsphere/using-vsphere-problem-detector-operator.adoc b/installing/installing_vsphere/using-vsphere-problem-detector-operator.adoc index 0fe7525dfc97..5dd7ccc8fdfa 100644 --- a/installing/installing_vsphere/using-vsphere-problem-detector-operator.adoc +++ b/installing/installing_vsphere/using-vsphere-problem-detector-operator.adoc @@ -30,4 +30,4 @@ include::modules/vsphere-problem-detector-metrics.adoc[leveloffset=+1] [role="_additional-resources"] == Additional resources -* xref:../../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview] +* xref:../../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] diff --git a/installing/overview/installing-preparing.adoc b/installing/overview/installing-preparing.adoc index 2a3919f6b0aa..e2f89ec81f7a 100644 --- a/installing/overview/installing-preparing.adoc +++ b/installing/overview/installing-preparing.adoc @@ -110,12 +110,12 @@ For a production cluster, you must configure the following integrations: * xref:../../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Persistent storage] * xref:../../authentication/understanding-identity-provider.adoc#understanding-identity-provider[An identity provider] -* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#configuring-the-monitoring-stack[Monitoring core OpenShift Container Platform components] +* xref:../../observability/monitoring/getting-started/core-platform-monitoring-first-steps.adoc#core-platform-monitoring-first-steps[Monitoring core {product-title} components] [id="installing-preparing-cluster-for-workloads"] == Preparing your cluster for workloads -Depending on your workload needs, you might need to take extra steps before you begin deploying applications. 
For example, after you prepare infrastructure to support your application xref:../../cicd/builds/build-strategies.adoc#build-strategies[build strategy], you might need to make provisions for xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-low-latency-perf-profile[low-latency] workloads or to xref:../../nodes/pods/nodes-pods-secrets.adoc#nodes-pods-secrets[protect sensitive workloads]. You can also configure xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[monitoring] for application workloads. +Depending on your workload needs, you might need to take extra steps before you begin deploying applications. For example, after you prepare infrastructure to support your application xref:../../cicd/builds/build-strategies.adoc#build-strategies[build strategy], you might need to make provisions for xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-low-latency-perf-profile[low-latency] workloads or to xref:../../nodes/pods/nodes-pods-secrets.adoc#nodes-pods-secrets[protect sensitive workloads]. You can also configure xref:../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[monitoring] for application workloads. If you plan to run xref:../../windows_containers/enabling-windows-container-workloads.adoc#enabling-windows-container-workloads[Windows workloads], you must enable xref:../../networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc#configuring-hybrid-networking[hybrid networking with OVN-Kubernetes] during the installation process; hybrid networking cannot be enabled after your cluster is installed. [id="supported-installation-methods-for-different-platforms"] diff --git a/installing/validation_and_troubleshooting/validating-an-installation.adoc b/installing/validation_and_troubleshooting/validating-an-installation.adoc index 82b541feee7a..27f624f90f98 100644 --- a/installing/validation_and_troubleshooting/validating-an-installation.adoc +++ b/installing/validation_and_troubleshooting/validating-an-installation.adoc @@ -56,7 +56,7 @@ include::modules/checking-cluster-resource-availability-and-utilization.adoc[lev [role="_additional-resources"] .Additional resources -* See xref:../../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview] for more information about the {product-title} monitoring stack. +* See xref:../../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] for more information about the {product-title} monitoring stack. //Listing alerts that are firing include::modules/listing-alerts-that-are-firing.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* See xref:../../observability/monitoring/managing-alerts.adoc#managing-alerts[Managing alerts] for further details about alerting in {product-title}. +* See xref:../../observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.adoc#managing-alerts-as-an-administrator[Managing alerts as an administrator] for further details about alerting in {product-title}.
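The listing-alerts module referenced above also has a CLI counterpart: Prometheus exposes its own alert state as the built-in `ALERTS` metric, which can be queried through the Thanos Querier route. A sketch, assuming your account is bound to the `cluster-monitoring-view` cluster role:

[source,terminal]
----
$ TOKEN=$(oc whoami -t)
$ HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.status.ingress[].host}')
$ curl -G -k -H "Authorization: Bearer $TOKEN" \
  "https://$HOST/api/v1/query" \
  --data-urlencode 'query=ALERTS{alertstate="firing"}'
----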
[id="validating-an-installation-next-steps"] == Next steps diff --git a/machine_configuration/machine-config-daemon-metrics.adoc b/machine_configuration/machine-config-daemon-metrics.adoc index f2e2c9d42397..adf0316e86fc 100644 --- a/machine_configuration/machine-config-daemon-metrics.adoc +++ b/machine_configuration/machine-config-daemon-metrics.adoc @@ -13,7 +13,7 @@ include::modules/machine-config-daemon-metrics-understanding.adoc[leveloffset=+1 [role="_additional-resources"] .Additional resources ifndef::openshift-rosa,openshift-dedicated[] -* xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview] +* xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] * xref:../support/gathering-cluster-data.adoc#gathering-cluster-data[Gathering data about your cluster] endif::openshift-rosa,openshift-dedicated[] ifdef::openshift-rosa,openshift-dedicated[] diff --git a/machine_management/creating-infrastructure-machinesets.adoc b/machine_management/creating-infrastructure-machinesets.adoc index 15360cdeffb5..33e2299eaa1a 100644 --- a/machine_management/creating-infrastructure-machinesets.adoc +++ b/machine_management/creating-infrastructure-machinesets.adoc @@ -129,6 +129,6 @@ include::modules/nodes-cluster-resource-override-move-infra.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources -* xref:../observability/monitoring/configuring-the-monitoring-stack.adoc#moving-monitoring-components-to-different-nodes_configuring-the-monitoring-stack[Moving monitoring components to different nodes] +* xref:../observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.adoc#moving-monitoring-components-to-different-nodes-cpm_configuring-performance-and-scalability[Moving monitoring components to different nodes] * xref:../observability/logging/scheduling_resources/logging-node-selectors.adoc#logging-node-selectors[Using node selectors to move logging resources] * xref:../observability/logging/scheduling_resources/logging-taints-tolerations.adoc#cluster-logging-logstore-tolerations_logging-taints-tolerations[Using taints and tolerations to control logging pod placement] diff --git a/migrating_from_ocp_3_to_4/index.adoc b/migrating_from_ocp_3_to_4/index.adoc index cbf073deaa74..bdf42af961ac 100644 --- a/migrating_from_ocp_3_to_4/index.adoc +++ b/migrating_from_ocp_3_to_4/index.adoc @@ -14,7 +14,7 @@ Before migrating from {product-title} 3 to 4, you can check xref:../migrating_fr * xref:../architecture/architecture.adoc#architecture[Architecture] * xref:../architecture/architecture-installation.adoc#architecture-installation[Installation and update] -* xref:../storage/index.adoc#index[Storage], xref:../networking/understanding-networking.adoc#understanding-networking[network], xref:../observability/logging/cluster-logging.adoc#cluster-logging[logging], xref:../security/index.adoc#index[security], and xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overview[monitoring considerations] +* xref:../storage/index.adoc#index[Storage], xref:../networking/understanding-networking.adoc#understanding-networking[network], xref:../observability/logging/cluster-logging.adoc#cluster-logging[logging], xref:../security/index.adoc#index[security], and xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[monitoring considerations] 
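The "Moving monitoring components to different nodes" xref updated in the machine set assembly above resolves to node placement settings in the monitoring config map. As a sketch of pinning Prometheus to infrastructure nodes, using the documented `nodeSelector` and `tolerations` fields:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: "" # schedule only onto infra nodes
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule # tolerate the usual infra node taint
----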
[id="mtc-3-to-4-overview-planning-network-considerations-mtc"] == Planning network considerations diff --git a/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc b/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc index f7e48580603b..b5dd873f20d7 100644 --- a/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc +++ b/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc @@ -253,4 +253,4 @@ Review the following monitoring changes when transitioning from {product-title} The default alert that triggers to ensure the availability of the monitoring structure was called `DeadMansSwitch` in {product-title} 3.11. This was renamed to `Watchdog` in {product-title} 4. If you had PagerDuty integration set up with this alert in {product-title} 3.11, you must set up the PagerDuty integration for the `Watchdog` alert in {product-title} 4. -For more information, see xref:../observability/monitoring/managing-alerts.adoc#applying-custom-alertmanager-configuration_managing-alerts[Applying custom Alertmanager configuration]. +For more information, see xref:../observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.adoc#configuring-alert-routing-default-platform-alerts_configuring-alerts-and-notifications[Configuring alert routing for default platform alerts]. diff --git a/modules/accessing-metrics-outside-cluster.adoc b/modules/accessing-metrics-outside-cluster.adoc index d8788c0a52df..cce1c5775246 100644 --- a/modules/accessing-metrics-outside-cluster.adoc +++ b/modules/accessing-metrics-outside-cluster.adoc @@ -1,6 +1,5 @@ // Module included in the following assemblies: // -// * observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc // * observability/monitoring/accessing-third-party-monitoring-apis.adoc :_mod-docs-content-type: PROCEDURE diff --git a/modules/monitoring-about-accessing-monitoring-web-service-apis.adoc b/modules/monitoring-about-accessing-monitoring-web-service-apis.adoc index 224a4601aa0b..16d6aaaba09e 100644 --- a/modules/monitoring-about-accessing-monitoring-web-service-apis.adoc +++ b/modules/monitoring-about-accessing-monitoring-web-service-apis.adoc @@ -13,7 +13,7 @@ You can directly access web service API endpoints from the command line for the * Thanos Ruler * Thanos Querier -[NOTE] +[IMPORTANT] ==== To access Thanos Ruler and Thanos Querier service APIs, the requesting account must have `get` permission on the namespaces resource, which can be granted by binding the `cluster-monitoring-view` cluster role to the account. ==== diff --git a/modules/monitoring-about-managing-alerts.adoc b/modules/monitoring-about-managing-alerts.adoc new file mode 100644 index 000000000000..9ba92e54ed35 --- /dev/null +++ b/modules/monitoring-about-managing-alerts.adoc @@ -0,0 +1,18 @@ +// Module included in the following assemblies: +// +// * observability/monitoring/managing-alerts.adoc + +:_mod-docs-content-type: CONCEPT +[id="about-managing-alerts_{context}"] += Managing alerts + +In {product-title}, the Alerting UI enables you to manage alerts, silences, and alerting rules. + +* *Alerting rules*. Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed. +* *Alerts*. An alert is fired when the conditions defined in an alerting rule are true. Alerts provide a notification that a set of circumstances is apparent within an {product-title} cluster.
+* *Silences*. A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the issue. + +[NOTE] +==== +The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. For example, if you are logged in as a user with the `cluster-admin` role, you can access all alerts, silences, and alerting rules. +==== diff --git a/modules/monitoring-about-monitoring-dashboards.adoc b/modules/monitoring-about-monitoring-dashboards.adoc new file mode 100644 index 000000000000..83359b1694a9 --- /dev/null +++ b/modules/monitoring-about-monitoring-dashboards.adoc @@ -0,0 +1,28 @@ +// Module included in the following assemblies: +// +// * observability/monitoring/reviewing-monitoring-dashboards.adoc + +:_mod-docs-content-type: CONCEPT +[id="mon-dashboards-adm-perspective_{context}"] += Monitoring dashboards in the Administrator perspective + +Use the *Administrator* perspective to access dashboards for the core {product-title} components, including the following items: + +* API performance +* etcd +* Kubernetes compute resources +* Kubernetes network resources +* Prometheus +* USE method dashboards relating to cluster and node performance +* Node performance metrics + +.Example dashboard in the Administrator perspective +image::monitoring-dashboard-administrator.png[] + +[id="mon-dashboards-dev-perspective_{context}"] += Monitoring dashboards in the Developer perspective + +In the *Developer* perspective, you can access only the Kubernetes compute resources dashboards: + +.Example dashboard in the Developer perspective +image::observe-dashboard-developer.png[] \ No newline at end of file diff --git a/modules/monitoring-about-querying-metrics.adoc b/modules/monitoring-about-querying-metrics.adoc deleted file mode 100644 index a8e465100147..000000000000 --- a/modules/monitoring-about-querying-metrics.adoc +++ /dev/null @@ -1,19 +0,0 @@ -// Module included in the following assemblies: -// -// * observability/monitoring/managing-metrics.adoc -// * virt/support/virt-prometheus-queries.adoc - -:_mod-docs-content-type: CONCEPT -[id="about-querying-metrics_{context}"] -= Querying metrics - -The {product-title} monitoring dashboard enables you to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring. - -ifndef::openshift-dedicated,openshift-rosa[] -As a cluster administrator, you can query metrics for all core {product-title} and user-defined projects. -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] -As a `dedicated-admin`, you can query one or more namespaces at a time for metrics about user-defined projects. -endif::openshift-dedicated,openshift-rosa[] - -As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project. 
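The querying-metrics module removed above describes PromQL without showing a query. As a concrete illustration of what a developer might run for a selected project, the following uses the standard cAdvisor series that the kubelet exposes; the `ns1` namespace is hypothetical:

[source,promql]
----
sum(rate(container_cpu_usage_seconds_total{namespace="ns1"}[5m])) by (pod)
----

The query plots the per-pod CPU usage rate over the last five minutes for the selected project, which is the kind of result the monitoring dashboard visualizes.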
diff --git a/modules/monitoring-about-specifying-limits-and-requests-for-monitoring-components.adoc b/modules/monitoring-about-specifying-limits-and-requests-for-monitoring-components.adoc index 1131a8e800fa..cb1d3ff54ba7 100644 --- a/modules/monitoring-about-specifying-limits-and-requests-for-monitoring-components.adoc +++ b/modules/monitoring-about-specifying-limits-and-requests-for-monitoring-components.adoc @@ -3,25 +3,29 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: CONCEPT + [id="about-specifying-limits-and-requests-for-monitoring-components_{context}"] = About specifying limits and requests for monitoring components -You can configure resource limits and request settings for core platform monitoring components and for the components that monitor user-defined projects, including the following components: +You can configure resource limits and requests for the following core platform monitoring components: -* Alertmanager (for core platform monitoring and for user-defined projects) +* Alertmanager * kube-state-metrics * monitoring-plugin * node-exporter * openshift-state-metrics -* Prometheus (for core platform monitoring and for user-defined projects) +* Prometheus * Metrics Server * Prometheus Operator and its admission webhook service * Telemeter Client * Thanos Querier -* Thanos Ruler -By defining resource limits, you limit a container's resource usage, which prevents the container from exceeding the specified maximum values for CPU and memory resources. +You can configure resource limits and requests for the following components that monitor user-defined projects: -By defining resource requests, you specify that a container can be scheduled only on a node that has enough CPU and memory resources available to match the requested resources. +* Alertmanager +* Prometheus +* Thanos Ruler +By defining the resource limits, you limit a container's resource usage, which prevents the container from exceeding the specified maximum values for CPU and memory resources. +By defining the resource requests, you specify that a container can be scheduled only on a node that has enough CPU and memory resources available to match the requested resources. \ No newline at end of file diff --git a/modules/monitoring-accessing-the-alerting-ui.adoc b/modules/monitoring-accessing-the-alerting-ui.adoc index 8ef40b41fd3a..8bf74e776a4c 100644 --- a/modules/monitoring-accessing-the-alerting-ui.adoc +++ b/modules/monitoring-accessing-the-alerting-ui.adoc @@ -4,18 +4,46 @@ // * logging/logging_alerts/log-storage-alerts.adoc :_mod-docs-content-type: PROCEDURE -[id="monitoring-accessing-the-alerting-ui_{context}"] -= Accessing the Alerting UI in the Administrator and Developer perspectives -The Alerting UI is accessible through the *Administrator* perspective and the *Developer* perspective of the {product-title} web console. +// The ultimate solution DOES NOT NEED separate IDs and titles, it is just needed for now so that the tests will not break -* In the *Administrator* perspective, go to *Observe* -> *Alerting*. The three main pages in the Alerting UI in this perspective are the *Alerts*, *Silences*, and *Alerting rules* pages. +// tag::ADM[] +[id="monitoring-accessing-the-alerting-ui-adm_{context}"] += Accessing the Alerting UI from the Administrator perspective +// end::ADM[] -//Next to the title of each of these pages is a link to the Alertmanager interface. 
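The limits-and-requests module above lists the tunable components without an accompanying example. A minimal sketch for core platform monitoring, constraining Alertmanager through the documented `resources` fields of the `cluster-monitoring-config` config map:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      resources:
        limits: # maximum CPU and memory the container may consume
          cpu: 500m
          memory: 1Gi
        requests: # resources a node must have available for scheduling
          cpu: 200m
          memory: 500Mi
----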
+// tag::DEV[]
+[id="monitoring-accessing-the-alerting-ui-dev_{context}"]
+= Accessing the Alerting UI from the Developer perspective
+// end::DEV[]

-* In the *Developer* perspective, go to *Observe* -> *<project_name>* -> *Alerts*. In this perspective, alerts, silences, and alerting rules are all managed from the *Alerts* page. The results shown in the *Alerts* page are specific to the selected project.
+// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples
+
+// tag::ADM[]
+:perspective: Administrator
+// end::ADM[]
+
+// tag::DEV[]
+:perspective: Developer
+// end::DEV[]
+
+The Alerting UI is accessible through the *{perspective}* perspective of the {product-title} web console.
+
+// tag::ADM[]
+* From the *Administrator* perspective, go to *Observe* -> *Alerting*. The three main pages in the Alerting UI in this perspective are the *Alerts*, *Silences*, and *Alerting rules* pages.
+// end::ADM[]
+
+// tag::DEV[]
+* From the *Developer* perspective, go to *Observe* and click the *Alerts* tab.
+* Select the project that you want to manage alerts for from the *Project:* list.
+
+In this perspective, alerts, silences, and alerting rules are all managed from the *Alerts* tab. The results shown in the *Alerts* tab are specific to the selected project.

 [NOTE]
 ====
 In the *Developer* perspective, you can select from core {product-title} and user-defined projects that you have access to in the *Project:* list. However, alerts, silences, and alerting rules relating to core {product-title} projects are not displayed if you are not logged in as a cluster administrator.
 ====
+// end::DEV[]
+
+// Unset the source code block attributes just to be safe.
+:!perspective:
diff --git a/modules/monitoring-adding-a-secret-to-the-alertmanager-configuration.adoc b/modules/monitoring-adding-a-secret-to-the-alertmanager-configuration.adoc
index 17b599cd2cfe..383a6c45d3bb 100644
--- a/modules/monitoring-adding-a-secret-to-the-alertmanager-configuration.adoc
+++ b/modules/monitoring-adding-a-secret-to-the-alertmanager-configuration.adoc
@@ -2,64 +2,68 @@
 //
 // * observability/monitoring/configuring-the-monitoring-stack.adoc
 
-:_mod-docs-content-type: PROCEDURE
 [id="monitoring-adding-a-secret-to-the-alertmanager-configuration_{context}"]
-= Adding a secret to the Alertmanager configuration
+= Adding a secret to the Alertmanager configuration
 
-ifndef::openshift-dedicated,openshift-rosa[]
-You can add secrets to the Alertmanager configuration for core platform monitoring components by editing the `cluster-monitoring-config` config map in the `openshift-monitoring` project.
-endif::openshift-dedicated,openshift-rosa[]
-ifdef::openshift-dedicated,openshift-rosa[]
-You can add secrets to the Alertmanager configuration for user-defined projects by editing the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project.
-endif::openshift-dedicated,openshift-rosa[] +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples + +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:component: alertmanagerMain +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:component: alertmanager +// end::UWM[] + +You can add secrets to the Alertmanager configuration by editing the `{configmap-name}` config map in the `{namespace-name}` project. After you add a secret to the config map, the secret is mounted as a volume at `/etc/alertmanager/secrets/` within the `alertmanager` container for the Alertmanager pods. .Prerequisites +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` config map. +// end::CPM[] +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are configuring core {product-title} monitoring components in the `openshift-monitoring` project*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` config map. -** You have created the secret to be configured in Alertmanager in the `openshift-monitoring` project. -* *If you are configuring components that monitor user-defined projects*: -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** You have created the secret to be configured in Alertmanager in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created. -* You have created the secret to be configured in Alertmanager in the `openshift-user-workload-monitoring` project. endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] +* You have created the secret to be configured in Alertmanager in the `{namespace-name}` project. * You have installed the OpenShift CLI (`oc`). .Procedure -. Edit the `ConfigMap` object. -ifndef::openshift-dedicated,openshift-rosa[] -** *To add a secret configuration to Alertmanager for core platform monitoring*: -.. Edit the `cluster-monitoring-config` config map in the `openshift-monitoring` project: +. Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -.. Add a `secrets:` section under `data/config.yaml/alertmanagerMain` with the following configuration: +. 
Add a `secrets:` section under `data/config.yaml/{component}` with the following configuration: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - alertmanagerMain: - secrets: <1> - - <2> + {component}: + secrets: # <1> + - # <2> - ---- <1> This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object. @@ -67,67 +71,25 @@ data: + The following sample config map settings configure Alertmanager to use two `Secret` objects named `test-secret-basic-auth` and `test-secret-api-token`: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - alertmanagerMain: + {component}: secrets: - test-secret-basic-auth - test-secret-api-token ---- -** *To add a secret configuration to Alertmanager for user-defined project monitoring*: -endif::openshift-dedicated,openshift-rosa[] - -.. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config ----- - -.. Add a `secrets:` section under `data/config.yaml/alertmanager/secrets` with the following configuration: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - alertmanager: - secrets: <1> - - <2> - - ----- -<1> This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object. -<2> The name of the `Secret` object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line. -+ -The following sample config map settings configure Alertmanager to use two `Secret` objects named `test-secret` and `test-secret-api-token`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - alertmanager: - enabled: true - secrets: - - test-secret - - test-api-receiver-token ----- - . Save the file to apply the changes. The new configuration is applied automatically. +// Unset the source code block attributes just to be safe. +:!configmap-name: +:!namespace-name: +:!component: + diff --git a/modules/monitoring-assigning-tolerations-to-monitoring-components.adoc b/modules/monitoring-assigning-tolerations-to-monitoring-components.adoc index 947056f69d83..f93507178b93 100644 --- a/modules/monitoring-assigning-tolerations-to-monitoring-components.adoc +++ b/modules/monitoring-assigning-tolerations-to-monitoring-components.adoc @@ -3,100 +3,67 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE + [id="assigning-tolerations-to-monitoring-components_{context}"] = Assigning tolerations to monitoring components -ifndef::openshift-dedicated,openshift-rosa[] +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples. 
+// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:component: alertmanagerMain +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:component: thanosRuler +// end::UWM[] + +// tag::CPM[] You can assign tolerations to any of the monitoring stack components to enable moving them to tainted nodes. -endif::openshift-dedicated,openshift-rosa[] +// end::CPM[] -ifdef::openshift-dedicated,openshift-rosa[] +// tag::UWM[] You can assign tolerations to the components that monitor user-defined projects, to enable moving them to tainted worker nodes. Scheduling is not permitted on control plane or infrastructure nodes. -endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] .Prerequisites +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +// end::CPM[] + +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are configuring core {product-title} monitoring components*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` `ConfigMap` object. -* *If you are configuring components that monitor user-defined projects*: -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` `ConfigMap` object exists in the `openshift-user-workload-monitoring` namespace. This object is created by default when the cluster is created. endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] * You have installed the OpenShift CLI (`oc`). .Procedure -. Edit the `ConfigMap` object: -ifndef::openshift-dedicated,openshift-rosa[] -** *To assign tolerations to a component that monitors core {product-title} projects*: -.. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config ----- - -.. Specify `tolerations` for the component: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring -data: - config.yaml: | - : - tolerations: - ----- -+ -Substitute `` and `` accordingly. -+ -For example, `oc adm taint nodes node1 key1=value1:NoSchedule` adds a taint to `node1` with the key `key1` and the value `value1`. This prevents monitoring components from deploying pods on `node1` unless a toleration is configured for that taint. 
The following example configures the `alertmanagerMain` component to tolerate the example taint:
-+
-[source,yaml,subs=quotes]
-----
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: cluster-monitoring-config
-  namespace: openshift-monitoring
-data:
-  config.yaml: |
-    alertmanagerMain:
-      tolerations:
-      - key: "key1"
-        operator: "Equal"
-        value: "value1"
-        effect: "NoSchedule"
-----
-
-** *To assign tolerations to a component that monitors user-defined projects*:
-endif::openshift-dedicated,openshift-rosa[]
-.. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project:
+. Edit the `{configmap-name}` config map in the `{namespace-name}` project:
 +
-[source,terminal]
+[source,terminal,subs="attributes+"]
 ----
-$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
+$ oc -n {namespace-name} edit configmap {configmap-name}
 ----
 
-.. Specify `tolerations` for the component:
+. Specify `tolerations` for the component:
 +
-[source,yaml]
+[source,yaml,subs="attributes+"]
 ----
 apiVersion: v1
 kind: ConfigMap
 metadata:
-  name: user-workload-monitoring-config
-  namespace: openshift-user-workload-monitoring
+  name: {configmap-name}
+  namespace: {namespace-name}
 data:
   config.yaml: |
     <component>:
       tolerations:
         <toleration_specification>
 ----
 +
 Substitute `<component>` and `<toleration_specification>` accordingly.
 +
-For example, `oc adm taint nodes node1 key1=value1:NoSchedule` adds a taint to `node1` with the key `key1` and the value `value1`. This prevents monitoring components from deploying pods on `node1` unless a toleration is configured for that taint. The following example configures the `thanosRuler` component to tolerate the example taint:
+For example, `oc adm taint nodes node1 key1=value1:NoSchedule` adds a taint to `node1` with the key `key1` and the value `value1`. This prevents monitoring components from deploying pods on `node1` unless a toleration is configured for that taint. The following example configures the `{component}` component to tolerate the example taint:
 +
-[source,yaml]
+[source,yaml,subs="attributes+"]
 ----
 apiVersion: v1
 kind: ConfigMap
 metadata:
-  name: user-workload-monitoring-config
-  namespace: openshift-user-workload-monitoring
+  name: {configmap-name}
+  namespace: {namespace-name}
 data:
   config.yaml: |
-    thanosRuler:
+    {component}:
       tolerations:
       - key: "key1"
        operator: "Equal"
        value: "value1"
@@ -126,3 +93,8 @@ data:
 ----
 
 . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
+
+// Unset the source code block attributes just to be safe.
+:!configmap-name: +:!namespace-name: +:!component: \ No newline at end of file diff --git a/modules/monitoring-attaching-additional-labels-to-your-time-series-and-alerts.adoc b/modules/monitoring-attaching-additional-labels-to-your-time-series-and-alerts.adoc index e947f69ecc87..0e52543f4a4b 100644 --- a/modules/monitoring-attaching-additional-labels-to-your-time-series-and-alerts.adoc +++ b/modules/monitoring-attaching-additional-labels-to-your-time-series-and-alerts.adoc @@ -3,109 +3,68 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE + [id="attaching-additional-labels-to-your-time-series-and-alerts_{context}"] = Attaching additional labels to your time series and alerts +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples + +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:component: prometheusK8s +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:component: prometheus +// end::UWM[] + You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus. .Prerequisites +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +// end::CPM[] +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are configuring core {product-title} monitoring components*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` `ConfigMap` object. -* *If you are configuring components that monitor user-defined projects*: -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created. endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] * You have installed the OpenShift CLI (`oc`). .Procedure -. Edit the `ConfigMap` object: -ifndef::openshift-dedicated,openshift-rosa[] -** *To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors core {product-title} projects*: -.. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project: +. Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -.. Define a map of labels you want to add for every metric under `data/config.yaml`: +. 
Define labels you want to add for every metric under `data/config.yaml`: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheusK8s: + {component}: externalLabels: : # <1> ---- -+ -<1> Substitute `: ` with a map of key-value pairs where `` is a unique name for the new label and `` is its value. -+ -[WARNING] -==== -* Do not use `prometheus` or `prometheus_replica` as key names, because they are reserved and will be overwritten. - -* Do not use `cluster` or `managed_cluster` as key names. Using them can cause issues where you are unable to see data in the developer dashboards. -==== -+ -For example, to add metadata about the region and environment to all time series and alerts, use the following example: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring -data: - config.yaml: | - prometheusK8s: - externalLabels: - region: eu - environment: prod ----- - -.. Save the file to apply the changes. The new configuration is applied automatically. - -** *To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors user-defined projects*: -endif::openshift-dedicated,openshift-rosa[] -.. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config ----- - -.. Define a map of labels you want to add for every metric under `data/config.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - prometheus: - externalLabels: - : # <1> ----- -+ -<1> Substitute `: ` with a map of key-value pairs where `` is a unique name for the new label and `` is its value. +<1> Substitute `: ` with key-value pairs where `` is a unique name for the new label and `` is its value. + [WARNING] ==== @@ -113,27 +72,34 @@ data: * Do not use `cluster` or `managed_cluster` as key names. Using them can cause issues where you are unable to see data in the developer dashboards. ==== +// tag::UWM[] + [NOTE] ==== In the `openshift-user-workload-monitoring` project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting `externalLabels` for `prometheus` in the `user-workload-monitoring-config` `ConfigMap` object will only configure external labels for metrics and not for any rules. ==== +// end::UWM[] + -For example, to add metadata about the region and environment to all time series and alerts related to user-defined projects, use the following example: +For example, to add metadata about the region and environment to all time series and alerts, use the following example: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheus: + {component}: externalLabels: region: eu environment: prod ---- -.. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. +. Save the file to apply the changes. 
The pods affected by the new configuration are automatically redeployed. + +// Unset the source code block attributes just to be safe. +:!configmap-name: +:!namespace-name: +:!component: diff --git a/modules/monitoring-choosing-a-metrics-collection-profile.adoc b/modules/monitoring-choosing-a-metrics-collection-profile.adoc index 2ebf9722d235..af8199a5ebf6 100644 --- a/modules/monitoring-choosing-a-metrics-collection-profile.adoc +++ b/modules/monitoring-choosing-a-metrics-collection-profile.adoc @@ -6,6 +6,9 @@ [id="choosing-a-metrics-collection-profile_{context}"] = Choosing a metrics collection profile +:FeatureName: Metrics collection profile +include::snippets/technology-preview.adoc[] + To choose a metrics collection profile for core {product-title} monitoring components, edit the `cluster-monitoring-config` `ConfigMap` object. .Prerequisites diff --git a/modules/monitoring-common-terms.adoc b/modules/monitoring-common-terms.adoc index fb6bee41e943..e57dd7ac9e71 100644 --- a/modules/monitoring-common-terms.adoc +++ b/modules/monitoring-common-terms.adoc @@ -3,7 +3,7 @@ // * observability/monitoring/monitoring-overview.adoc :_mod-docs-content-type: REFERENCE -[id="openshift-monitoring-common-terms_{context}"] +[id="monitoring-common-terms_{context}"] = Glossary of common terms for {product-title} monitoring This glossary defines common terms that are used in {product-title} architecture. diff --git a/modules/monitoring-configurable-monitoring-components.adoc b/modules/monitoring-configurable-monitoring-components.adoc index 173a0ad2e257..7c94c92dc026 100644 --- a/modules/monitoring-configurable-monitoring-components.adoc +++ b/modules/monitoring-configurable-monitoring-components.adoc @@ -2,53 +2,82 @@ // // * observability/monitoring/configuring-the-monitoring-stack.adoc +:_mod-docs-content-type: REFERENCE + [id="configurable-monitoring-components_{context}"] = Configurable monitoring components -This table shows the monitoring components you can configure and the keys used to specify the components in the -ifndef::openshift-dedicated,openshift-rosa[] -`cluster-monitoring-config` and -endif::openshift-dedicated,openshift-rosa[] -`user-workload-monitoring-config` `ConfigMap` objects. +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples. +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:alertmanager: alertmanagerMain +:prometheus: prometheusK8s +:thanosname: Thanos Querier +:thanos: thanosQuerier +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:alertmanager: alertmanager +:prometheus: prometheus +:thanosname: Thanos Ruler +:thanos: thanosRuler +// end::UWM[] +This table shows the monitoring components you can configure and the keys used to specify the components in the `{configmap-name}` config map. + +// tag::UWM[] ifdef::openshift-dedicated,openshift-rosa[] [WARNING] ==== -Do not modify the monitoring components in the `cluster-monitoring-config` `ConfigMap` object. Red Hat Site Reliability Engineers (SRE) use these components to monitor the core cluster components and Kubernetes services. +Do not modify the monitoring components in the `cluster-monitoring-config` `ConfigMap` object. Red{nbsp}Hat Site Reliability Engineers (SRE) use these components to monitor the core cluster components and Kubernetes services. 
==== endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] -ifndef::openshift-dedicated,openshift-rosa[] -.Configurable monitoring components +// tag::CPM[] +.Configurable core platform monitoring components +// end::CPM[] +// tag::UWM[] +.Configurable monitoring components for user-defined projects +// end::UWM[] [options="header"] |==== -|Component |cluster-monitoring-config config map key |user-workload-monitoring-config config map key -|Prometheus Operator |`prometheusOperator` |`prometheusOperator` -|Prometheus |`prometheusK8s` |`prometheus` -|Alertmanager |`alertmanagerMain` | `alertmanager` -|kube-state-metrics |`kubeStateMetrics` | -|monitoring-plugin | `monitoringPlugin` | -|openshift-state-metrics |`openshiftStateMetrics` | -|Telemeter Client |`telemeterClient` | -|Metrics Server |`metricsServer` | -|Thanos Querier |`thanosQuerier` | -|Thanos Ruler | |`thanosRuler` +|Component |{configmap-name} config map key +|Prometheus Operator |`prometheusOperator` +|Prometheus |`{prometheus}` +|Alertmanager |`{alertmanager}` +|{thanosname} | `{thanos}` +// tag::CPM[] +|kube-state-metrics |`kubeStateMetrics` +|monitoring-plugin | `monitoringPlugin` +|openshift-state-metrics |`openshiftStateMetrics` +|Telemeter Client |`telemeterClient` +|Metrics Server |`metricsServer` +// end::CPM[] |==== -[NOTE] +ifndef::openshift-dedicated,openshift-rosa[] +[WARNING] ==== -The Prometheus key is called `prometheusK8s` in the `cluster-monitoring-config` `ConfigMap` object and `prometheus` in the `user-workload-monitoring-config` `ConfigMap` object. +Different configuration changes to the `ConfigMap` object result in different outcomes: + +* The pods are not redeployed. Therefore, there is no service outage. + +* The affected pods are redeployed: + +** For single-node clusters, this results in temporary service outage. + +** For multi-node clusters, because of high-availability, the affected pods are gradually rolled out and the monitoring stack remains available. + +** Configuring and resizing a persistent volume always results in a service outage, regardless of high availability. + +Each procedure that requires a change in the config map includes its expected outcome. ==== endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] -.Configurable monitoring components -[options="header"] -|=== -|Component |user-workload-monitoring-config config map key -|Alertmanager |`alertmanager` -|Prometheus Operator |`prometheusOperator` -|Prometheus |`prometheus` -|Thanos Ruler |`thanosRuler` -|=== -endif::openshift-dedicated,openshift-rosa[] +// Unset the source code block attributes just to be safe. 
+:!configmap-name: +:!alertmanager: +:!prometheus: +:!thanosname: +:!thanos: diff --git a/modules/monitoring-configuring-a-persistent-volume-claim.adoc b/modules/monitoring-configuring-a-persistent-volume-claim.adoc index 68a6a40c3d5c..b420f282b635 100644 --- a/modules/monitoring-configuring-a-persistent-volume-claim.adoc +++ b/modules/monitoring-configuring-a-persistent-volume-claim.adoc @@ -3,143 +3,113 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE -[id="configuring-a-persistent-volume-claim_{context}"] +[id="configuring-a-persistent-volume-claim_{context}"] = Configuring a persistent volume claim +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples + +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:component: prometheusK8s +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:component: thanosRuler +// end::UWM[] + To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC). .Prerequisites +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +// end::CPM[] +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are configuring core {product-title} monitoring components*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` `ConfigMap` object. -* *If you are configuring components that monitor user-defined projects*: -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created. endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] * You have installed the OpenShift CLI (`oc`). .Procedure -. Edit the `ConfigMap` object: -ifndef::openshift-dedicated,openshift-rosa[] -** *To configure a PVC for a component that monitors core {product-title} projects*: -.. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project: +. Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -.. Add your PVC configuration for the component under `data/config.yaml`: +. 
Add your PVC configuration for the component under `data/config.yaml`: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - : #<1> + : # <1> volumeClaimTemplate: spec: - storageClassName: #<2> + storageClassName: # <2> resources: requests: - storage: #<3> + storage: # <3> ---- -<1> Specify the core monitoring component for which you want to configure the PVC. +<1> Specify the monitoring component for which you want to configure the PVC. <2> Specify an existing storage class. If a storage class is not specified, the default storage class is used. <3> Specify the amount of required storage. + -See the link:https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims[Kubernetes documentation on PersistentVolumeClaims] for information on how to specify `volumeClaimTemplate`. -+ -The following example configures a PVC that claims persistent storage for the Prometheus instance that monitors core {product-title} components: +The following example configures a PVC that claims persistent storage for +// tag::CPM[] +Prometheus: +// end::CPM[] +// tag::UWM[] +Thanos Ruler: +// end::UWM[] + -[source,yaml] +.Example PVC configuration +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheusK8s: + {component}: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: +# tag::CPM[] storage: 40Gi ----- - -** *To configure a PVC for a component that monitors user-defined projects*: -endif::openshift-dedicated,openshift-rosa[] -.. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config ----- - -.. Add your PVC configuration for the component under `data/config.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - : #<1> - volumeClaimTemplate: - spec: - storageClassName: #<2> - resources: - requests: - storage: #<3> ----- -<1> Specify the component for user-defined monitoring for which you want to configure the PVC. -<2> Specify an existing storage class. If a storage class is not specified, the default storage class is used. -<3> Specify the amount of required storage. -+ -See the link:https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims[Kubernetes documentation on PersistentVolumeClaims] for information on how to specify `volumeClaimTemplate`. -+ -The following example configures a PVC that claims persistent storage for Thanos Ruler: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - thanosRuler: - volumeClaimTemplate: - spec: - storageClassName: my-storage-class - resources: - requests: +# end::CPM[] +# tag::UWM[] storage: 10Gi +# end::UWM[] ---- +// tag::UWM[] + [NOTE] ==== Storage requirements for the `thanosRuler` component depend on the number of rules that are evaluated and how many samples each rule generates. 
==== +// end::UWM[] . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied. + @@ -147,3 +117,8 @@ Storage requirements for the `thanosRuler` component depend on the number of rul ==== When you update the config map with a PVC configuration, the affected `StatefulSet` object is recreated, resulting in a temporary service outage. ==== + +// Unset the source code block attributes just to be safe. +:!configmap-name: +:!namespace-name: +:!component: diff --git a/modules/monitoring-configuring-alert-receivers.adoc b/modules/monitoring-configuring-alert-routing-console.adoc similarity index 83% rename from modules/monitoring-configuring-alert-receivers.adoc rename to modules/monitoring-configuring-alert-routing-console.adoc index c8d560794b80..f1b7fae98f95 100644 --- a/modules/monitoring-configuring-alert-receivers.adoc +++ b/modules/monitoring-configuring-alert-routing-console.adoc @@ -4,10 +4,15 @@ // * post_installation_configuration/configuring-alert-notifications.adoc :_mod-docs-content-type: PROCEDURE -[id="configuring-alert-receivers_{context}"] -= Configuring alert receivers +[id="configuring-alert-routing-console_{context}"] += Configuring alert routing with the {product-title} web console -You can configure alert receivers to ensure that you learn about important issues with your cluster. +You can configure alert routing through the {product-title} web console to ensure that you learn about important issues with your cluster. + +[NOTE] +==== +The {product-title} web console provides fewer settings to configure alert routing than the `alertmanager-main` secret. To configure alert routing with the access to more configuration settings, see "Configuring alert routing for default platform alerts". +==== .Prerequisites diff --git a/modules/monitoring-configuring-notifications-for-default-platform-alerts.adoc b/modules/monitoring-configuring-alert-routing-default-platform-alerts.adoc similarity index 91% rename from modules/monitoring-configuring-notifications-for-default-platform-alerts.adoc rename to modules/monitoring-configuring-alert-routing-default-platform-alerts.adoc index d029f9f22b1d..859e6c7d2b20 100644 --- a/modules/monitoring-configuring-notifications-for-default-platform-alerts.adoc +++ b/modules/monitoring-configuring-alert-routing-default-platform-alerts.adoc @@ -3,14 +3,14 @@ // * observability/monitoring/managing-alerts.adoc :_mod-docs-content-type: PROCEDURE -[id="configuring-notifications-for-default-platform-alerts_{context}"] -= Configuring notifications for default platform alerts +[id="configuring-alert-routing-default-platform-alerts_{context}"] += Configuring alert routing for default platform alerts You can configure Alertmanager to send notifications. Customize where and how Alertmanager sends notifications about default platform alerts by editing the default configuration in the `alertmanager-main` secret in the `openshift-monitoring` namespace. -[IMPORTANT] +[NOTE] ==== -Alertmanager does not send notifications by default. It is recommended to configure Alertmanager to receive notifications by setting up notifications details in the `alertmanager-main` secret configuration file. +All features of a supported version of upstream Alertmanager are also supported in an {product-title} Alertmanager configuration. 
To check all the configuration options of a supported version of upstream Alertmanager, see link:https://prometheus.io/docs/alerting/0.27/configuration/[Alertmanager configuration] (Prometheus documentation). ==== .Prerequisites diff --git a/modules/monitoring-creating-alert-routing-for-user-defined-projects.adoc b/modules/monitoring-configuring-alert-routing-for-user-defined-projects.adoc similarity index 78% rename from modules/monitoring-creating-alert-routing-for-user-defined-projects.adoc rename to modules/monitoring-configuring-alert-routing-for-user-defined-projects.adoc index 691f8d8e1c43..3c6d6f5ddee8 100644 --- a/modules/monitoring-creating-alert-routing-for-user-defined-projects.adoc +++ b/modules/monitoring-configuring-alert-routing-for-user-defined-projects.adoc @@ -3,10 +3,9 @@ // * observability/monitoring/managing-alerts.adoc :_mod-docs-content-type: PROCEDURE -[id="creating-alert-routing-for-user-defined-projects_{context}"] -= Creating alert routing for user-defined projects +[id="configuring-alert-routing-for-user-defined-projects_{context}"] += Configuring alert routing for user-defined projects -[role="_abstract"] If you are a non-administrator user who has been given the `alert-routing-edit` cluster role, you can create or edit alert routing for user-defined projects. .Prerequisites @@ -43,13 +42,7 @@ spec: webhookConfigs: - url: https://example.org/post ---- -+ -[NOTE] -==== -For user-defined alerting rules, user-defined routing is scoped to the namespace in which the resource is defined. -For example, a routing configuration defined in the `AlertmanagerConfig` object for namespace `ns1` only applies to `PrometheusRules` resources in the same namespace. -==== -+ + . Save the file. . Apply the resource to the cluster: diff --git a/modules/monitoring-configuring-notifications-for-user-defined-alerts.adoc b/modules/monitoring-configuring-alert-routing-user-defined-alerts-secret.adoc similarity index 74% rename from modules/monitoring-configuring-notifications-for-user-defined-alerts.adoc rename to modules/monitoring-configuring-alert-routing-user-defined-alerts-secret.adoc index d8852a26d3bd..a2b4197754c4 100644 --- a/modules/monitoring-configuring-notifications-for-user-defined-alerts.adoc +++ b/modules/monitoring-configuring-alert-routing-user-defined-alerts-secret.adoc @@ -3,19 +3,25 @@ // * observability/monitoring/managing-alerts.adoc :_mod-docs-content-type: PROCEDURE -[id="configuring-notifications-for-user-defined-alerts_{context}"] -= Configuring notifications for user-defined alerts +[id="configuring-alert-routing-user-defined-alerts-secret_{context}"] += Configuring alert routing for user-defined projects with the Alertmanager secret If you have enabled a separate instance of Alertmanager that is dedicated to user-defined alert routing, you can customize where and how the instance sends notifications by editing the `alertmanager-user-workload` secret in the `openshift-user-workload-monitoring` namespace. +[NOTE] +==== +All features of a supported version of upstream Alertmanager are also supported in an {product-title} Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see link:https://prometheus.io/docs/alerting/0.27/configuration/[Alertmanager configuration] (Prometheus documentation). +==== + .Prerequisites +ifndef::openshift-dedicated,openshift-rosa[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. 
+* You have enabled a separate instance of Alertmanager for user-defined alert routing. +endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-rosa,openshift-dedicated[] * You have access to the cluster as a user with the `dedicated-admin` role. endif::[] -ifndef::openshift-rosa,openshift-dedicated[] -* You have access to the cluster as a user with the `cluster-admin` cluster role. -endif::[] * You have installed the OpenShift CLI (`oc`). .Procedure diff --git a/modules/monitoring-configuring-external-alertmanagers.adoc b/modules/monitoring-configuring-external-alertmanagers.adoc index c3918c134ab5..4b53594e8743 100644 --- a/modules/monitoring-configuring-external-alertmanagers.adoc +++ b/modules/monitoring-configuring-external-alertmanagers.adoc @@ -3,144 +3,107 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE + [id="monitoring-configuring-external-alertmanagers_{context}"] = Configuring external Alertmanager instances +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples + +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:component: prometheusK8s +:component-name: Prometheus +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:component: thanosRuler +:component-name: Thanos Ruler +// end::UWM[] + The {product-title} monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. -ifndef::openshift-dedicated,openshift-rosa[] -You can add external Alertmanager instances to route alerts for core {product-title} projects or user-defined projects. -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] + +// tag::CPM[] +You can add external Alertmanager instances to route alerts for core {product-title} projects. +// end::CPM[] +// tag::UWM[] You can add external Alertmanager instances to route alerts for user-defined projects. -endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance. .Prerequisites +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +// end::CPM[] +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are configuring core {product-title} monitoring components in the `openshift-monitoring` project*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` config map. -* *If you are configuring components that monitor user-defined projects*: -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. 
endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created. endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] * You have installed the OpenShift CLI (`oc`). .Procedure -. Edit the `ConfigMap` object. -ifndef::openshift-dedicated,openshift-rosa[] -** *To configure additional Alertmanagers for routing alerts from core {product-title} projects*: -.. Edit the `cluster-monitoring-config` config map in the `openshift-monitoring` project: +. Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -.. Add an `additionalAlertmanagerConfigs:` section under `data/config.yaml/prometheusK8s`. - -.. Add the configuration details for additional Alertmanagers in this section: +. Add an `additionalAlertmanagerConfigs` section with configuration details under +// tag::CPM[] +`data/config.yaml/prometheusK8s`: +// end::CPM[] +// tag::UWM[] +`data/config.yaml/`: +// end::UWM[] + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | +# tag::CPM[] prometheusK8s: +# end::CPM[] +# tag::UWM[] + : # <2> +# end::UWM[] additionalAlertmanagerConfigs: - - + - # <1> ---- -+ -For ``, substitute authentication and other configuration details for additional Alertmanager instances. +<1> Substitute `` with authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (`bearerToken`) and client TLS (`tlsConfig`). -The following sample config map configures an additional Alertmanager using a bearer token with client TLS authentication: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring -data: - config.yaml: | - prometheusK8s: - additionalAlertmanagerConfigs: - - scheme: https - pathPrefix: / - timeout: "30s" - apiVersion: v1 - bearerToken: - name: alertmanager-bearer-token - key: token - tlsConfig: - key: - name: alertmanager-tls - key: tls.key - cert: - name: alertmanager-tls - key: tls.crt - ca: - name: alertmanager-tls - key: tls.ca - staticConfigs: - - external-alertmanager1-remote.com - - external-alertmanager1-remote2.com ----- - -** *To configure additional Alertmanager instances for routing alerts from user-defined projects*: -endif::openshift-dedicated,openshift-rosa[] - -.. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config ----- - -.. Add a `/additionalAlertmanagerConfigs:` section under `data/config.yaml/`. - -.. Add the configuration details for additional Alertmanagers in this section: +// tag::UWM[] +<2> Substitute `` for one of two supported external Alertmanager components: `prometheus` or `thanosRuler`. 
+// end::UWM[] + -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - : - additionalAlertmanagerConfigs: - - ----- +The following sample config map configures an additional Alertmanager for {component-name} by using a bearer token with client TLS authentication: + -For ``, substitute one of two supported external Alertmanager components: `prometheus` or `thanosRuler`. -+ -For ``, substitute authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (`bearerToken`) and client TLS (`tlsConfig`). The following sample config map configures an additional Alertmanager using Thanos Ruler with a bearer token and client TLS authentication: -+ -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - thanosRuler: + {component}: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / @@ -164,4 +127,10 @@ data: - external-alertmanager1-remote2.com ---- -. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. \ No newline at end of file +. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. + +// Unset the source code block attributes just to be safe. +:!configmap-name: +:!namespace-name: +:!component: +:!component-name: \ No newline at end of file diff --git a/modules/monitoring-configuring-metrics-collection-profiles.adoc b/modules/monitoring-configuring-metrics-collection-profiles.adoc index 63a30051f1c2..ac518fe893e5 100644 --- a/modules/monitoring-configuring-metrics-collection-profiles.adoc +++ b/modules/monitoring-configuring-metrics-collection-profiles.adoc @@ -4,17 +4,10 @@ :_mod-docs-content-type: CONCEPT [id="configuring-metrics-collection-profiles_{context}"] -= Configuring metrics collection profiles += About metrics collection profiles -[IMPORTANT] -==== -[subs="attributes+"] -Using a metrics collection profile is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. -Red Hat does not recommend using them in production. -These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. - -For more information about the support scope of Red Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview[https://access.redhat.com/support/offerings/techpreview]. -==== +:FeatureName: Metrics collection profile +include::snippets/technology-preview.adoc[] By default, Prometheus collects metrics exposed by all default metrics targets in {product-title} components. However, you might want Prometheus to collect fewer metrics from a cluster in certain scenarios: @@ -26,9 +19,6 @@ You can use a metrics collection profile to collect either the default amount of When you collect minimal metrics data, basic monitoring features such as alerting continue to work. At the same time, the CPU and memory resources required by Prometheus decrease. 
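For example, the following sketch selects the `minimal` profile. It assumes the `collectionProfile` key under `prometheusK8s` in the `cluster-monitoring-config` config map that the related procedure module edits:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      collectionProfile: minimal # assumed key for this Technology Preview feature; `full` is the default
----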
-[id="about-metrics-collection-profiles_{context}"] -== About metrics collection profiles - You can enable one of two metrics collection profiles: * *full*: Prometheus collects metrics data exposed by all platform components. This setting is the default. diff --git a/modules/monitoring-configuring-pod-topology-spread-constraints.adoc b/modules/monitoring-configuring-pod-topology-spread-constraints.adoc index 812bc588a66a..71ec3ea8cc3f 100644 --- a/modules/monitoring-configuring-pod-topology-spread-constraints.adoc +++ b/modules/monitoring-configuring-pod-topology-spread-constraints.adoc @@ -3,63 +3,76 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE + [id="configuring-pod-topology-spread-constraints_{context}"] = Configuring pod topology spread constraints +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples + +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:component: prometheusK8s +:component-name: Prometheus +:label: prometheus +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:component: thanosRuler +:component-name: Thanos Ruler +:label: thanos-ruler +// end::UWM[] + You can configure pod topology spread constraints for -ifndef::openshift-dedicated,openshift-rosa[] +// tag::CPM[] all the pods deployed by the {cmo-full} -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] +// end::CPM[] +// tag::UWM[] all the pods for user-defined monitoring -endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones. -You can configure pod topology spread constraints for monitoring pods by using -ifndef::openshift-dedicated,openshift-rosa[] -the `cluster-monitoring-config` or -endif::openshift-dedicated,openshift-rosa[] -the `user-workload-monitoring-config` config map. +You can configure pod topology spread constraints for monitoring pods by using the `{configmap-name}` config map. .Prerequisites +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +// end::CPM[] +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are configuring pods for core {product-title} monitoring:* -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` `ConfigMap` object. -* *If you are configuring pods for user-defined monitoring:* -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. 
endif::openshift-dedicated,openshift-rosa[] + ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created. endif::openshift-dedicated,openshift-rosa[] - +// end::UWM[] * You have installed the OpenShift CLI (`oc`). .Procedure -ifndef::openshift-dedicated,openshift-rosa[] -* *To configure pod topology spread constraints for core {product-title} monitoring:* - -. Edit the `cluster-monitoring-config` config map in the `openshift-monitoring` project: +. Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- . Add the following settings under the `data/config.yaml` field to configure pod topology spread constraints: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | : # <1> @@ -82,87 +95,36 @@ Specify `ScheduleAnyway` if you want the scheduler to still schedule the pod but <5> Specify `labelSelector` to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. + -.Example configuration for Prometheus -[source,yaml] +.Example configuration for {component-name} +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheusK8s: + {component}: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring +# tag::CPM[] whenUnsatisfiable: DoNotSchedule - labelSelector: - matchLabels: - app.kubernetes.io/name: prometheus ----- - -. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. - -* *To configure pod topology spread constraints for user-defined monitoring:* -endif::openshift-dedicated,openshift-rosa[] - -. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config ----- - -. Add the following settings under the `data/config.yaml` field to configure pod topology spread constraints: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - : # <1> - topologySpreadConstraints: - - maxSkew: # <2> - topologyKey: # <3> - whenUnsatisfiable: # <4> - labelSelector: # <5> - ----- -<1> Specify a name of the component for which you want to set up pod topology spread constraints. -<2> Specify a numeric value for `maxSkew`, which defines the degree to which pods are allowed to be unevenly distributed. -<3> Specify a key of node labels for `topologyKey`. -Nodes that have a label with this key and identical values are considered to be in the same topology. -The scheduler tries to put a balanced number of pods into each domain. -<4> Specify a value for `whenUnsatisfiable`. -Available options are `DoNotSchedule` and `ScheduleAnyway`. 
-Specify `DoNotSchedule` if you want the `maxSkew` value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. -Specify `ScheduleAnyway` if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew. -<5> Specify `labelSelector` to find matching pods. -Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. -+ -.Example configuration for Thanos Ruler -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - thanosRuler: - topologySpreadConstraints: - - maxSkew: 1 - topologyKey: monitoring +# end::CPM[] +# tag::UWM[] whenUnsatisfiable: ScheduleAnyway +# end::UWM[] labelSelector: matchLabels: - app.kubernetes.io/name: thanos-ruler + app.kubernetes.io/name: {label} ---- . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. + +// Unset the source code block attributes just to be safe. +:!configmap-name: +:!namespace-name: +:!component: +:!component-name: +:!label: \ No newline at end of file diff --git a/modules/monitoring-configuring-remote-write-storage.adoc b/modules/monitoring-configuring-remote-write-storage.adoc index d7fbff7e7135..4284c66e38dc 100644 --- a/modules/monitoring-configuring-remote-write-storage.adoc +++ b/modules/monitoring-configuring-remote-write-storage.adoc @@ -3,25 +3,40 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE + [id="configuring-remote-write-storage_{context}"] = Configuring remote write storage +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:component: prometheusK8s +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:component: prometheus +// end::UWM[] + You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics. .Prerequisites +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +// end::CPM[] +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are configuring core {product-title} monitoring components:* -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` `ConfigMap` object. -* *If you are configuring components that monitor user-defined projects:* -** You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. 
endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created. endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] * You have installed the OpenShift CLI (`oc`). * You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the link:https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage[Prometheus remote endpoints and storage documentation] for information about endpoints that are compatible with the remote write feature. + @@ -29,13 +44,7 @@ endif::openshift-dedicated,openshift-rosa[] ==== Red{nbsp}Hat only provides information for configuring remote write senders and does not offer guidance on configuring receiver endpoints. Customers are responsible for setting up their own endpoints that are remote-write compatible. Issues with endpoint receiver configurations are not included in Red{nbsp}Hat production support. ==== -* You have set up authentication credentials in a `Secret` object for the remote write endpoint. You must create the secret in the -ifndef::openshift-dedicated,openshift-rosa[] -same namespace as the Prometheus object for which you configure remote write: the `openshift-monitoring` namespace for default platform monitoring or the `openshift-user-workload-monitoring` namespace for user workload monitoring. -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] -`openshift-user-workload-monitoring` namespace. -endif::openshift-dedicated,openshift-rosa[] +* You have set up authentication credentials in a `Secret` object for the remote write endpoint. You must create the secret in the `{namespace-name}` namespace. + [WARNING] ==== @@ -44,137 +53,46 @@ To reduce security risks, use HTTPS and authentication to send metrics to an end .Procedure -. Edit the `ConfigMap` object: -ifndef::openshift-dedicated,openshift-rosa[] -** *To configure remote write for the Prometheus instance that monitors core {product-title} projects*: -.. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project: +. Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -.. Add a `remoteWrite:` section under `data/config.yaml/prometheusK8s`, as shown in the following example: +. Add a `remoteWrite:` section under `data/config.yaml/{component}`, as shown in the following example: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheusK8s: + {component}: remoteWrite: - - url: "https://remote-write-endpoint.example.com" #<1> - #<2> + - url: "https://remote-write-endpoint.example.com" # <1> + # <2> ---- <1> The URL of the remote write endpoint. <2> The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an `Authorization` request header, Basic authentication, OAuth 2.0, and TLS client. 
See _Supported remote write authentication settings_ for sample configurations of supported authentication methods. -.. Add write relabel configuration values after the authentication credentials: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring -data: - config.yaml: | - prometheusK8s: - remoteWrite: - - url: "https://remote-write-endpoint.example.com" - - writeRelabelConfigs: - - #<1> ----- -<1> Add configuration for metrics that you want to send to the remote endpoint. -+ -.Example of forwarding a single metric called `my_metric` -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring -data: - config.yaml: | - prometheusK8s: - remoteWrite: - - url: "https://remote-write-endpoint.example.com" - writeRelabelConfigs: - - sourceLabels: [__name__] - regex: 'my_metric' - action: keep ----- -+ -.Example of forwarding metrics called `my_metric_1` and `my_metric_2` in `my_namespace` namespace -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring -data: - config.yaml: | - prometheusK8s: - remoteWrite: - - url: "https://remote-write-endpoint.example.com" - writeRelabelConfigs: - - sourceLabels: [__name__,namespace] - regex: '(my_metric_1|my_metric_2);my_namespace' - action: keep ----- - -** *To configure remote write for the Prometheus instance that monitors user-defined projects*: -endif::openshift-dedicated,openshift-rosa[] -.. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project: +. Add write relabel configuration values after the authentication credentials: + -[source,terminal] ----- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config ----- - -.. Add a `remoteWrite:` section under `data/config.yaml/prometheus`, as shown in the following example: -+ -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheus: - remoteWrite: - - url: "https://remote-write-endpoint.example.com" #<1> - #<2> ----- -<1> The URL of the remote write endpoint. -<2> The authentication method and credentials for the endpoint. -Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP an `Authorization` request header, basic authentication, OAuth 2.0, and TLS client. -See _Supported remote write authentication settings_ below for sample configurations of supported authentication methods. - -.. Add write relabel configuration values after the authentication credentials: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - prometheus: + {component}: remoteWrite: - url: "https://remote-write-endpoint.example.com" @@ -184,16 +102,16 @@ data: <1> Add configuration for metrics that you want to send to the remote endpoint. 
+ .Example of forwarding a single metric called `my_metric` -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheus: + {component}: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: @@ -203,16 +121,16 @@ data: ---- + .Example of forwarding metrics called `my_metric_1` and `my_metric_2` in `my_namespace` namespace -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheus: + {component}: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: @@ -222,3 +140,8 @@ data: ---- . Save the file to apply the changes. The new configuration is applied automatically. + +// Unset the source code block attributes just to be safe. +:!configmap-name: +:!namespace-name: +:!component: \ No newline at end of file diff --git a/modules/monitoring-limiting-scrape-samples-in-user-defined-projects.adoc b/modules/monitoring-controlling-the-impact-of-unbound-attributes-in-user-defined-projects.adoc similarity index 100% rename from modules/monitoring-limiting-scrape-samples-in-user-defined-projects.adoc rename to modules/monitoring-controlling-the-impact-of-unbound-attributes-in-user-defined-projects.adoc diff --git a/modules/monitoring-creating-alerting-rules-for-user-defined-projects.adoc b/modules/monitoring-creating-alerting-rules-for-user-defined-projects.adoc index e69f2455ff39..84364167cb63 100644 --- a/modules/monitoring-creating-alerting-rules-for-user-defined-projects.adoc +++ b/modules/monitoring-creating-alerting-rules-for-user-defined-projects.adoc @@ -16,7 +16,7 @@ To help users understand the impact and cause of the alert, ensure that your ale .Prerequisites * You have enabled monitoring for user-defined projects. -* You are logged in as a user that has the `monitoring-rules-edit` cluster role for the project where you want to create an alerting rule. +* You are logged in as a cluster administrator or as a user that has the `monitoring-rules-edit` cluster role for the project where you want to create an alerting rule. * You have installed the OpenShift CLI (`oc`). .Procedure diff --git a/modules/monitoring-creating-cluster-id-labels-for-metrics.adoc b/modules/monitoring-creating-cluster-id-labels-for-metrics.adoc index b4e4aed3a113..1a180301b40a 100644 --- a/modules/monitoring-creating-cluster-id-labels-for-metrics.adoc +++ b/modules/monitoring-creating-cluster-id-labels-for-metrics.adoc @@ -3,98 +3,103 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE + [id="creating-cluster-id-labels-for-metrics_{context}"] = Creating cluster ID labels for metrics -ifndef::openshift-dedicated,openshift-rosa[] -You can create cluster ID labels for metrics for default platform monitoring and for user workload monitoring. - -For default platform monitoring, you add cluster ID labels for metrics in the `write_relabel` settings for remote write storage in the `cluster-monitoring-config` config map in the `openshift-monitoring` namespace. 
+// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:component: prometheusK8s +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:component: prometheus +// end::UWM[] -For user workload monitoring, you edit the settings in the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` namespace. +You can create cluster ID labels for metrics by adding the `write_relabel` settings for remote write storage in the `{configmap-name}` config map in the `{namespace-name}` namespace. +ifndef::openshift-dedicated,openshift-rosa[] +// tag::UWM[] [NOTE] ==== When Prometheus scrapes user workload targets that expose a `namespace` label, the system stores this label as `exported_namespace`. This behavior ensures that the final namespace label value is equal to the namespace of the target pod. You cannot override this default configuration by setting the value of the `honorLabels` field to `true` for `PodMonitor` or `ServiceMonitor` objects. ==== - -endif::openshift-dedicated,openshift-rosa[] - -ifdef::openshift-dedicated,openshift-rosa[] -You can create cluster ID labels for metrics by editing the settings in the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` namespace. +// end::UWM[] endif::openshift-dedicated,openshift-rosa[] .Prerequisites +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +// end::CPM[] +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are configuring default platform monitoring components:* -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` `ConfigMap` object. -* *If you are configuring components that monitor user-defined projects:* -** You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` ConfigMap object exists. This object is created by default when the cluster is created. endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] * You have installed the OpenShift CLI (`oc`). * You have configured remote write storage. .Procedure -. Edit the `ConfigMap` object: -ifndef::openshift-dedicated,openshift-rosa[] -** *To create cluster ID labels for core {product-title} metrics:* -.. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project: +. 
Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -.. In the `writeRelabelConfigs:` section under `data/config.yaml/prometheusK8s/remoteWrite`, add cluster ID relabel configuration values: +. In the `writeRelabelConfigs:` section under `data/config.yaml/{component}/remoteWrite`, add cluster ID relabel configuration values: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheusK8s: + {component}: remoteWrite: - url: "https://remote-write-endpoint.example.com" - writeRelabelConfigs: <1> - - <2> + writeRelabelConfigs: # <1> + - # <2> ---- <1> Add a list of write relabel configurations for metrics that you want to send to the remote endpoint. <2> Substitute the label configuration for the metrics sent to the remote write endpoint. + -The following sample shows how to forward a metric with the cluster ID label `cluster_id` in default platform monitoring: +The following sample shows how to forward a metric with the cluster ID label `cluster_id`: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheusK8s: + {component}: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: - - __tmp_openshift_cluster_id__ <1> - targetLabel: cluster_id <2> - action: replace <3> + - __tmp_openshift_cluster_id__ # <1> + targetLabel: cluster_id # <2> + action: replace # <3> ---- <1> The system initially applies a temporary cluster ID source label named `+++__tmp_openshift_cluster_id__+++`. This temporary label gets replaced by the cluster ID label name that you specify. <2> Specify the name of the cluster ID label for metrics sent to remote write storage. @@ -103,58 +108,9 @@ For the label name, do not use `+++__tmp_openshift_cluster_id__+++`. The final r <3> The `replace` write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified. -** *To create cluster ID labels for user-defined project metrics:* -endif::openshift-dedicated,openshift-rosa[] -.. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config ----- - -.. In the `writeRelabelConfigs:` section under `data/config.yaml/prometheus/remoteWrite`, add cluster ID relabel configuration values: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - prometheus: - remoteWrite: - - url: "https://remote-write-endpoint.example.com" - - writeRelabelConfigs: <1> - - <2> ----- -<1> Add a list of write relabel configurations for metrics that you want to send to the remote endpoint. -<2> Substitute the label configuration for the metrics sent to the remote write endpoint. 
-+ -The following sample shows how to forward a metric with the cluster ID label `cluster_id` in user-workload monitoring: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - prometheus: - remoteWrite: - - url: "https://remote-write-endpoint.example.com" - writeRelabelConfigs: - - sourceLabels: - - __tmp_openshift_cluster_id__ <1> - targetLabel: cluster_id <2> - action: replace <3> ----- -<1> The system initially applies a temporary cluster ID source label named `+++__tmp_openshift_cluster_id__+++`. This temporary label gets replaced by the cluster ID label name that you specify. -<2> Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use `+++__tmp_openshift_cluster_id__+++`. The final relabeling step removes labels that use this name. -<3> The `replace` write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified. - . Save the file to apply the changes. The new configuration is applied automatically. + +// Unset the source code block attributes just to be safe. +:!configmap-name: +:!namespace-name: +:!component: diff --git a/modules/monitoring-creating-cluster-monitoring-configmap.adoc b/modules/monitoring-creating-cluster-monitoring-configmap.adoc index 053fc78929bd..ace2e6bafb6b 100644 --- a/modules/monitoring-creating-cluster-monitoring-configmap.adoc +++ b/modules/monitoring-creating-cluster-monitoring-configmap.adoc @@ -6,7 +6,7 @@ [id="creating-cluster-monitoring-configmap_{context}"] = Creating a cluster monitoring config map -You can configure the core {product-title} monitoring components by creating the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project. The {cmo-first} then configures the core components of the monitoring stack. +You can configure the core {product-title} monitoring components by creating and updating the `cluster-monitoring-config` config map in the `openshift-monitoring` project. The {cmo-first} then configures the core components of the monitoring stack. .Prerequisites diff --git a/modules/monitoring-editing-silences.adoc b/modules/monitoring-editing-silences.adoc index be7588044bb5..c36592bf7f70 100644 --- a/modules/monitoring-editing-silences.adoc +++ b/modules/monitoring-editing-silences.adoc @@ -3,8 +3,18 @@ // * observability/monitoring/managing-alerts.adoc :_mod-docs-content-type: PROCEDURE -[id="editing-silences_{context}"] -= Editing silences + +// The ultimate solution DOES NOT NEED separate IDs and titles, it is just needed for now so that the tests will not break + +// tag::ADM[] +[id="editing-silences-adm_{context}"] += Editing silences from the Administrator perspective +// end::ADM[] + +// tag::DEV[] +[id="editing-silences-dev_{context}"] += Editing silences from the Developer perspective +// end::DEV[] You can edit a silence, which expires the existing silence and creates a new one with the changed configuration. 
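Before the silence procedures continue below, here is a reference sketch for the cluster monitoring config map module shown earlier in this patch. The `ConfigMap` object starts with an empty `config.yaml` field; the filename `cluster-monitoring-config.yaml` is a hypothetical choice, not mandated by the module:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
----

Assuming you are logged in with sufficient permissions, you could then create the object with a command such as:

[source,terminal]
----
$ oc apply -f cluster-monitoring-config.yaml
----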
@@ -18,14 +28,23 @@ ifdef::openshift-dedicated,openshift-rosa[] endif::openshift-dedicated,openshift-rosa[] * If you are a non-administrator user, you have access to the cluster as a user with the following user roles: ** The `cluster-monitoring-view` cluster role, which allows you to access Alertmanager. +// tag::ADM[] ** The `monitoring-alertmanager-edit` role, which permits you to create and silence alerts in the *Administrator* perspective in the web console. +// end::ADM[] +// tag::DEV[] ** The `monitoring-rules-edit` cluster role, which permits you to create and silence alerts in the *Developer* perspective in the web console. +// end::DEV[] .Procedure -To edit a silence in the *Administrator* perspective: +// tag::ADM[] +. From the *Administrator* perspective of the {product-title} web console, go to *Observe* -> *Alerting* -> *Silences*. +// end::ADM[] -. Go to *Observe* -> *Alerting* -> *Silences*. +// tag::DEV[] +. From the *Developer* perspective of the {product-title} web console, go to *Observe* and go to the *Silences* tab. +. Select the project that you want to edit silences for from the *Project:* list. +// end::DEV[] . For the silence you want to modify, click {kebab} and select *Edit silence*. + @@ -33,13 +52,7 @@ Alternatively, you can click *Actions* and select *Edit silence* on the *Silence . On the *Edit silence* page, make changes and click *Silence*. Doing so expires the existing silence and creates one with the updated configuration. -To edit a silence in the *Developer* perspective: -. Go to *Observe* -> ** -> *Silences*. -. For the silence you want to modify, click {kebab} and select *Edit silence*. -+ -Alternatively, you can click *Actions* and select *Edit silence* on the *Silence details* page for a silence. -. On the *Edit silence* page, make changes and click *Silence*. Doing so expires the existing silence and creates one with the updated configuration. diff --git a/modules/monitoring-enabling-alert-routing-for-user-defined-projects.adoc b/modules/monitoring-enabling-alert-routing-for-user-defined-projects.adoc new file mode 100644 index 000000000000..1948302436af --- /dev/null +++ b/modules/monitoring-enabling-alert-routing-for-user-defined-projects.adoc @@ -0,0 +1,22 @@ +// Module included in the following assemblies: +// +// * observability/monitoring/enabling-alert-routing-for-user-defined-projects.adoc + +:_mod-docs-content-type: CONCEPT +[id="enabling-alert-routing-for-user-defined-projects_{context}"] += Enabling alert routing for user-defined projects + +In {product-title}, an administrator can enable alert routing for user-defined projects. +This process consists of the following steps: + +ifndef::openshift-dedicated,openshift-rosa[] +* Enable alert routing for user-defined projects: +** Use the default platform Alertmanager instance. +** Use a separate Alertmanager instance only for user-defined projects. +endif::openshift-dedicated,openshift-rosa[] +ifdef::openshift-dedicated,openshift-rosa[] +* Enable alert routing for user-defined projects to use a separate Alertmanager instance. +endif::openshift-dedicated,openshift-rosa[] +* Grant users permission to configure alert routing for user-defined projects. + +After you complete these steps, developers and other users can configure custom alerts and alert routing for their user-defined projects. 
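As an illustrative sketch of the first option above (using the default platform Alertmanager instance), enabling alert routing for user-defined projects typically amounts to a single field in the `cluster-monitoring-config` config map. The `enableUserAlertmanagerConfig` field shown here is an assumption based on the {cmo-short} config map reference, not part of this module:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      enableUserAlertmanagerConfig: true # assumed field name; verify against the config map reference
----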
\ No newline at end of file diff --git a/modules/monitoring-example-remote-write-authentication-settings.adoc b/modules/monitoring-example-remote-write-authentication-settings.adoc index f8271a0428a0..baceb4e8b858 100644 --- a/modules/monitoring-example-remote-write-authentication-settings.adoc +++ b/modules/monitoring-example-remote-write-authentication-settings.adoc @@ -3,28 +3,29 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: REFERENCE + [id="example-remote-write-authentication-settings_{context}"] = Example remote write authentication settings -// Set attributes to distinguish between cluster monitoring examples and user workload monitoring examples. -ifndef::openshift-dedicated,openshift-rosa[] +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples +// tag::CPM[] :configmap-name: cluster-monitoring-config :namespace-name: openshift-monitoring -:prometheus-instance: prometheusK8s -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] +:component: prometheusK8s +// end::CPM[] +// tag::UWM[] :configmap-name: user-workload-monitoring-config :namespace-name: openshift-user-workload-monitoring -:prometheus-instance: prometheus -endif::openshift-dedicated,openshift-rosa[] +:component: prometheus +// end::UWM[] The following samples show different authentication settings you can use to connect to a remote write endpoint. Each sample also shows how to configure a corresponding `Secret` object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with -ifndef::openshift-dedicated,openshift-rosa[] +// tag::CPM[] default platform monitoring -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] -monitoring user-defined projects -endif::openshift-dedicated,openshift-rosa[] +// end::CPM[] +// tag::UWM[] +monitoring for user-defined projects +// end::UWM[] in the `{namespace-name}` namespace. [id="remote-write-sample-yaml-aws-sigv4_{context}"] @@ -58,7 +59,7 @@ metadata: namespace: {namespace-name} data: config.yaml: | - {prometheus-instance}: + {component}: remoteWrite: - url: "https://authorization.example.com/api/write" sigv4: @@ -111,7 +112,7 @@ metadata: namespace: {namespace-name} data: config.yaml: | - {prometheus-instance}: + {component}: remoteWrite: - url: "https://basicauth.example.com/api/write" basicAuth: @@ -156,7 +157,7 @@ metadata: data: config.yaml: | enableUserWorkload: true - {prometheus-instance}: + {component}: remoteWrite: - url: "https://authorization.example.com/api/write" authorization: @@ -200,7 +201,7 @@ metadata: namespace: {namespace-name} data: config.yaml: | - {prometheus-instance}: + {component}: remoteWrite: - url: "https://test.example.com/api/write" oauth2: @@ -258,7 +259,7 @@ metadata: namespace: {namespace-name} data: config.yaml: | - {prometheus-instance}: + {component}: remoteWrite: - url: "https://remote-write-endpoint.example.com" tlsConfig: @@ -280,5 +281,6 @@ data: <4> The key in the specified `Secret` object that contains the client key secret. // Unset the source code block attributes just to be safe. 
+:!configmap-name: :!namespace-name: -:!prometheus-instance: +:!component: diff --git a/modules/monitoring-example-remote-write-queue-configuration.adoc b/modules/monitoring-example-remote-write-queue-configuration.adoc index 24c8f1d252fc..dfa60e9c34aa 100644 --- a/modules/monitoring-example-remote-write-queue-configuration.adoc +++ b/modules/monitoring-example-remote-write-queue-configuration.adoc @@ -3,28 +3,29 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: REFERENCE + [id="example-remote-write-queue-configuration_{context}"] = Example remote write queue configuration -// Set attributes to distinguish between cluster monitoring examples and user workload monitoring examples. -ifndef::openshift-dedicated,openshift-rosa[] +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples +// tag::CPM[] :configmap-name: cluster-monitoring-config :namespace-name: openshift-monitoring -:prometheus-instance: prometheusK8s -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] +:component: prometheusK8s +// end::CPM[] +// tag::UWM[] :configmap-name: user-workload-monitoring-config :namespace-name: openshift-user-workload-monitoring -:prometheus-instance: prometheus -endif::openshift-dedicated,openshift-rosa[] +:component: prometheus +// end::UWM[] You can use the `queueConfig` object for remote write to tune the remote write queue parameters. The following example shows the queue parameters with their default values for -ifndef::openshift-dedicated,openshift-rosa[] +// tag::CPM[] default platform monitoring -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] +// end::CPM[] +// tag::UWM[] monitoring for user-defined projects -endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] in the `{namespace-name}` namespace. .Example configuration of remote write parameters with default values @@ -37,7 +38,7 @@ metadata: namespace: {namespace-name} data: config.yaml: | - {prometheus-instance}: + {component}: remoteWrite: - url: "https://remote-write-endpoint.example.com" @@ -63,6 +64,7 @@ data: <9> The samples that are older than the `sampleAgeLimit` limit are dropped from the queue. If the value is undefined or set to `0s`, the parameter is ignored. // Unset the source code block attributes just to be safe. +:!configmap-name: :!namespace-name: -:!prometheus-instance: +:!component: diff --git a/modules/monitoring-expiring-silences.adoc b/modules/monitoring-expiring-silences.adoc index 2451560e2aa1..30d6351b0119 100644 --- a/modules/monitoring-expiring-silences.adoc +++ b/modules/monitoring-expiring-silences.adoc @@ -3,8 +3,18 @@ // * observability/monitoring/managing-alerts.adoc :_mod-docs-content-type: PROCEDURE -[id="expiring-silences_{context}"] -= Expiring silences + +// The ultimate solution DOES NOT NEED separate IDs and titles, it is just needed for now so that the tests will not break + +// tag::ADM[] +[id="expiring-silences-adm_{context}"] += Expiring silences from the Administrator perspective +// end::ADM[] + +// tag::DEV[] +[id="expiring-silences-dev_{context}"] += Expiring silences from the Developer perspective +// end::DEV[] You can expire a single silence or multiple silences. Expiring a silence deactivates it permanently. 
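The web console flow described below is the documented path. For completeness, a silence can also be expired through the Alertmanager v2 API; the following sketch assumes that the `alertmanager-main` route is exposed in the `openshift-monitoring` namespace, that your token carries the required permissions, and that `<silence_id>` is a placeholder for a real silence ID:

[source,terminal]
----
$ TOKEN=$(oc whoami -t)   # token for the logged-in user
$ HOST=$(oc -n openshift-monitoring get route alertmanager-main -o jsonpath='{.spec.host}')
$ curl -H "Authorization: Bearer $TOKEN" -X DELETE "https://$HOST/api/v2/silence/<silence_id>"
----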
@@ -24,27 +34,29 @@ ifdef::openshift-dedicated,openshift-rosa[] endif::openshift-dedicated,openshift-rosa[] * If you are a non-administrator user, you have access to the cluster as a user with the following user roles: ** The `cluster-monitoring-view` cluster role, which allows you to access Alertmanager. +// tag::ADM[] ** The `monitoring-alertmanager-edit` role, which permits you to create and silence alerts in the *Administrator* perspective in the web console. +// end::ADM[] +// tag::DEV[] ** The `monitoring-rules-edit` cluster role, which permits you to create and silence alerts in the *Developer* perspective in the web console. +// end::DEV[] .Procedure -To expire a silence or silences in the *Administrator* perspective: - +// tag::ADM[] . Go to *Observe* -> *Alerting* -> *Silences*. +// end::ADM[] -. For the silence or silences you want to expire, select the checkbox in the corresponding row. - -. Click *Expire 1 silence* to expire a single selected silence or *Expire __ silences* to expire multiple selected silences, where __ is the number of silences you selected. -+ -Alternatively, to expire a single silence you can click *Actions* and select *Expire silence* on the *Silence details* page for a silence. +// tag::DEV[] +. From the *Developer* perspective of the {product-title} web console, go to *Observe* and go to the *Silences* tab. -To expire a silence in the *Developer* perspective: - -. Go to *Observe* -> ** -> *Silences*. +. Select the project that you want to expire a silence for from the *Project:* list. +// end::DEV[] . For the silence or silences you want to expire, select the checkbox in the corresponding row. . Click *Expire 1 silence* to expire a single selected silence or *Expire __ silences* to expire multiple selected silences, where __ is the number of silences you selected. + Alternatively, to expire a single silence you can click *Actions* and select *Expire silence* on the *Silence details* page for a silence. + + diff --git a/modules/monitoring-getting-detailed-information-about-a-target.adoc b/modules/monitoring-getting-detailed-information-about-a-target.adoc index e2a9ad956638..2c6a78da1e77 100644 --- a/modules/monitoring-getting-detailed-information-about-a-target.adoc +++ b/modules/monitoring-getting-detailed-information-about-a-target.adoc @@ -6,7 +6,7 @@ [id="getting-detailed-information-about-a-target_{context}"] = Getting detailed information about a metrics target -In the *Administrator* perspective in the {product-title} web console, you can use the *Metrics targets* page to view, search, and filter the endpoints that are currently targeted for scraping, which helps you to identify and troubleshoot problems. For example, you can view the current status of targeted endpoints to see when {product-title} Monitoring is not able to scrape metrics from a targeted component. +You can use the {product-title} web console to view, search, and filter the endpoints that are currently targeted for scraping, which helps you to identify and troubleshoot problems. For example, you can view the current status of targeted endpoints to see when {product-title} monitoring is not able to scrape metrics from a targeted component. ifndef::openshift-dedicated,openshift-rosa[] The *Metrics targets* page shows targets for default {product-title} projects and for user-defined projects. @@ -26,26 +26,24 @@ endif::openshift-dedicated,openshift-rosa[] .Procedure -. In the *Administrator* perspective, select *Observe* -> *Targets*. 
The *Metrics targets* page opens with a list of all service endpoint targets that are being scraped for metrics. +. In the *Administrator* perspective of the {product-title} web console, go to *Observe* -> *Targets*. The *Metrics targets* page opens with a list of all service endpoint targets that are being scraped for metrics. + --- This page shows details about targets for default {product-title} and user-defined projects. This page lists the following information for each target: -* Service endpoint URL being scraped -* ServiceMonitor component being monitored -* The **up** or **down** status of the target -* Namespace -* Last scrape time -* Duration of the last scrape --- +** Service endpoint URL being scraped +** The `ServiceMonitor` resource being monitored +** The **up** or **down** status of the target +** Namespace +** Last scrape time +** Duration of the last scrape -. Optional: The list of metrics targets can be long. To find a specific target, do any of the following: +. Optional: To find a specific target, perform any of the following actions: + |=== |Option |Description |Filter the targets by status and source. -a|Select filters in the *Filter* list. +a|Choose filters in the *Filter* list. The following filtering options are available: @@ -54,7 +52,7 @@ The following filtering options are available: ** **Down**. The target is currently down and not being scraped for metrics. * **Source** filters: -** **Platform**. Platform-level targets relate only to default Red Hat OpenShift Service on AWS projects. These projects provide core Red Hat OpenShift Service on AWS functionality. +** **Platform**. Platform-level targets relate only to default {product-rosa} projects. These projects provide core {product-rosa} functionality. ** **User**. User targets relate to user-defined projects. These projects are user-created and can be customized. |Search for a target by name or label. |Enter a search term in the **Text** or **Label** field next to the search box. @@ -62,13 +60,12 @@ The following filtering options are available: |Sort the targets. |Click one or more of the **Endpoint Status**, **Namespace**, **Last Scrape**, and **Scrape Duration** column headers. |=== -. Click the URL in the **Endpoint** column for a target to navigate to its **Target details** page. This page provides information about the target, including the following: -+ --- +. Click the URL in the **Endpoint** column for a target to go to its **Target details** page. 
This page provides information about the target, including the following information: + ** The endpoint URL being scraped for metrics ** The current *Up* or *Down* status of the target ** A link to the namespace -** A link to the ServiceMonitor details +** A link to the `ServiceMonitor` resource details ** Labels attached to the target ** The most recent time that the target was scraped for metrics --- + diff --git a/modules/monitoring-getting-information-about-alerts-silences-and-alerting-rules.adoc b/modules/monitoring-getting-information-about-alerts-silences-and-alerting-rules.adoc index ccd2d8c9e9c7..12470bac57e8 100644 --- a/modules/monitoring-getting-information-about-alerts-silences-and-alerting-rules.adoc +++ b/modules/monitoring-getting-information-about-alerts-silences-and-alerting-rules.adoc @@ -3,20 +3,33 @@ // * observability/monitoring/managing-alerts.adoc :_mod-docs-content-type: PROCEDURE -[id="getting-information-about-alerts-silences-and-alerting-rules_{context}"] -= Getting information about alerts, silences, and alerting rules + +// The ultimate solution DOES NOT NEED separate IDs and titles, it is just needed for now so that the tests will not break + +// tag::ADM[] +[id="getting-information-about-alerts-silences-and-alerting-rules-adm_{context}"] += Getting information about alerts, silences, and alerting rules from the Administrator perspective +// end::ADM[] + +// tag::DEV[] +[id="getting-information-about-alerts-silences-and-alerting-rules-dev_{context}"] += Getting information about alerts, silences, and alerting rules from the Developer perspective +// end::DEV[] + +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. .Prerequisites -* You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing alerts for. +* You have access to the cluster as a user with view permissions for the project that you are viewing alerts for. .Procedure -*To obtain information about alerts in the Administrator perspective*: +// tag::ADM[] +To obtain information about alerts: -. Open the {product-title} web console and go to the *Observe* -> *Alerting* -> *Alerts* page. +. From the *Administrator* perspective of the {product-title} web console, go to the *Observe* -> *Alerting* -> *Alerts* page. . Optional: Search for alerts by name by using the *Name* field in the search list. @@ -32,9 +45,9 @@ The Alerting UI provides detailed information about alerts and their governing a * A link to its governing alerting rule * Silences for the alert, if any exist -*To obtain information about silences in the Administrator perspective*: +To obtain information about silences: -. Go to the *Observe* -> *Alerting* -> *Silences* page. +. From the *Administrator* perspective of the {product-title} web console, go to the *Observe* -> *Alerting* -> *Silences* page. . Optional: Filter the silences by name using the *Search by name* field. @@ -50,9 +63,9 @@ The Alerting UI provides detailed information about alerts and their governing a * Silence state * Number and list of firing alerts -*To obtain information about alerting rules in the Administrator perspective*: +To obtain information about alerting rules: -. Go to the *Observe* -> *Alerting* -> *Alerting rules* page. +. 
From the *Administrator* perspective of the {product-title} web console, go to the *Observe* -> *Alerting* -> *Alerting rules* page. . Optional: Filter alerting rules by state, severity, and source by selecting filters in the *Filter* list. @@ -65,10 +78,12 @@ The Alerting UI provides detailed information about alerts and their governing a * The time for which the condition should be true for an alert to fire. * A graph for each alert governed by the alerting rule, showing the value with which the alert is firing. * A table of all alerts governed by the alerting rule. +// end::ADM[] -*To obtain information about alerts, silences, and alerting rules in the Developer perspective*: +// tag::DEV[] +To obtain information about alerts, silences, and alerting rules: -. Go to the *Observe* -> ** -> *Alerts* page. +. From the *Developer* perspective of the {product-title} web console, go to the *Observe* -> ** -> *Alerts* page. . View details for an alert, silence, or an alerting rule: @@ -88,3 +103,4 @@ The Alerting UI provides detailed information about alerts and their governing a ==== Only alerts, silences, and alerting rules relating to the selected project are displayed in the *Developer* perspective. ==== +// end::DEV[] \ No newline at end of file diff --git a/modules/monitoring-granting-users-permission-to-configure-alert-routing-for-user-defined-projects.adoc b/modules/monitoring-granting-users-permission-to-configure-alert-routing-for-user-defined-projects.adoc index c1d2e71a62c8..041ff50a7965 100644 --- a/modules/monitoring-granting-users-permission-to-configure-alert-routing-for-user-defined-projects.adoc +++ b/modules/monitoring-granting-users-permission-to-configure-alert-routing-for-user-defined-projects.adoc @@ -6,7 +6,6 @@ [id="granting-users-permission-to-configure-alert-routing-for-user-defined-projects_{context}"] = Granting users permission to configure alert routing for user-defined projects -[role="_abstract"] You can grant users permission to configure alert routing for user-defined projects. .Prerequisites diff --git a/modules/monitoring-granting-users-permission-to-monitor-user-defined-projects.adoc b/modules/monitoring-granting-users-permission-to-monitor-user-defined-projects.adoc index ec9f13cdec3c..674cc2ac979e 100644 --- a/modules/monitoring-granting-users-permission-to-monitor-user-defined-projects.adoc +++ b/modules/monitoring-granting-users-permission-to-monitor-user-defined-projects.adoc @@ -4,7 +4,7 @@ :_mod-docs-content-type: CONCEPT [id="granting-users-permission-to-monitor-user-defined-projects_{context}"] -= Granting users permission to monitor user-defined projects += Granting users permissions for monitoring for user-defined projects As a cluster administrator, you can monitor all core {product-title} and user-defined projects. 
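As a sketch of how such monitoring permissions are typically granted from the CLI, assuming the documented monitoring roles (for example, `monitoring-edit`) and substituting a real user and project for the hypothetical `<user>` and `<namespace>` placeholders:

[source,terminal]
----
$ oc policy add-role-to-user monitoring-edit <user> -n <namespace>
----

The `monitoring-view` and `monitoring-rules-edit` roles can be assigned the same way.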
diff --git a/modules/monitoring-intro-enabling-monitoring-for-user-defined-projects.adoc b/modules/monitoring-intro-enabling-monitoring-for-user-defined-projects.adoc new file mode 100644 index 000000000000..e0f8198cfeb2 --- /dev/null +++ b/modules/monitoring-intro-enabling-monitoring-for-user-defined-projects.adoc @@ -0,0 +1,11 @@ +// Module included in the following assemblies: +// +// * observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc + +:_mod-docs-content-type: CONCEPT +[id="intro-enabling-monitoring-for-user-defined-projects_{context}"] += Enabling monitoring for user-defined projects + +In {product-title}, you can enable monitoring for user-defined projects in addition to the default platform monitoring. You can monitor your own projects in {product-title} without the need for an additional monitoring solution. Using this feature centralizes monitoring for core platform components and user-defined projects. + +include::snippets/monitoring-custom-prometheus-note.adoc[] diff --git a/modules/monitoring-listing-alerting-rules-for-all-projects-in-a-single-view.adoc b/modules/monitoring-listing-alerting-rules-for-all-projects-in-a-single-view.adoc index 197d58772c18..56c306ae30d0 100644 --- a/modules/monitoring-listing-alerting-rules-for-all-projects-in-a-single-view.adoc +++ b/modules/monitoring-listing-alerting-rules-for-all-projects-in-a-single-view.adoc @@ -26,7 +26,7 @@ endif::[] .Procedure -. In the *Administrator* perspective, navigate to *Observe* -> *Alerting* -> *Alerting rules*. +. From the *Administrator* perspective of the {product-title} web console, go to *Observe* -> *Alerting* -> *Alerting rules*. . Select the *Platform* and *User* sources in the *Filter* drop-down menu. + diff --git a/modules/monitoring-maintenance-and-support.adoc b/modules/monitoring-maintenance-and-support.adoc index ac299b862adc..8e52994e74f2 100644 --- a/modules/monitoring-maintenance-and-support.adoc +++ b/modules/monitoring-maintenance-and-support.adoc @@ -5,7 +5,7 @@ [id="maintenance-and-support_{context}"] = Maintenance and support for monitoring -Not all configuration options for the monitoring stack are exposed. The only supported way of configuring {product-title} monitoring is by configuring the {cmo-first} using the options described in the "Config map reference for the {cmo-short}". *Do not use other configurations, as they are unsupported.* +Not all configuration options for the monitoring stack are exposed. The only supported way of configuring {product-title} monitoring is by configuring the {cmo-first} using the options described in the "Config map reference for the {cmo-short}". _Do not use other configurations, as they are unsupported._ Configuration paradigms might change across Prometheus releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in the "Config map reference for the {cmo-full}", your changes will disappear because the {cmo-short} automatically reconciles any differences and resets any unsupported changes back to the originally defined state by default and by design. 
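Returning to the module above on enabling monitoring for user-defined projects: the supported switch lives in the `cluster-monitoring-config` config map, as in the following minimal sketch using the standard `enableUserWorkload` option:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
----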
diff --git a/modules/monitoring-managing-alerting-rules-for-user-defined-projects.adoc b/modules/monitoring-managing-alerting-rules-for-user-defined-projects.adoc
index d144090983a6..a6125178453c 100644
--- a/modules/monitoring-managing-alerting-rules-for-user-defined-projects.adoc
+++ b/modules/monitoring-managing-alerting-rules-for-user-defined-projects.adoc
@@ -7,8 +7,6 @@
[id="managing-alerting-rules-for-user-defined-projects_{context}"]
= Managing alerting rules for user-defined projects

-{product-title} monitoring ships with a set of default alerting rules. As a cluster administrator, you can view the default alerting rules.
-
In {product-title}, you can view, edit, and remove alerting rules in user-defined projects.

ifdef::openshift-rosa,openshift-dedicated[]
diff --git a/modules/monitoring-managing-core-platform-alerting-rules.adoc b/modules/monitoring-managing-core-platform-alerting-rules.adoc
index 60b7fa166b92..2f65acfd0f20 100644
--- a/modules/monitoring-managing-core-platform-alerting-rules.adoc
+++ b/modules/monitoring-managing-core-platform-alerting-rules.adoc
@@ -6,7 +6,7 @@
[id="managing-core-platform-alerting-rules_{context}"]
= Managing alerting rules for core platform monitoring

-{product-title} {product-version} monitoring ships with a large set of default alerting rules for platform metrics.
+{product-title} monitoring includes a large set of default alerting rules for platform metrics.
As a cluster administrator, you can customize this set of rules in two ways:

* Modify the settings for existing platform alerting rules by adjusting thresholds or by adding and modifying labels.
diff --git a/modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc b/modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc
index 476111a1e203..028744396c55 100644
--- a/modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc
+++ b/modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc
@@ -3,52 +3,31 @@
// * observability/monitoring/configuring-the-monitoring-stack.adoc

:_mod-docs-content-type: PROCEDURE
-[id="modifying-retention-time-and-size-for-prometheus-metrics-data_{context}"]
-= Modifying the retention time and size for Prometheus metrics data
-
-By default, Prometheus retains metrics data for the following durations:
-
-ifndef::openshift-dedicated,openshift-rosa[]
-* *Core platform monitoring*: 15 days
-* *Monitoring for user-defined projects*: 24 hours
-endif::openshift-dedicated,openshift-rosa[]
-
-ifdef::openshift-dedicated,openshift-rosa[]
-* *Core platform monitoring*: 11 days
-* *Monitoring for user-defined projects*: 24 hours
-endif::openshift-dedicated,openshift-rosa[]
-
-You can modify the retention time for
-ifndef::openshift-dedicated,openshift-rosa[]
-Prometheus
-endif::openshift-dedicated,openshift-rosa[]
-ifdef::openshift-dedicated,openshift-rosa[]
-the Prometheus instance that monitors user-defined projects,
-endif::openshift-dedicated,openshift-rosa[]
-to change how soon the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. If the data reaches this size limit, Prometheus deletes the oldest data first until the disk space used is again below the limit.
-Note the following behaviors of these data retention settings:
-
-* The size-based retention policy applies to all data block directories in the `/prometheus` directory, including persistent blocks, write-ahead log (WAL) data, and m-mapped chunks.
-* Data in the `/wal` and `/head_chunks` directories counts toward the retention size limit, but Prometheus never purges data from these directories based on size- or time-based retention policies. -Thus, if you set a retention size limit lower than the maximum size set for the `/wal` and `/head_chunks` directories, you have configured the system not to retain any data blocks in the `/prometheus` data directories. -* The size-based retention policy is applied only when Prometheus cuts a new data block, which occurs every two hours after the WAL contains at least three hours of data. -ifndef::openshift-dedicated,openshift-rosa[] -* If you do not explicitly define values for either `retention` or `retentionSize`, retention time defaults to 15 days for core platform monitoring and 24 hours for user-defined project monitoring. Retention size is not set. -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] -* If you do not explicitly define values for either `retention` or `retentionSize`, retention time defaults to 11 days for core platform monitoring and 24 hours for user-defined project monitoring. Retention size is not set. -endif::openshift-dedicated,openshift-rosa[] -* If you define values for both `retention` and `retentionSize`, both values apply. -If any data blocks exceed the defined retention time or the defined size limit, Prometheus purges these data blocks. -* If you define a value for `retentionSize` and do not define `retention`, only the `retentionSize` value applies. -* If you do not define a value for `retentionSize` and only define a value for `retention`, only the `retention` value applies. -ifndef::openshift-dedicated,openshift-rosa[] -* If you set the `retentionSize` or `retention` value to `0`, the default settings apply. The default settings set retention time to 15 days for core platform monitoring and 24 hours for user-defined project monitoring. By default, retention size is not set. -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] -* If you set the `retentionSize` or `retention` value to `0`, the default settings apply. The default settings set retention time to 11 days for core platform monitoring and 24 hours for user-defined project monitoring. By default, retention size is not set. -endif::openshift-dedicated,openshift-rosa[] +[id="modifying-retention-time-and-size-for-prometheus-metrics-data_{context}"] += Modifying retention time and size for Prometheus metrics data + +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples + +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:component: prometheusK8s +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:component: prometheus +// end::UWM[] + +By default, Prometheus retains metrics data for +// tag::CPM[] +15 days for core platform monitoring. +// end::CPM[] +// tag::UWM[] +24 hours for monitoring for user-defined projects. +// end::UWM[] +You can modify the retention time for the Prometheus instance to change when the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. [NOTE] ==== @@ -57,110 +36,69 @@ Data compaction occurs every two hours. 
Therefore, a persistent volume (PV) migh .Prerequisites +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +// end::CPM[] +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are configuring core {product-title} monitoring components*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` `ConfigMap` object. -* *If you are configuring components that monitor user-defined projects*: -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created. endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] * You have installed the OpenShift CLI (`oc`). .Procedure -. Edit the `ConfigMap` object: -ifndef::openshift-dedicated,openshift-rosa[] -** *To modify the retention time and size for the Prometheus instance that monitors core {product-title} projects*: -.. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project: +. Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -.. Add the retention time and size configuration under `data/config.yaml`: +. Add the retention time and size configuration under `data/config.yaml`: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheusK8s: - retention: <1> - retentionSize: <2> + {component}: + retention: # <1> + retentionSize: # <2> ---- -+ <1> The retention time: a number directly followed by `ms` (milliseconds), `s` (seconds), `m` (minutes), `h` (hours), `d` (days), `w` (weeks), or `y` (years). You can also combine time values for specific times, such as `1h30m15s`. <2> The retention size: a number directly followed by `B` (bytes), `KB` (kilobytes), `MB` (megabytes), `GB` (gigabytes), `TB` (terabytes), `PB` (petabytes), and `EB` (exabytes). 
+ -The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance that monitors core {product-title} components: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring -data: - config.yaml: | - prometheusK8s: - retention: 24h - retentionSize: 10GB ----- - -** *To modify the retention time and size for the Prometheus instance that monitors user-defined projects*: -endif::openshift-dedicated,openshift-rosa[] -.. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config ----- - -.. Add the retention time and size configuration under `data/config.yaml`: +The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance: + -[source,yaml] +.Example of setting retention time for Prometheus +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheus: - retention: <1> - retentionSize: <2> ----- -+ -<1> The retention time: a number directly followed by `ms` (milliseconds), `s` (seconds), `m` (minutes), `h` (hours), `d` (days), `w` (weeks), or `y` (years). -You can also combine time values for specific times, such as `1h30m15s`. -<2> The retention size: a number directly followed by `B` (bytes), `KB` (kilobytes), `MB` (megabytes), `GB` (gigabytes), `TB` (terabytes), `PB` (petabytes), or `EB` (exabytes). -+ -The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance that monitors user-defined projects: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - prometheus: + {component}: retention: 24h retentionSize: 10GB ---- . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. + +// Unset the source code block attributes just to be safe. 
+:!configmap-name: +:!namespace-name: +:!component: \ No newline at end of file diff --git a/modules/monitoring-understanding-monitoring-stack-in-ha-clusters.adoc b/modules/monitoring-monitoring-stack-in-ha-clusters.adoc similarity index 90% rename from modules/monitoring-understanding-monitoring-stack-in-ha-clusters.adoc rename to modules/monitoring-monitoring-stack-in-ha-clusters.adoc index 778717ed9e73..a0246553c4e0 100644 --- a/modules/monitoring-understanding-monitoring-stack-in-ha-clusters.adoc +++ b/modules/monitoring-monitoring-stack-in-ha-clusters.adoc @@ -3,8 +3,8 @@ // * observability/monitoring/monitoring-overview.adoc :_mod-docs-content-type: CONCEPT -[id="understanding-monitoring-stack-in-ha-clusters_{context}"] -= Understanding the monitoring stack in high-availability clusters +[id="monitoring-stack-in-ha-clusters_{context}"] += The monitoring stack in high-availability clusters By default, in multi-node clusters, the following components run in high-availability (HA) mode to prevent data loss and service interruption: diff --git a/modules/monitoring-moving-monitoring-components-to-different-nodes.adoc b/modules/monitoring-moving-monitoring-components-to-different-nodes.adoc index fd865e3a7d38..d35c95cea39d 100644 --- a/modules/monitoring-moving-monitoring-components-to-different-nodes.adoc +++ b/modules/monitoring-moving-monitoring-components-to-different-nodes.adoc @@ -3,36 +3,57 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE + [id="moving-monitoring-components-to-different-nodes_{context}"] = Moving monitoring components to different nodes -ifndef::openshift-dedicated,openshift-rosa[] -To specify the nodes in your cluster on which monitoring stack components will run, configure the `nodeSelector` constraint in the component's `ConfigMap` object to match labels assigned to the nodes. +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples. +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +// end::UWM[] + +// tag::CPM[] +To specify the nodes in your cluster on which monitoring stack components will run, configure the `nodeSelector` constraint for the components in the `cluster-monitoring-config` config map to match labels assigned to the nodes. [NOTE] ==== You cannot add a node selector constraint directly to an existing scheduled pod. ==== -endif::openshift-dedicated,openshift-rosa[] +// end::CPM[] -ifdef::openshift-dedicated,openshift-rosa[] -You can move any of the components that monitor workloads for user-defined projects to specific worker nodes. It is not permitted to move components to control plane or infrastructure nodes. -endif::openshift-dedicated,openshift-rosa[] +// tag::UWM[] +You can move any of the components that monitor workloads for user-defined projects to specific worker nodes. + +[WARNING] +==== +It is not permitted to move components to control plane or infrastructure nodes. +==== +// end::UWM[] .Prerequisites + +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +* You have installed the OpenShift CLI (`oc`). 
+// end::CPM[] + +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are configuring core {product-title} monitoring components*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` `ConfigMap` object. -* *If you are configuring components that monitor user-defined projects*: -** You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created. endif::openshift-dedicated,openshift-rosa[] * You have installed the OpenShift CLI (`oc`). +// end::UWM[] .Procedure @@ -40,75 +61,38 @@ endif::openshift-dedicated,openshift-rosa[] + [source,terminal] ---- -$ oc label nodes ----- -. Edit the `ConfigMap` object: -ifndef::openshift-dedicated,openshift-rosa[] -** *To move a component that monitors core {product-title} projects*: - -.. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config ----- - -.. Specify the node labels for the `nodeSelector` constraint for the component under `data/config.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring -data: - config.yaml: | - : <1> - nodeSelector: - <2> - <3> - <...> +$ oc label nodes <1> ---- -<1> Substitute `` with the appropriate monitoring stack component name. -<2> Substitute `` with the label you added to the node. -<3> Optional: Specify additional labels. -If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels. -+ -[NOTE] -==== -If monitoring components remain in a `Pending` state after configuring the `nodeSelector` constraint, check the pod events for errors relating to taints and tolerations. -==== - -** *To move a component that monitors user-defined projects*: -endif::openshift-dedicated,openshift-rosa[] +<1> Replace `` with the name of the node where you want to add the label. +Replace `` with the name of the wanted label. -.. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project: +. Edit the `{configmap-name}` `ConfigMap` object in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -.. Specify the node labels for the `nodeSelector` constraint for the component under `data/config.yaml`: +. 
Specify the node labels for the `nodeSelector` constraint for the component under `data/config.yaml`: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - : <1> + # ... + : #<1> nodeSelector: - <2> - <3> - <...> + #<2> + #<3> + # ... ---- <1> Substitute `` with the appropriate monitoring stack component name. -<2> Substitute `` with the label you added to the node. +<2> Substitute `` with the label you added to the node. <3> Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels. + @@ -118,3 +102,7 @@ If monitoring components remain in a `Pending` state after configuring the `node ==== . Save the file to apply the changes. The components specified in the new configuration are automatically moved to the new nodes, and the pods affected by the new configuration are redeployed. + +// Unset the source code block attributes just to be safe. +:!configmap-name: +:!namespace-name: diff --git a/modules/monitoring-optimizing-alerting-for-user-defined-projects.adoc b/modules/monitoring-optimizing-alerting-for-user-defined-projects.adoc index 17b2b1ecbf60..cd2065ba065f 100644 --- a/modules/monitoring-optimizing-alerting-for-user-defined-projects.adoc +++ b/modules/monitoring-optimizing-alerting-for-user-defined-projects.adoc @@ -3,7 +3,7 @@ // * observability/monitoring/managing-alerts.adoc :_mod-docs-content-type: CONCEPT -[id="Optimizing-alerting-for-user-defined-projects_{context}"] +[id="optimizing-alerting-for-user-defined-projects_{context}"] = Optimizing alerting for user-defined projects You can optimize alerting for your own projects by considering the following recommendations when creating alerting rules: diff --git a/modules/monitoring-querying-metrics-for-all-projects-as-an-administrator.adoc b/modules/monitoring-querying-metrics-for-all-projects-with-mon-dashboard.adoc similarity index 86% rename from modules/monitoring-querying-metrics-for-all-projects-as-an-administrator.adoc rename to modules/monitoring-querying-metrics-for-all-projects-with-mon-dashboard.adoc index e1bdf4bf8d53..3ac1e7f0de36 100644 --- a/modules/monitoring-querying-metrics-for-all-projects-as-an-administrator.adoc +++ b/modules/monitoring-querying-metrics-for-all-projects-with-mon-dashboard.adoc @@ -4,8 +4,12 @@ // * virt/support/virt-prometheus-queries.adoc :_mod-docs-content-type: PROCEDURE -[id="querying-metrics-for-all-projects-as-an-administrator_{context}"] -= Querying metrics for all projects as a cluster administrator +[id="querying-metrics-for-all-projects-with-mon-dashboard_{context}"] += Querying metrics for all projects with the {product-title} web console + +// The following section will be included in the administrator section, hence there is no need to include "administrator" in the title + +You can use the {product-title} metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring. 
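For example, running a query along the following lines in the query browser plots CPU usage per namespace across the cluster. This is an illustrative sketch only: the metric name `container_cpu_usage_seconds_total` is a standard cAdvisor series, not something specific to this procedure:

[source,promql]
----
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace)
----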
As a ifndef::openshift-dedicated,openshift-rosa[] @@ -69,7 +73,7 @@ Use the keyboard arrows to select one of these suggested items and then press En * By default, the query table shows an expanded view that lists every metric and its current value. Click the *˅* down arrowhead to minimize the expanded view for a query. ==== -. Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL. +. Optional: Save the page URL to use this set of queries again in the future. . Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions: + diff --git a/modules/monitoring-querying-metrics-for-user-defined-projects-as-a-developer.adoc b/modules/monitoring-querying-metrics-for-user-defined-projects-with-mon-dashboard.adoc similarity index 84% rename from modules/monitoring-querying-metrics-for-user-defined-projects-as-a-developer.adoc rename to modules/monitoring-querying-metrics-for-user-defined-projects-with-mon-dashboard.adoc index 152a39e90496..dd2c7cbd5d4e 100644 --- a/modules/monitoring-querying-metrics-for-user-defined-projects-as-a-developer.adoc +++ b/modules/monitoring-querying-metrics-for-user-defined-projects-with-mon-dashboard.adoc @@ -4,10 +4,12 @@ // * virt/support/virt-prometheus-queries.adoc :_mod-docs-content-type: PROCEDURE -[id="querying-metrics-for-user-defined-projects-as-a-developer_{context}"] -= Querying metrics for user-defined projects as a developer +[id="querying-metrics-for-user-defined-projects-with-mon-dashboard_{context}"] += Querying metrics for user-defined projects with the {product-title} web console -You can access metrics for a user-defined project as a developer or as a user with view permissions for the project. +You can use the {product-title} metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about any user-defined workloads that you are monitoring. + +As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project. The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet. These queries are restricted to the selected project. You can also run custom Prometheus Query Language (PromQL) queries for the project. @@ -30,7 +32,7 @@ endif::openshift-dedicated,openshift-rosa[] . In the *Developer* perspective of the {product-title} web console, click *Observe* and go to the *Metrics* tab. -. Select the project that you want to view metrics for in the *Project:* list. +. Select the project that you want to view metrics for from the *Project:* list. . To add one or more queries, perform any of the following actions: + @@ -62,7 +64,7 @@ Use the keyboard arrows to select one of these suggested items and then press En * By default, the query table shows an expanded view that lists every metric and its current value. Click the *˅* down arrowhead to minimize the expanded view for a query. ==== -. Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL. +. Optional: Save the page URL to use this set of queries again in the future. . Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. 
Select which metrics are shown by performing any of the following actions: + diff --git a/modules/monitoring-removing-alerting-rules-for-user-defined-projects.adoc b/modules/monitoring-removing-alerting-rules-for-user-defined-projects.adoc index fd21d052dbbb..99f5993765fa 100644 --- a/modules/monitoring-removing-alerting-rules-for-user-defined-projects.adoc +++ b/modules/monitoring-removing-alerting-rules-for-user-defined-projects.adoc @@ -11,7 +11,7 @@ You can remove alerting rules for user-defined projects. .Prerequisites * You have enabled monitoring for user-defined projects. -* You are logged in as a user that has the `monitoring-rules-edit` cluster role for the project where you want to create an alerting rule. +* You are logged in as a cluster administrator or as a user that has the `monitoring-rules-edit` cluster role for the project where you want to create an alerting rule. * You have installed the OpenShift CLI (`oc`). .Procedure diff --git a/modules/monitoring-resizing-a-persistent-volume.adoc b/modules/monitoring-resizing-a-persistent-volume.adoc index 229eefa28aaf..6e2d60be549e 100644 --- a/modules/monitoring-resizing-a-persistent-volume.adoc +++ b/modules/monitoring-resizing-a-persistent-volume.adoc @@ -3,10 +3,30 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE + [id="resizing-a-persistent-volume_{context}"] = Resizing a persistent volume -You can resize a persistent volume (PV) for monitoring components, such as Prometheus, Thanos Ruler, or Alertmanager. You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured. +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples + +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:component: prometheusK8s +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:component: thanosRuler +// end::UWM[] + +// tag::CPM[] +You can resize a persistent volume (PV) for monitoring components, such as Prometheus or Alertmanager. +// end::CPM[] +// tag::UWM[] +You can resize a persistent volume (PV) for the instances of Prometheus, Thanos Ruler, and Alertmanager. +// end::UWM[] +You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured. [IMPORTANT] ==== @@ -14,128 +34,87 @@ You can only expand the size of the PVC. Shrinking the storage size is not possi ==== .Prerequisites - +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +* You have configured at least one PVC for core {product-title} monitoring components. +// end::CPM[] +// tag::UWM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. +* You have configured at least one PVC for components that monitor user-defined projects. +// end::UWM[] * You have installed the OpenShift CLI (`oc`). -* *If you are configuring core {product-title} monitoring components*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. 
-** You have created the `cluster-monitoring-config` `ConfigMap` object. -** You have configured at least one PVC for core {product-title} monitoring components. -* *If you are configuring components that monitor user-defined projects*: -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. -** You have configured at least one PVC for components that monitor user-defined projects. .Procedure . Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in _Expanding persistent volumes_. -. Edit the `ConfigMap` object: -** *If you are configuring core {product-title} monitoring components*: -.. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project: +. Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -.. Add a new storage size for the PVC configuration for the component under `data/config.yaml`: +. Add a new storage size for the PVC configuration for the component under `data/config.yaml`: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - : #<1> + : # <1> volumeClaimTemplate: spec: resources: requests: - storage: #<2> + storage: # <2> ---- <1> The component for which you want to change the storage size. <2> Specify the new size for the storage volume. It must be greater than the previous value. + -The following example sets the new PVC request to 100 gigabytes for the Prometheus instance that monitors core {product-title} components: +The following example sets the new PVC request to +// tag::CPM[] +100 gigabytes for the Prometheus instance: +// end::CPM[] +// tag::UWM[] +20 gigabytes for Thanos Ruler: +// end::UWM[] + -[source,yaml] +.Example storage configuration for `{component}` +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheusK8s: + {component}: volumeClaimTemplate: spec: resources: requests: +# tag::CPM[] storage: 100Gi ----- - -** *If you are configuring components that monitor user-defined projects*: -+ -[NOTE] -==== -You can resize the volumes for the Thanos Ruler and for instances of Alertmanager and Prometheus that monitor user-defined projects. -==== -+ -.. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config ----- - -.. 
Update the PVC configuration for the monitoring component under `data/config.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - : #<1> - volumeClaimTemplate: - spec: - resources: - requests: - storage: #<2> ----- -<1> The component for which you want to change the storage size. -<2> Specify the new size for the storage volume. It must be greater than the previous value. -+ -The following example sets the new PVC request to 20 gigabytes for Thanos Ruler: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - thanosRuler: - volumeClaimTemplate: - spec: - resources: - requests: +# end::CPM[] +# tag::UWM[] storage: 20Gi +# end::UWM[] ---- +// tag::UWM[] + [NOTE] ==== Storage requirements for the `thanosRuler` component depend on the number of rules that are evaluated and how many samples each rule generates. ==== +// end::UWM[] . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. + @@ -143,3 +122,8 @@ Storage requirements for the `thanosRuler` component depend on the number of rul ==== When you update the config map with a new storage size, the affected `StatefulSet` object is recreated, resulting in a temporary service outage. ==== + +// Unset the source code block attributes just to be safe. +:!configmap-name: +:!namespace-name: +:!component: diff --git a/modules/monitoring-resources-reference-for-the-cluster-monitoring-operator.adoc b/modules/monitoring-resources-reference-for-the-cluster-monitoring-operator.adoc index 09b354453575..072b187e85d1 100644 --- a/modules/monitoring-resources-reference-for-the-cluster-monitoring-operator.adoc +++ b/modules/monitoring-resources-reference-for-the-cluster-monitoring-operator.adoc @@ -9,8 +9,8 @@ This document describes the following resources deployed and managed by the Cluster Monitoring Operator (CMO): -* link:#cmo-routes-resources[Routes] -* link:#cmo-services-resources[Services] +* link:#cmo-routes-resources_{context}[Routes] +* link:#cmo-services-resources_{context}[Services] Use this information when you want to configure API endpoint connections to retrieve, send, or query metrics data. @@ -23,7 +23,7 @@ To avoid these issues, follow these recommendations: * Avoid querying endpoints frequently. Limit queries to a maximum of one every 30 seconds. * Do not try to retrieve all metrics data via the `/federate` endpoint. Query it only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation. ==== -[id="cmo-routes-resources"] +[id="cmo-routes-resources_{context}"] == CMO routes resources === openshift-monitoring/alertmanager-main @@ -50,7 +50,7 @@ Expose the `/api` endpoints of the `thanos-querier` service via a router. Expose the `/api` endpoints of the `thanos-ruler` service via a router. 
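As an illustration of how these routes can be used, the following sketch queries the `thanos-querier` route from the command line. It assumes that your user has the `cluster-monitoring-view` cluster role, and it discovers the route host at runtime rather than hard-coding it. Per the recommendations above, avoid running such queries more than once every 30 seconds:

[source,terminal]
----
$ TOKEN=$(oc whoami -t)
$ HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
$ curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query?query=up"
----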
-[id="cmo-services-resources"] +[id="cmo-services-resources_{context}"] == CMO services resources === openshift-monitoring/prometheus-operator-admission-webhook diff --git a/modules/monitoring-retention-time-and-size-for-prometheus-metrics-data.adoc b/modules/monitoring-retention-time-and-size-for-prometheus-metrics-data.adoc new file mode 100644 index 000000000000..0bb05c180673 --- /dev/null +++ b/modules/monitoring-retention-time-and-size-for-prometheus-metrics-data.adoc @@ -0,0 +1,33 @@ +// Module included in the following assemblies: +// +// * observability/monitoring/configuring-the-monitoring-stack.adoc + +:_mod-docs-content-type: CONCEPT + +[id="retention-time-and-size-for-prometheus-metrics-data_{context}"] += Retention time and size for Prometheus metrics + +By default, Prometheus retains metrics data for the following durations: + +* *Core platform monitoring*: 15 days +* *Monitoring for user-defined projects*: 24 hours + +You can modify the retention time for the Prometheus instance to change how soon the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. If the data reaches this size limit, Prometheus deletes the oldest data first until the disk space used is again below the limit. + +Note the following behaviors of these data retention settings: + +* The size-based retention policy applies to all data block directories in the `/prometheus` directory, including persistent blocks, write-ahead log (WAL) data, and m-mapped chunks. +* Data in the `/wal` and `/head_chunks` directories counts toward the retention size limit, but Prometheus never purges data from these directories based on size- or time-based retention policies. +Thus, if you set a retention size limit lower than the maximum size set for the `/wal` and `/head_chunks` directories, you have configured the system not to retain any data blocks in the `/prometheus` data directories. +* The size-based retention policy is applied only when Prometheus cuts a new data block, which occurs every two hours after the WAL contains at least three hours of data. +* If you do not explicitly define values for either `retention` or `retentionSize`, retention time defaults to 15 days for core platform monitoring and 24 hours for user-defined project monitoring. Retention size is not set. +* If you define values for both `retention` and `retentionSize`, both values apply. +If any data blocks exceed the defined retention time or the defined size limit, Prometheus purges these data blocks. +* If you define a value for `retentionSize` and do not define `retention`, only the `retentionSize` value applies. +* If you do not define a value for `retentionSize` and only define a value for `retention`, only the `retention` value applies. +* If you set the `retentionSize` or `retention` value to `0`, the default settings apply. The default settings set retention time to 15 days for core platform monitoring and 24 hours for user-defined project monitoring. By default, retention size is not set. + +[NOTE] +==== +Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the `retentionSize` limit. In such cases, the `KubePersistentVolumeFillingUp` alert fires until the space on a PV is lower than the `retentionSize` limit. 
+==== diff --git a/modules/monitoring-reviewing-monitoring-dashboards-admin.adoc b/modules/monitoring-reviewing-monitoring-dashboards-admin.adoc index 13a9abb419cf..e28ae308c046 100644 --- a/modules/monitoring-reviewing-monitoring-dashboards-admin.adoc +++ b/modules/monitoring-reviewing-monitoring-dashboards-admin.adoc @@ -24,10 +24,10 @@ endif::openshift-dedicated,openshift-rosa[] . Choose a dashboard in the *Dashboard* list. Some dashboards, such as *etcd* and *Prometheus* dashboards, produce additional sub-menus when selected. . Optional: Select a time range for the graphs in the *Time Range* list. -+ + ** Select a pre-defined time period. -+ -** Set a custom time range by selecting *Custom time range* in the *Time Range* list. + +** Set a custom time range by clicking *Custom time range* in the *Time Range* list. + .. Input or select the *From* and *To* dates and times. + @@ -35,4 +35,4 @@ endif::openshift-dedicated,openshift-rosa[] . Optional: Select a *Refresh Interval*. -. Hover over each of the graphs within a dashboard to display detailed information about specific items. +. Hover over each of the graphs within a dashboard to display detailed information about specific items. \ No newline at end of file diff --git a/modules/monitoring-reviewing-monitoring-dashboards-developer.adoc b/modules/monitoring-reviewing-monitoring-dashboards-developer.adoc index 044d50840a98..8e2ee61481cc 100644 --- a/modules/monitoring-reviewing-monitoring-dashboards-developer.adoc +++ b/modules/monitoring-reviewing-monitoring-dashboards-developer.adoc @@ -6,7 +6,12 @@ [id="reviewing-monitoring-dashboards-developer_{context}"] = Reviewing monitoring dashboards as a developer -In the *Developer* perspective, you can view dashboards relating to a selected project. You must have access to monitor a project to view dashboard information for it. +In the *Developer* perspective, you can view dashboards relating to a selected project. + +[NOTE] +==== +In the *Developer* perspective, you can view dashboards for only one project at a time. +==== .Prerequisites @@ -27,10 +32,10 @@ All dashboards produce additional sub-menus when selected, except *Kubernetes / ==== + . Optional: Select a time range for the graphs in the *Time Range* list. -+ + ** Select a pre-defined time period. -+ -** Set a custom time range by selecting *Custom time range* in the *Time Range* list. + +** Set a custom time range by clicking *Custom time range* in the *Time Range* list. + .. Input or select the *From* and *To* dates and times. + diff --git a/modules/monitoring-searching-alerts-silences-and-alerting-rules.adoc b/modules/monitoring-searching-alerts-silences-and-alerting-rules.adoc index 214aa7534eb4..1ccd82197830 100644 --- a/modules/monitoring-searching-alerts-silences-and-alerting-rules.adoc +++ b/modules/monitoring-searching-alerts-silences-and-alerting-rules.adoc @@ -8,7 +8,7 @@ You can filter the alerts, silences, and alerting rules that are displayed in the Alerting UI. This section provides a description of each of the available filtering options. -[discrete] +[id="understanding-alert-filters_{context}"] == Understanding alert filters In the *Administrator* perspective, the *Alerts* page in the Alerting UI provides details about alerts relating to default {product-title} and user-defined projects. The page includes a summary of severity, state, and source for each alert. The time at which an alert went into its current state is also shown. @@ -31,7 +31,7 @@ You can filter by alert state, severity, and source. 
By default, only *Platform* ** *Platform*. Platform-level alerts relate only to default {product-title} projects. These projects provide core {product-title} functionality. ** *User*. User alerts relate to user-defined projects. These alerts are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads. -[discrete] +[id="understanding-silence-filters_{context}"] == Understanding silence filters In the *Administrator* perspective, the *Silences* page in the Alerting UI provides details about silences applied to alerts in default {product-title} and user-defined projects. The page includes a summary of the state of each silence and the time at which a silence ends. @@ -43,7 +43,7 @@ You can filter by silence state. By default, only *Active* and *Pending* silence ** *Pending*. The silence has been scheduled and it is not yet active. ** *Expired*. The silence has expired and notifications will be sent if the conditions for an alert are true. -[discrete] +[id="understanding-alerting-rule-filters_{context}"] == Understanding alerting rule filters In the *Administrator* perspective, the *Alerting rules* page in the Alerting UI provides details about alerting rules relating to default {product-title} and user-defined projects. The page includes a summary of the state, severity, and source for each alerting rule. @@ -67,7 +67,7 @@ You can filter alerting rules by alert state, severity, and source. By default, ** *Platform*. Platform-level alerting rules relate only to default {product-title} projects. These projects provide core {product-title} functionality. ** *User*. User-defined workload alerting rules relate to user-defined projects. These alerting rules are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads. -[discrete] +[id="searching-filtering-alerts-dev-perspective_{context}"] == Searching and filtering alerts, silences, and alerting rules in the Developer perspective In the *Developer* perspective, the *Alerts* page in the Alerting UI provides a combined view of alerts and silences relating to the selected project. A link to the governing alerting rule is provided for each displayed alert. diff --git a/modules/monitoring-setting-log-levels-for-monitoring-components.adoc b/modules/monitoring-setting-log-levels-for-monitoring-components.adoc index d21b6635e79e..d4c873db7848 100644 --- a/modules/monitoring-setting-log-levels-for-monitoring-components.adoc +++ b/modules/monitoring-setting-log-levels-for-monitoring-components.adoc @@ -3,22 +3,33 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE + [id="setting-log-levels-for-monitoring-components_{context}"] = Setting log levels for monitoring components -You can configure the log level for -ifndef::openshift-dedicated,openshift-rosa[] -Alertmanager, Prometheus Operator, Prometheus, Thanos Querier, and Thanos Ruler. -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] -Alertmanager, Prometheus Operator, Prometheus, and Thanos Ruler. 
-endif::openshift-dedicated,openshift-rosa[] +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples -The following log levels can be applied to the relevant component in the -ifndef::openshift-dedicated,openshift-rosa[] -`cluster-monitoring-config` and -endif::openshift-dedicated,openshift-rosa[] -`user-workload-monitoring-config` `ConfigMap` objects: +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:prometheus: prometheusK8s +:alertmanager: alertmanagerMain +:thanos: thanosQuerier +:component-name: Thanos Querier +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:prometheus: prometheus +:alertmanager: alertmanager +:thanos: thanosRuler +:component-name: Thanos Ruler +// end::UWM[] + +You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, and {component-name}. + + +The following log levels can be applied to the relevant component in the `{configmap-name}` `ConfigMap` object: * `debug`. Log debug, informational, warning, and error messages. * `info`. Log informational, warning, and error messages. @@ -29,103 +40,84 @@ The default log level is `info`. .Prerequisites +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +// end::CPM[] +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are setting a log level for Alertmanager, Prometheus Operator, Prometheus, or Thanos Querier in the `openshift-monitoring` project*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` `ConfigMap` object. -* *If you are setting a log level for Prometheus Operator, Prometheus, or Thanos Ruler in the `openshift-user-workload-monitoring` project*: -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. endif::openshift-dedicated,openshift-rosa[] + ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created. endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] * You have installed the OpenShift CLI (`oc`). .Procedure -. Edit the `ConfigMap` object: -ifndef::openshift-dedicated,openshift-rosa[] -** *To set a log level for a component in the `openshift-monitoring` project*: -.. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project: +. Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -.. 
Add `logLevel: ` for a component under `data/config.yaml`: +. Add `logLevel: ` for a component under `data/config.yaml`: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - : <1> - logLevel: <2> + : # <1> + logLevel: # <2> ---- <1> The monitoring stack component for which you are setting a log level. -For default platform monitoring, available component values are `prometheusK8s`, `alertmanagerMain`, `prometheusOperator`, and `thanosQuerier`. +Available component values are `{prometheus}`, `{alertmanager}`, `prometheusOperator`, and `{thanos}`. <2> The log level to set for the component. The available values are `error`, `warn`, `info`, and `debug`. The default value is `info`. -** *To set a log level for a component in the `openshift-user-workload-monitoring` project*: -endif::openshift-dedicated,openshift-rosa[] - -.. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config ----- - -.. Add `logLevel: ` for a component under `data/config.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - : <1> - logLevel: <2> ----- -<1> The monitoring stack component for which you are setting a log level. -For user workload monitoring, available component values are `alertmanager`, `prometheus`, `prometheusOperator`, and `thanosRuler`. -<2> The log level to apply to the component. The available values are `error`, `warn`, `info`, and `debug`. The default value is `info`. - . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. -. Confirm that the log-level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level in the `prometheus-operator` deployment in the `openshift-user-workload-monitoring` project: +. Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. +The following example checks the log level for the `prometheus-operator` deployment: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" +$ oc -n {namespace-name} get deploy prometheus-operator -o yaml | grep "log-level" ---- + .Example output -[source,terminal] +[source,terminal,subs="attributes+"] ---- - --log-level=debug ---- -. Check that the pods for the component are running. The following example lists the status of pods in the `openshift-user-workload-monitoring` project: +. Check that the pods for the component are running. The following example lists the status of pods: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-user-workload-monitoring get pods +$ oc -n {namespace-name} get pods ---- + [NOTE] ==== If an unrecognized `logLevel` value is included in the `ConfigMap` object, the pods for the component might not restart successfully. ==== + +// Unset the source code block attributes just to be safe. 
+:!configmap-name: +:!namespace-name: +:!prometheus: +:!alertmanager: +:!thanos: +:!component-name: diff --git a/modules/monitoring-setting-query-log-file-for-prometheus.adoc b/modules/monitoring-setting-query-log-file-for-prometheus.adoc index 5aef6e5a24bc..5171d28100d9 100644 --- a/modules/monitoring-setting-query-log-file-for-prometheus.adoc +++ b/modules/monitoring-setting-query-log-file-for-prometheus.adoc @@ -3,14 +3,26 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE + [id="setting-query-log-file-for-prometheus_{context}"] = Enabling the query log file for Prometheus -[role="_abstract"] +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples + +// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:component: prometheusK8s +:pod: prometheus-k8s-0 +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:component: prometheus +:pod: prometheus-user-workload-0 +// end::UWM[] + You can configure Prometheus to write all queries that have been run by the engine to a log file. -ifndef::openshift-dedicated,openshift-rosa[] -You can do so for default platform monitoring and for user-defined workload monitoring. -endif::openshift-dedicated,openshift-rosa[] [IMPORTANT] ==== @@ -19,110 +31,98 @@ Because log rotation is not supported, only enable this feature temporarily when .Prerequisites +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `cluster-monitoring-config` `ConfigMap` object. +// end::CPM[] +// tag::UWM[] ifndef::openshift-dedicated,openshift-rosa[] -* *If you are enabling the query log file feature for Prometheus in the `openshift-monitoring` project*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created the `cluster-monitoring-config` `ConfigMap` object. -* *If you are enabling the query log file feature for Prometheus in the `openshift-user-workload-monitoring` project*: -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. -** A cluster administrator has enabled monitoring for user-defined projects. +* You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +* A cluster administrator has enabled monitoring for user-defined projects. endif::openshift-dedicated,openshift-rosa[] + ifdef::openshift-dedicated,openshift-rosa[] * You have access to the cluster as a user with the `dedicated-admin` role. * The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created. endif::openshift-dedicated,openshift-rosa[] +// end::UWM[] * You have installed the OpenShift CLI (`oc`). .Procedure -ifndef::openshift-dedicated,openshift-rosa[] -** *To set the query log file for Prometheus in the `openshift-monitoring` project*: -. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project: +. 
Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- + +. Add the `queryLogFile` parameter for Prometheus under `data/config.yaml`: + -. Add `queryLogFile: ` for `prometheusK8s` under `data/config.yaml`: -+ -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - prometheusK8s: - queryLogFile: <1> + {component}: + queryLogFile: # <1> ---- -<1> The full path to the file in which queries will be logged. -+ +<1> Add the full path to the file in which queries will be logged. + . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. -+ -. Verify that the pods for the component are running. The following sample command lists the status of pods in the `openshift-monitoring` project: -+ -[source,terminal] ----- -$ oc -n openshift-monitoring get pods ----- -+ -. Read the query log: -+ -[source,terminal] ----- -$ oc -n openshift-monitoring exec prometheus-k8s-0 -- cat ----- -+ -[IMPORTANT] -==== -Revert the setting in the config map after you have examined the logged query information. -==== -** *To set the query log file for Prometheus in the `openshift-user-workload-monitoring` project*: -endif::openshift-dedicated,openshift-rosa[] -. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project: +. Verify that the pods for the component are running. The following sample command lists the status of pods: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config +$ oc -n {namespace-name} get pods ---- + -. Add `queryLogFile: ` for `prometheus` under `data/config.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: user-workload-monitoring-config - namespace: openshift-user-workload-monitoring -data: - config.yaml: | - prometheus: - queryLogFile: <1> +// tag::CPM[] +.Example output +[source,terminal] ---- -<1> The full path to the file in which queries will be logged. -+ -. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. -+ -. Verify that the pods for the component are running. The following example command lists the status of pods in the `openshift-user-workload-monitoring` project: -+ +... +prometheus-operator-567c9bc75c-96wkj 2/2 Running 0 62m +prometheus-k8s-0 6/6 Running 1 57m +prometheus-k8s-1 6/6 Running 1 57m +thanos-querier-56c76d7df4-2xkpc 6/6 Running 0 57m +thanos-querier-56c76d7df4-j5p29 6/6 Running 0 57m +... +---- +// end::CPM[] +// tag::UWM[] +.Example output [source,terminal] ---- -$ oc -n openshift-user-workload-monitoring get pods +... +prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m +prometheus-user-workload-0 5/5 Running 1 132m +prometheus-user-workload-1 5/5 Running 1 132m +thanos-ruler-user-workload-0 3/3 Running 0 132m +thanos-ruler-user-workload-1 3/3 Running 0 132m +... ---- -+ +// end::UWM[] + . 
Read the query log: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat +$ oc -n {namespace-name} exec {pod} -- cat ---- + [IMPORTANT] ==== Revert the setting in the config map after you have examined the logged query information. ==== + +// Unset the source code block attributes just to be safe. +:!configmap-name: +:!namespace-name: +:!component: +:!pod: diff --git a/modules/monitoring-silencing-alerts.adoc b/modules/monitoring-silencing-alerts.adoc index d6674aa83f75..0a20e4f2fab8 100644 --- a/modules/monitoring-silencing-alerts.adoc +++ b/modules/monitoring-silencing-alerts.adoc @@ -3,13 +3,28 @@ // * observability/monitoring/managing-alerts.adoc :_mod-docs-content-type: PROCEDURE -[id="silencing-alerts_{context}"] -= Silencing alerts + +// The ultimate solution DOES NOT NEED separate IDs and titles, it is just needed for now so that the tests will not break + +// tag::ADM[] +[id="silencing-alerts-adm_{context}"] += Silencing alerts from the Administrator perspective +// end::ADM[] + +// tag::DEV[] +[id="silencing-alerts-dev_{context}"] += Silencing alerts from the Developer perspective +// end::DEV[] You can silence a specific alert or silence alerts that match a specification that you define. .Prerequisites +// tag::ADM[] +* You have access to the cluster as a user with the `cluster-admin` role. +// end::ADM[] + +// tag::DEV[] ifndef::openshift-dedicated,openshift-rosa[] * If you are a cluster administrator, you have access to the cluster as a user with the `cluster-admin` role. endif::openshift-dedicated,openshift-rosa[] @@ -20,12 +35,14 @@ endif::openshift-dedicated,openshift-rosa[] ** The `cluster-monitoring-view` cluster role, which allows you to access Alertmanager. ** The `monitoring-alertmanager-edit` role, which permits you to create and silence alerts in the *Administrator* perspective in the web console. ** The `monitoring-rules-edit` cluster role, which permits you to create and silence alerts in the *Developer* perspective in the web console. +// end::DEV[] .Procedure -To silence a specific alert in the *Administrator* perspective: +// tag::ADM[] +To silence a specific alert: -. Go to *Observe* -> *Alerting* -> *Alerts* in the {product-title} web console. +. From the *Administrator* perspective of the {product-title} web console, go to *Observe* -> *Alerting* -> *Alerts*. . For the alert that you want to silence, click {kebab} and select *Silence alert* to open the *Silence alert* page with a default configuration for the chosen alert. @@ -38,43 +55,49 @@ You must add a comment before saving a silence. . To save the silence, click *Silence*. -To silence a specific alert in the *Developer* perspective: - -. Go to *Observe* -> ** -> *Alerts* in the {product-title} web console. - -. If necessary, expand the details for the alert by selecting a greater than symbol (*>*) next to the alert name. +To silence a set of alerts: -. Click the alert message in the expanded view to open the *Alert details* page for the alert. +. From the *Administrator* perspective of the {product-title} web console, go to *Observe* -> *Alerting* -> *Silences*. -. Click *Silence alert* to open the *Silence alert* page with a default configuration for the alert. +. Click *Create silence*. -. Optional: Change the default configuration details for the silence. +. On the *Create silence* page, set the schedule, duration, and label details for an alert. + [NOTE] ==== You must add a comment before saving a silence. 
==== -. To save the silence, click *Silence*. +. To create silences for alerts that match the labels that you entered, click *Silence*. +// end::ADM[] -To silence a set of alerts by creating a silence configuration in the *Administrator* perspective: +// tag::DEV[] +To silence a specific alert: -. Go to *Observe* -> *Alerting* -> *Silences* in the {product-title} web console. +. From the *Developer* perspective of the {product-title} web console, go to *Observe* and go to the *Alerts* tab. -. Click *Create silence*. +. Select the project that you want to silence an alert for from the *Project:* list. -. On the *Create silence* page, set the schedule, duration, and label details for an alert. +. If necessary, expand the details for the alert by clicking a greater than symbol (*>*) next to the alert name. + +. Click the alert message in the expanded view to open the *Alert details* page for the alert. + +. Click *Silence alert* to open the *Silence alert* page with a default configuration for the alert. + +. Optional: Change the default configuration details for the silence. + [NOTE] ==== You must add a comment before saving a silence. ==== -. To create silences for alerts that match the labels that you entered, click *Silence*. +. To save the silence, click *Silence*. + +To silence a set of alerts: -To silence a set of alerts by creating a silence configuration in the *Developer* perspective: +. From the *Developer* perspective of the {product-title} web console, go to *Observe* and go to the *Silences* tab. -. Go to *Observe* -> ** -> *Silences* in the {product-title} web console. +. Select the project that you want to silence alerts for from the *Project:* list. . Click *Create silence*. @@ -86,3 +109,4 @@ You must add a comment before saving a silence. ==== . To create silences for alerts that match the labels that you entered, click *Silence*. +// end::DEV[] \ No newline at end of file diff --git a/modules/monitoring-specifying-limits-and-requests-for-monitoring-components.adoc b/modules/monitoring-specifying-limits-and-requests-for-monitoring-components.adoc index 7287f7ececd5..89f9a0da4c81 100644 --- a/modules/monitoring-specifying-limits-and-requests-for-monitoring-components.adoc +++ b/modules/monitoring-specifying-limits-and-requests-for-monitoring-components.adoc @@ -3,52 +3,69 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE + [id="specifying-limits-and-resource-requests-for-monitoring-components_{context}"] -= Specifying limits and requests for monitoring components += Specifying limits and requests -To configure CPU and memory resources, specify values for resource limits and requests in the appropriate `ConfigMap` object for the namespace in which the monitoring component is located: +// Set attributes to distinguish between cluster monitoring example (core platform monitoring - CPM) and user workload monitoring (UWM) examples. 
+// tag::CPM[] +:configmap-name: cluster-monitoring-config +:namespace-name: openshift-monitoring +:alertmanager: alertmanagerMain +:prometheus: prometheusK8s +:thanos: thanosQuerier +// end::CPM[] +// tag::UWM[] +:configmap-name: user-workload-monitoring-config +:namespace-name: openshift-user-workload-monitoring +:alertmanager: alertmanager +:prometheus: prometheus +:thanos: thanosRuler +// end::UWM[] -* The `cluster-monitoring-config` config map in the `openshift-monitoring` namespace for core platform monitoring -* The `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` namespace for components that monitor user-defined projects +To configure CPU and memory resources, specify values for resource limits and requests in the `{configmap-name}` `ConfigMap` object in the `{namespace-name}` namespace. .Prerequisites -* *If you are configuring core platform monitoring components*: -** You have access to the cluster as a user with the `cluster-admin` cluster role. -** You have created a `ConfigMap` object named `cluster-monitoring-config`. -* *If you are configuring components that monitor user-defined projects*: -** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +// tag::CPM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role. +* You have created the `ConfigMap` object named `cluster-monitoring-config`. +// end::CPM[] + +// tag::UWM[] +* You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project. +// end::UWM[] * You have installed the OpenShift CLI (`oc`). .Procedure -. To configure core platform monitoring components, edit the `cluster-monitoring-config` config map object in the `openshift-monitoring` namespace: +. Edit the `{configmap-name}` config map in the `{namespace-name}` project: + -[source,terminal] +[source,terminal,subs="attributes+"] ---- -$ oc -n openshift-monitoring edit configmap cluster-monitoring-config +$ oc -n {namespace-name} edit configmap {configmap-name} ---- -. Add values to define resource limits and requests for each core platform monitoring component you want to configure. +. Add values to define resource limits and requests for each component you want to configure. + [IMPORTANT] ==== -Make sure that the value set for a limit is always higher than the value set for a request. +Ensure that the value set for a limit is always higher than the value set for a request. Otherwise, an error will occur, and the container will not run. 
==== + -.Example +.Example of setting resource limits and requests + -[source,yaml] +[source,yaml,subs="attributes+"] ---- apiVersion: v1 kind: ConfigMap metadata: - name: cluster-monitoring-config - namespace: openshift-monitoring + name: {configmap-name} + namespace: {namespace-name} data: config.yaml: | - alertmanagerMain: + {alertmanager}: resources: limits: cpu: 500m @@ -56,7 +73,7 @@ data: requests: cpu: 200m memory: 500Mi - prometheusK8s: + {prometheus}: resources: limits: cpu: 500m @@ -64,6 +81,15 @@ data: requests: cpu: 200m memory: 500Mi + {thanos}: + resources: + limits: + cpu: 500m + memory: 1Gi + requests: + cpu: 200m + memory: 500Mi +# tag::CPM[] prometheusOperator: resources: limits: @@ -104,14 +130,6 @@ data: requests: cpu: 200m memory: 500Mi - thanosQuerier: - resources: - limits: - cpu: 500m - memory: 1Gi - requests: - cpu: 200m - memory: 500Mi nodeExporter: resources: limits: @@ -136,10 +154,14 @@ data: requests: cpu: 20m memory: 50Mi +# end::CPM[] ---- . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. -[role="_additional-resources"] -.Additional resources -* link:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits[Kubernetes requests and limits documentation] +// Unset the source code block attributes just to be safe. +:!configmap-name: +:!namespace-name: +:!alertmanager: +:!prometheus: +:!thanos: \ No newline at end of file diff --git a/modules/monitoring-supported-remote-write-authentication-settings.adoc b/modules/monitoring-supported-remote-write-authentication-settings.adoc index 864c4e70a2fa..55aff7883ba8 100644 --- a/modules/monitoring-supported-remote-write-authentication-settings.adoc +++ b/modules/monitoring-supported-remote-write-authentication-settings.adoc @@ -3,7 +3,7 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: REFERENCE -[id="supported_remote_write_authentication_settings_{context}"] +[id="supported-remote-write-authentication-settings_{context}"] = Supported remote write authentication settings You can use different methods to authenticate with a remote write endpoint. Currently supported authentication methods are AWS Signature Version 4, basic authentication, authorization, OAuth 2.0, and TLS client. The following table provides details about supported authentication methods for use with remote write. diff --git a/modules/monitoring-understanding-alert-routing-for-user-defined-projects.adoc b/modules/monitoring-understanding-alert-routing-for-user-defined-projects.adoc index a53525e3e5d5..00373dfc2318 100644 --- a/modules/monitoring-understanding-alert-routing-for-user-defined-projects.adoc +++ b/modules/monitoring-understanding-alert-routing-for-user-defined-projects.adoc @@ -13,7 +13,7 @@ endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-dedicated,openshift-rosa[] As a `dedicated-admin`, you can enable alert routing for user-defined projects. endif::openshift-dedicated,openshift-rosa[] -With this feature, you can allow users with the **alert-routing-edit** role to configure alert notification routing and receivers for user-defined projects. +With this feature, you can allow users with the `alert-routing-edit` cluster role to configure alert notification routing and receivers for user-defined projects. 
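+
+For example, a user with the `alert-routing-edit` cluster role might define routing with an `AlertmanagerConfig` resource similar to the following minimal sketch, in which the `ns1` namespace, receiver name, and webhook URL are illustrative values:
+
+[source,yaml]
+----
+apiVersion: monitoring.coreos.com/v1beta1
+kind: AlertmanagerConfig
+metadata:
+  name: example-routing
+  namespace: ns1
+spec:
+  route:
+    receiver: example-receiver # route alerts from this project to one receiver
+  receivers:
+  - name: example-receiver
+    webhookConfigs:
+    - url: https://example.org/post # illustrative webhook endpoint
+----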
ifndef::openshift-dedicated,openshift-rosa[] These notifications are routed by the default Alertmanager instance or, if enabled, an optional Alertmanager instance dedicated to user-defined monitoring. endif::openshift-dedicated,openshift-rosa[] @@ -36,7 +36,7 @@ endif::openshift-dedicated,openshift-rosa[] [NOTE] ==== -The following are limitations of alert routing for user-defined projects: +Review the following limitations of alert routing for user-defined projects: * For user-defined alerting rules, user-defined routing is scoped to the namespace in which the resource is defined. For example, a routing configuration in namespace `ns1` only applies to `PrometheusRules` resources in the same namespace. diff --git a/modules/monitoring-understanding-the-monitoring-stack.adoc b/modules/monitoring-understanding-the-monitoring-stack.adoc index 87b1359ec5f0..e6922b364897 100644 --- a/modules/monitoring-understanding-the-monitoring-stack.adoc +++ b/modules/monitoring-understanding-the-monitoring-stack.adoc @@ -11,11 +11,7 @@ [id="understanding-the-monitoring-stack_{context}"] = Understanding the monitoring stack -The {product-title} -ifdef::openshift-rosa[] -(ROSA) -endif::openshift-rosa[] -monitoring stack is based on the link:https://prometheus.io/[Prometheus] open source project and its wider ecosystem. The monitoring stack includes the following: +The monitoring stack includes the following components: * *Default platform monitoring components*. ifndef::openshift-dedicated,openshift-rosa[] diff --git a/modules/monitoring-using-node-selectors-to-move-monitoring-components.adoc b/modules/monitoring-using-node-selectors-to-move-monitoring-components.adoc index 766176960ef4..ecbadb1b46c7 100644 --- a/modules/monitoring-using-node-selectors-to-move-monitoring-components.adoc +++ b/modules/monitoring-using-node-selectors-to-move-monitoring-components.adoc @@ -9,16 +9,15 @@ By using the `nodeSelector` constraint with labeled nodes, you can move any of the monitoring stack components to specific nodes. By doing so, you can control the placement and distribution of the monitoring components across a cluster. -By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and segregate workloads based on specific requirements or policies. +By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies. -[id="how-node-selectors-work-with-other-constraints_{context}"] +[discrete] == How node selectors work with other constraints - If you move monitoring components by using node selector constraints, be aware that other constraints to control pod scheduling might exist for a cluster: * Topology spread constraints might be in place to control pod placement. -* Hard anti-affinity rules are in place for Prometheus, Thanos Querier, Alertmanager, and other monitoring components to ensure that multiple pods for these components are always spread across different nodes and are therefore always highly available. +* Hard anti-affinity rules are in place for Prometheus, Alertmanager, and other monitoring components to ensure that multiple pods for these components are always spread across different nodes and are therefore always highly available. When scheduling pods onto nodes, the pod scheduler tries to satisfy all existing constraints when determining pod placement. 
That is, all constraints compound when the pod scheduler determines which pods will be placed on which nodes. diff --git a/modules/monitoring-using-pod-topology-spread-constraints-for-monitoring.adoc b/modules/monitoring-using-pod-topology-spread-constraints-for-monitoring.adoc index 9eb16490f834..240c4c20b4d7 100644 --- a/modules/monitoring-using-pod-topology-spread-constraints-for-monitoring.adoc +++ b/modules/monitoring-using-pod-topology-spread-constraints-for-monitoring.adoc @@ -4,16 +4,12 @@ :_mod-docs-content-type: CONCEPT [id="using-pod-topology-spread-constraints-for-monitoring_{context}"] -= Using pod topology spread constraints for monitoring += About pod topology spread constraints for monitoring -You can use pod topology spread constraints to control how -ifndef::openshift-dedicated,openshift-rosa[] -the monitoring pods -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] -the pods for user-defined monitoring -endif::openshift-dedicated,openshift-rosa[] -are spread across a network topology when {product-title} pods are deployed in multiple availability zones. +You can use pod topology spread constraints to control how the monitoring pods are spread across a network topology when {product-title} pods are deployed in multiple availability zones. Pod topology spread constraints are suitable for controlling pod scheduling within hierarchical topologies in which nodes are spread across different infrastructure levels, such as regions and zones within those regions. -Additionally, by being able to schedule pods in different zones, you can improve network latency in certain scenarios. \ No newline at end of file +Additionally, by being able to schedule pods in different zones, you can improve network latency in certain scenarios. + +You can configure pod topology spread constraints for all the pods deployed by the {cmo-full} to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones. + diff --git a/networking/metallb/metallb-troubleshoot-support.adoc b/networking/metallb/metallb-troubleshoot-support.adoc index f76bed881194..419ccfa3a45d 100644 --- a/networking/metallb/metallb-troubleshoot-support.adoc +++ b/networking/metallb/metallb-troubleshoot-support.adoc @@ -26,7 +26,7 @@ include::modules/nw-metallb-metrics.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* See xref:../../observability/monitoring/managing-metrics.adoc#about-querying-metrics_managing-metrics[Querying metrics] for information about using the monitoring dashboard. +* See xref:../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#querying-metrics-for-all-projects-with-mon-dashboard_accessing-metrics-as-an-administrator[Querying metrics for all projects with the monitoring dashboard] for information about using the monitoring dashboard. 
// Collecting data include::modules/nw-metallb-collecting-data.adoc[leveloffset=+1] diff --git a/networking/networking_operators/ingress-operator.adoc b/networking/networking_operators/ingress-operator.adoc index cdeebd15d87a..1eab4719469d 100644 --- a/networking/networking_operators/ingress-operator.adoc +++ b/networking/networking_operators/ingress-operator.adoc @@ -62,7 +62,7 @@ ifndef::openshift-rosa,openshift-dedicated[] * xref:../../nodes/cma/nodes-cma-autoscaling-custom-install.adoc#nodes-cma-autoscaling-custom-install_nodes-cma-autoscaling-custom-install[Installing the custom metrics autoscaler] -* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects_enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects] +* xref:../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] * xref:../../nodes/cma/nodes-cma-autoscaling-custom-trigger-auth.adoc#nodes-cma-autoscaling-custom-trigger-auth[Understanding custom metrics autoscaler trigger authentications] diff --git a/networking/networking_operators/sr-iov-operator/configuring-sriov-operator.adoc b/networking/networking_operators/sr-iov-operator/configuring-sriov-operator.adoc index db67049c50f9..41f55cfbd7dd 100644 --- a/networking/networking_operators/sr-iov-operator/configuring-sriov-operator.adoc +++ b/networking/networking_operators/sr-iov-operator/configuring-sriov-operator.adoc @@ -19,9 +19,8 @@ include::modules/sriov-operator-metrics.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources -* xref:../../../observability/monitoring/managing-metrics.adoc#about-querying-metrics_managing-metrics[Querying metrics] -* xref:../../../observability/monitoring/managing-metrics.adoc#querying-metrics-for-all-projects-as-an-administrator_managing-metrics[Querying metrics for all projects as a cluster administrator] -* xref:../../../observability/monitoring/managing-metrics.adoc#querying-metrics-for-user-defined-projects-as-a-developer_managing-metrics[Querying metrics for user-defined projects as a developer] +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#querying-metrics-for-all-projects-with-mon-dashboard_accessing-metrics-as-an-administrator[Querying metrics for all projects with the monitoring dashboard] +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc#querying-metrics-for-user-defined-projects-with-mon-dashboard_accessing-metrics-as-a-developer[Querying metrics for user-defined projects as a developer] [id="configuring-sriov-operator-next-steps"] == Next steps diff --git a/networking/ptp/ptp-cloud-events-consumer-dev-reference-v2.adoc b/networking/ptp/ptp-cloud-events-consumer-dev-reference-v2.adoc index e483b1084ccf..5e99fc562003 100644 --- a/networking/ptp/ptp-cloud-events-consumer-dev-reference-v2.adoc +++ b/networking/ptp/ptp-cloud-events-consumer-dev-reference-v2.adoc @@ -50,6 +50,6 @@ include::modules/cnf-monitoring-fast-events-metrics.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../observability/monitoring/managing-metrics.adoc#managing-metrics[Managing metrics] +* 
xref:../../observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc#accessing-metrics-as-a-developer[Accessing metrics as a developer] include::modules/nw-ptp-operator-metrics-reference.adoc[leveloffset=+1] diff --git a/networking/ptp/ptp-cloud-events-consumer-dev-reference.adoc b/networking/ptp/ptp-cloud-events-consumer-dev-reference.adoc index 0855cf1ecf1d..a429a3f3e2c8 100644 --- a/networking/ptp/ptp-cloud-events-consumer-dev-reference.adoc +++ b/networking/ptp/ptp-cloud-events-consumer-dev-reference.adoc @@ -53,6 +53,6 @@ include::modules/cnf-monitoring-fast-events-metrics.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../observability/monitoring/managing-metrics.adoc#managing-metrics[Managing metrics] +* xref:../../observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc#accessing-metrics-as-a-developer[Accessing metrics as a developer] include::modules/nw-ptp-operator-metrics-reference.adoc[leveloffset=+1] diff --git a/observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-configuring.adoc b/observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-configuring.adoc index 2d66a2dcf677..85ffc53b57c3 100644 --- a/observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-configuring.adoc +++ b/observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-configuring.adoc @@ -67,6 +67,6 @@ include::modules/distr-tracing-tempo-configuring-tempostack-metrics-and-alerts.a [role="_additional-resources"] .Additional resources -* xref:../../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] include::modules/distr-tracing-tempo-configuring-tempooperator-metrics-and-alerts.adoc[leveloffset=+2] diff --git a/observability/logging/logging_alerts/custom-logging-alerts.adoc b/observability/logging/logging_alerts/custom-logging-alerts.adoc index 027049a27b56..6c7cb78279a3 100644 --- a/observability/logging/logging_alerts/custom-logging-alerts.adoc +++ b/observability/logging/logging_alerts/custom-logging-alerts.adoc @@ -30,8 +30,12 @@ include::modules/logging-enabling-loki-alerts.adoc[leveloffset=+1] [role="_additional-resources"] [id="additional-resources_custom-logging-alerts"] == Additional resources +ifdef::openshift-dedicated,openshift-rosa[] * xref:../../../observability/monitoring/monitoring-overview.adoc#about-openshift-monitoring[About {product-title} monitoring] - +endif::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa[] +* xref:../../../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] +endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-enterprise[] * xref:../../../post_installation_configuration/configuring-alert-notifications.adoc#configuring-alert-notifications[Configuring alert notifications] endif::[] diff --git a/observability/logging/logging_alerts/default-logging-alerts.adoc b/observability/logging/logging_alerts/default-logging-alerts.adoc index 5c6200089cfd..5f8d0836c47f 100644 --- a/observability/logging/logging_alerts/default-logging-alerts.adoc +++ 
b/observability/logging/logging_alerts/default-logging-alerts.adoc @@ -19,4 +19,9 @@ include::modules/cluster-logging-elasticsearch-rules.adoc[leveloffset=+1] [role="_additional-resources"] [id="additional-resources_default-logging-alerts"] == Additional resources +ifdef::openshift-dedicated,openshift-rosa[] * xref:../../../observability/monitoring/managing-alerts.adoc#modifying-core-platform-alerting-rules_managing-alerts[Modifying core platform alerting rules] +endif::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa[] +* xref:../../../observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.adoc#modifying-core-platform-alerting-rules_managing-alerts-as-an-administrator[Modifying core platform alerting rules] +endif::openshift-dedicated,openshift-rosa[] diff --git a/observability/logging/troubleshooting/troubleshooting-logging-alerts.adoc b/observability/logging/troubleshooting/troubleshooting-logging-alerts.adoc index 66d130f11f96..d2190f18b71e 100644 --- a/observability/logging/troubleshooting/troubleshooting-logging-alerts.adoc +++ b/observability/logging/troubleshooting/troubleshooting-logging-alerts.adoc @@ -12,7 +12,12 @@ include::modules/es-cluster-health-is-red.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources +ifdef::openshift-dedicated,openshift-rosa[] * xref:../../../observability/monitoring/reviewing-monitoring-dashboards.adoc#reviewing-monitoring-dashboards[Reviewing monitoring dashboards] +endif::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa[] +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#reviewing-monitoring-dashboards-admin_accessing-metrics-as-an-administrator[Reviewing monitoring dashboards as a cluster administrator] +endif::openshift-dedicated,openshift-rosa[] * link:https://www.elastic.co/guide/en/elasticsearch/reference/7.13/fix-common-cluster-issues.html#fix-red-yellow-cluster-status[Fix a red or yellow cluster status] [id="elasticsearch-cluster-health-is-yellow"] diff --git a/observability/monitoring/about-ocp-monitoring/_attributes b/observability/monitoring/about-ocp-monitoring/_attributes new file mode 120000 index 000000000000..20cc1dcb77bf --- /dev/null +++ b/observability/monitoring/about-ocp-monitoring/_attributes @@ -0,0 +1 @@ +../../_attributes/ \ No newline at end of file diff --git a/observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc b/observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc new file mode 100644 index 000000000000..c4db245903c7 --- /dev/null +++ b/observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc @@ -0,0 +1,26 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="about-ocp-monitoring"] += About {product-title} monitoring +:context: about-ocp-monitoring + +toc::[] + +ifndef::openshift-dedicated,openshift-rosa[] +{product-title} includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. You also have the option to xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[enable monitoring for user-defined projects]. 
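+
+Enabling monitoring for user-defined projects amounts to setting a single option in the `cluster-monitoring-config` config map, as in the following minimal sketch:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cluster-monitoring-config
+  namespace: openshift-monitoring
+data:
+  config.yaml: |
+    enableUserWorkload: true # deploys the user workload monitoring stack
+----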
+ +A cluster administrator can xref:../../../observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.adoc#preparing-to-configure-the-monitoring-stack[configure the monitoring stack] with the supported configurations. {product-title} delivers monitoring best practices out of the box. + +A set of alerts are included by default that immediately notify administrators about issues with a cluster. Default dashboards in the {product-title} web console include visual representations of cluster metrics to help you to quickly understand the state of your cluster. With the {product-title} web console, you can xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#accessing-metrics-as-an-administrator[access metrics] and xref:../../../observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.adoc#managing-alerts-as-an-administrator[manage alerts]. + +After installing {product-title}, cluster administrators can optionally enable monitoring for user-defined projects. By using this feature, cluster administrators, developers, and other users can specify how services and pods are monitored in their own projects. +As a cluster administrator, you can find answers to common problems such as user metrics unavailability and high consumption of disk space by Prometheus in xref:../../../observability/monitoring/troubleshooting-monitoring-issues.adoc#troubleshooting-monitoring-issues[Troubleshooting monitoring issues]. +endif::openshift-dedicated,openshift-rosa[] + +ifdef::openshift-dedicated,openshift-rosa[] +In {product-title}, you can monitor your own projects in isolation from Red{nbsp}Hat Site Reliability Engineering (SRE) platform metrics. You can monitor your own projects without the need for an additional monitoring solution. +endif::openshift-dedicated,openshift-rosa[] + + + + diff --git a/observability/monitoring/about-ocp-monitoring/images b/observability/monitoring/about-ocp-monitoring/images new file mode 120000 index 000000000000..847b03ed0541 --- /dev/null +++ b/observability/monitoring/about-ocp-monitoring/images @@ -0,0 +1 @@ +../../images/ \ No newline at end of file diff --git a/observability/monitoring/about-ocp-monitoring/key-concepts.adoc b/observability/monitoring/about-ocp-monitoring/key-concepts.adoc new file mode 100644 index 000000000000..51388771d2d7 --- /dev/null +++ b/observability/monitoring/about-ocp-monitoring/key-concepts.adoc @@ -0,0 +1,131 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="key-concepts"] += Understanding the monitoring stack - key concepts +:context: key-concepts + +toc::[] + +Get familiar with the {product-title} monitoring concepts and terms. Learn about how you can improve performance and scale of your cluster, store and record data, manage metrics and alerts, and more. + +[id="about-performance-and-scalability_{context}"] +== About performance and scalability + +You can optimize the performance and scale of your clusters. +You can configure the default monitoring stack by performing any of the following actions: + +* Control the placement and distribution of monitoring components: +** Use node selectors to move components to specific nodes. +** Assign tolerations to enable moving components to tainted nodes. +* Use pod topology spread constraints. +* Set the body size limit for metrics scraping. +* Manage CPU and memory resources. +* Use metrics collection profiles. 
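+
+The following sketch shows a few of these options combined in the `cluster-monitoring-config` config map; the `monitoring: "true"` node label and the resource values are illustrative assumptions:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cluster-monitoring-config
+  namespace: openshift-monitoring
+data:
+  config.yaml: |
+    prometheusK8s:
+      nodeSelector: # move the component to nodes that carry this label
+        monitoring: "true"
+      resources: # manage CPU and memory requests and limits
+        requests:
+          cpu: 200m
+          memory: 500Mi
+        limits:
+          cpu: "1"
+          memory: 2Gi
+----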
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../../../observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.adoc#configuring-performance-and-scalability[Configuring performance and scalability for core platform monitoring]
+* xref:../../../observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.adoc#configuring-performance-and-scalability-uwm[Configuring performance and scalability for user workload monitoring]
+
+include::modules/monitoring-using-node-selectors-to-move-monitoring-components.adoc[leveloffset=+2]
+
+include::modules/monitoring-using-pod-topology-spread-constraints-for-monitoring.adoc[leveloffset=+2]
+
+include::modules/monitoring-about-specifying-limits-and-requests-for-monitoring-components.adoc[leveloffset=+2]
+
+include::modules/monitoring-configuring-metrics-collection-profiles.adoc[leveloffset=+2]
+
+[id="about-storing-and-recording-data_{context}"]
+== About storing and recording data
+
+You can store and record data to help you protect the data and use it for troubleshooting.
+You can configure the default monitoring stack by performing any of the following actions:
+
+* Configure persistent storage:
+** Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated.
+** Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted.
+* Modify the retention time and size for Prometheus and Thanos Ruler metrics data.
+* Configure logging to help you troubleshoot issues with your cluster:
+** Configure audit logs for Metrics Server.
+** Set log levels for monitoring.
+** Enable query logging for Prometheus and Thanos Querier.
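+
+For example, persistent storage and retention for Prometheus can be set together in the `cluster-monitoring-config` config map. The following sketch assumes a hypothetical `my-storage-class` storage class; the retention and size values are illustrative:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cluster-monitoring-config
+  namespace: openshift-monitoring
+data:
+  config.yaml: |
+    prometheusK8s:
+      retention: 15d # keep metrics data for 15 days at most
+      retentionSize: 10GB # or until the data reaches 10 GB
+      volumeClaimTemplate:
+        spec:
+          storageClassName: my-storage-class
+          resources:
+            requests:
+              storage: 40Gi
+----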
+ +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.adoc#storing-and-recording-data[Storing and recording data for core platform monitoring] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.adoc#storing-and-recording-data-uwm[Storing and recording data for user workload monitoring] + +include::modules/monitoring-retention-time-and-size-for-prometheus-metrics-data.adoc[leveloffset=+2] + +// Understanding metrics +include::modules/monitoring-understanding-metrics.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.adoc#configuring-metrics[Configuring metrics for core platform monitoring] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.adoc#configuring-metrics-uwm[Configuring metrics for user workload monitoring] +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#accessing-metrics-as-an-administrator[Accessing metrics as an administrator] +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc#accessing-metrics-as-a-developer[Accessing metrics as a developer] + +include::modules/monitoring-controlling-the-impact-of-unbound-attributes-in-user-defined-projects.adoc[leveloffset=+2] +include::modules/monitoring-adding-cluster-id-labels-to-metrics.adoc[leveloffset=+2] + +//About monitoring dashboards +[id="about-monitoring-dashboards_{context}"] +== About monitoring dashboards + +{product-title} provides a set of monitoring dashboards that help you understand the state of cluster components and user-defined workloads. 
+ +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#reviewing-monitoring-dashboards-admin_accessing-metrics-as-an-administrator[Reviewing monitoring dashboards as a cluster administrator] +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc#reviewing-monitoring-dashboards-developer_accessing-metrics-as-a-developer[Reviewing monitoring dashboards as a developer] + +include::modules/monitoring-about-monitoring-dashboards.adoc[leveloffset=+2] + +//Managing alerts +include::modules/monitoring-about-managing-alerts.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + + +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.adoc#configuring-alerts-and-notifications[Configuring alerts and notifications for core platform monitoring] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.adoc#configuring-alerts-and-notifications-uwm[Configuring alerts and notifications for user workload monitoring] +* xref:../../../observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.adoc#managing-alerts-as-an-administrator[Managing alerts as an Administrator] +* xref:../../../observability/monitoring/managing-alerts/managing-alerts-as-a-developer.adoc#managing-alerts-as-a-developer[Managing alerts as a Developer] + +include::modules/monitoring-managing-silences.adoc[leveloffset=+2] + +include::modules/monitoring-managing-core-platform-alerting-rules.adoc[leveloffset=+2] +include::modules/monitoring-tips-for-optimizing-alerting-rules-for-core-platform-monitoring.adoc[leveloffset=+2] + +include::modules/monitoring-about-creating-alerting-rules-for-user-defined-projects.adoc[leveloffset=+2] +include::modules/monitoring-managing-alerting-rules-for-user-defined-projects.adoc[leveloffset=+2] +include::modules/monitoring-optimizing-alerting-for-user-defined-projects.adoc[leveloffset=+2] + +include::modules/monitoring-searching-alerts-silences-and-alerting-rules.adoc[leveloffset=+2] + + +// Overview of setting up alert routing for user-defined projects +include::modules/monitoring-understanding-alert-routing-for-user-defined-projects.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-alert-routing-for-user-defined-projects_preparing-to-configure-the-monitoring-stack-uwm[Enabling alert routing for user-defined projects] + +// Sending notifications to external systems +include::modules/monitoring-sending-notifications-to-external-systems.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.adoc#configuring-alert-notifications_configuring-alerts-and-notifications[Configuring alert notifications for core platform monitoring] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.adoc#configuring-alert-notifications_configuring-alerts-and-notifications-uwm[Configuring alert notifications for user workload monitoring] + + diff --git a/observability/monitoring/about-ocp-monitoring/modules 
b/observability/monitoring/about-ocp-monitoring/modules new file mode 120000 index 000000000000..36719b9de743 --- /dev/null +++ b/observability/monitoring/about-ocp-monitoring/modules @@ -0,0 +1 @@ +../../modules/ \ No newline at end of file diff --git a/observability/monitoring/about-ocp-monitoring/monitoring-stack-architecture.adoc b/observability/monitoring/about-ocp-monitoring/monitoring-stack-architecture.adoc new file mode 100644 index 000000000000..2f84137c3860 --- /dev/null +++ b/observability/monitoring/about-ocp-monitoring/monitoring-stack-architecture.adoc @@ -0,0 +1,54 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="monitoring-stack-architecture"] += Monitoring stack architecture +:context: monitoring-stack-architecture + +toc::[] + +The {product-title} +ifdef::openshift-rosa[] +(ROSA) +endif::openshift-rosa[] +monitoring stack is based on the link:https://prometheus.io/[Prometheus] open source project and its wider ecosystem. The monitoring stack includes default monitoring components and components for monitoring user-defined projects. + +// Understanding the monitoring stack +include::modules/monitoring-understanding-the-monitoring-stack.adoc[leveloffset=+1] +ifndef::openshift-dedicated,openshift-rosa[] +//Default monitoring components +include::modules/monitoring-default-monitoring-components.adoc[leveloffset=+1] +include::modules/monitoring-default-monitoring-targets.adoc[leveloffset=+2] +[role="_additional-resources"] +.Additional resources +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#getting-detailed-information-about-a-target_accessing-metrics-as-an-administrator[Getting detailed information about a metrics target] +endif::openshift-dedicated,openshift-rosa[] + +//Components for monitoring user-defined projects +include::modules/monitoring-components-for-monitoring-user-defined-projects.adoc[leveloffset=+1] +include::modules/monitoring-targets-for-user-defined-projects.adoc[leveloffset=+2] + +//The monitoring stack in high-availability clusters +include::modules/monitoring-monitoring-stack-in-ha-clusters.adoc[leveloffset=+1] +[role="_additional-resources"] +.Additional resources +* xref:../../../operators/operator_sdk/osdk-ha-sno.adoc#osdk-ha-sno[High-availability or single-node cluster detection and support] +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.adoc#configuring-persistent-storage_storing-and-recording-data[Configuring persistent storage] +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.adoc#configuring-performance-and-scalability[Configuring performance and scalability] + +//Glossary of common terms for OCP monitoring +include::modules/monitoring-common-terms.adoc[leveloffset=+1] + +ifndef::openshift-dedicated,openshift-rosa[] +[role="_additional-resources"] +[id="additional-resources_{context}"] +== Additional resources +* xref:../../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#granting-users-permission-to-monitor-user-defined-projects_preparing-to-configure-the-monitoring-stack-uwm[Granting users permissions for monitoring for user-defined projects] +* 
xref:../../../security/tls-security-profiles.adoc#tls-security-profiles[Configuring TLS security profiles] +endif::openshift-dedicated,openshift-rosa[] + + + + + + diff --git a/observability/monitoring/about-ocp-monitoring/snippets b/observability/monitoring/about-ocp-monitoring/snippets new file mode 120000 index 000000000000..5a3f5add140e --- /dev/null +++ b/observability/monitoring/about-ocp-monitoring/snippets @@ -0,0 +1 @@ +../../snippets/ \ No newline at end of file diff --git a/observability/monitoring/accessing-metrics/_attributes b/observability/monitoring/accessing-metrics/_attributes new file mode 120000 index 000000000000..20cc1dcb77bf --- /dev/null +++ b/observability/monitoring/accessing-metrics/_attributes @@ -0,0 +1 @@ +../../_attributes/ \ No newline at end of file diff --git a/observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc b/observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc new file mode 100644 index 000000000000..348aaf79e11d --- /dev/null +++ b/observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc @@ -0,0 +1,37 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="accessing-metrics-as-a-developer"] += Accessing metrics as a developer +:context: accessing-metrics-as-a-developer + +toc::[] + +You can access metrics to monitor the performance of your cluster workloads. + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#understanding-metrics_key-concepts[Understanding metrics] + +//Viewing a list of available metrics +include::modules/monitoring-viewing-a-list-of-available-metrics.adoc[leveloffset=+1] + +//Querying metrics for user-defined projects with the OCP web console +include::modules/monitoring-querying-metrics-for-user-defined-projects-with-mon-dashboard.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* link:https://prometheus.io/docs/prometheus/latest/querying/basics/[Querying Prometheus] (Prometheus documentation) + +//Reviewing monitoring dashboards as a developer +include::modules/monitoring-reviewing-monitoring-dashboards-developer.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#about-monitoring-dashboards_key-concepts[About monitoring dashboards] +* xref:../../../applications/odc-monitoring-project-and-application-metrics-using-developer-perspective.adoc#monitoring-project-and-application-metrics-using-developer-perspective[Monitoring project and application metrics using the Developer perspective] + + + diff --git a/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc b/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc new file mode 100644 index 000000000000..d552bf30d171 --- /dev/null +++ b/observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc @@ -0,0 +1,37 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="accessing-metrics-as-an-administrator"] += Accessing metrics as an administrator +:context: accessing-metrics-as-an-administrator + +toc::[] + +You can access metrics to monitor the performance of cluster components and your workloads. 
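+
+Outside the web console, you can also spot-check metric access by querying the Thanos Querier API from the command line. The following sketch assumes that you are logged in as a user with permission to query metrics:
+
+[source,terminal]
+----
+$ TOKEN=$(oc whoami -t)
+$ HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.status.ingress[].host}')
+$ curl -G -k -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query" --data-urlencode "query=up"
+----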
+ +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#understanding-metrics_key-concepts[Understanding metrics] + +//Viewing a list of available metrics +include::modules/monitoring-viewing-a-list-of-available-metrics.adoc[leveloffset=+1] + +//Querying metrics for all projects with the OCP web console +include::modules/monitoring-querying-metrics-for-all-projects-with-mon-dashboard.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* link:https://prometheus.io/docs/prometheus/latest/querying/basics/[Querying Prometheus] (Prometheus documentation) + +//Getting detailed information about a metrics target +include::modules/monitoring-getting-detailed-information-about-a-target.adoc[leveloffset=+1] + +//Reviewing monitoring dashboards as a cluster administrator +include::modules/monitoring-reviewing-monitoring-dashboards-admin.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#about-monitoring-dashboards_key-concepts[About monitoring dashboards] + diff --git a/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.adoc b/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.adoc new file mode 100644 index 000000000000..ac3bd4e24ce6 --- /dev/null +++ b/observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.adoc @@ -0,0 +1,52 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="accessing-monitoring-apis-by-using-the-cli"] += Accessing monitoring APIs by using the CLI +:context: accessing-monitoring-apis-by-using-the-cli + +toc::[] + +In {product-title}, you can access web service APIs for some monitoring components from the command-line interface (CLI). + +[IMPORTANT] +==== +In certain situations, accessing API endpoints can degrade the performance and scalability of your cluster, especially if you use endpoints to retrieve, send, or query large amounts of metrics data. + +To avoid these issues, consider the following recommendations: + +* Avoid querying endpoints frequently. Limit queries to a maximum of one every 30 seconds. +* Do not retrieve all metrics data through the `/federate` endpoint for Prometheus. Query the endpoint only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation. 
+==== + +// About accessing monitoring web service APIs +include::modules/monitoring-about-accessing-monitoring-web-service-apis.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#reviewing-monitoring-dashboards-admin_accessing-metrics-as-an-administrator[Reviewing monitoring dashboards as a cluster administrator] +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc#reviewing-monitoring-dashboards-developer_accessing-metrics-as-a-developer[Reviewing monitoring dashboards as a developer] + +// Accessing a monitoring web service API +include::modules/monitoring-accessing-third-party-monitoring-web-service-apis.adoc[leveloffset=+1] + +// Querying metrics by using the federation endpoint for Prometheus +include::modules/monitoring-querying-metrics-by-using-the-federation-endpoint-for-prometheus.adoc[leveloffset=+1] + +// Accessing metrics from outside the cluster for custom applications +include::modules/accessing-metrics-outside-cluster.adoc[leveloffset=+1] + +// Resources reference for the Cluster Monitoring Operator +include::modules/monitoring-resources-reference-for-the-cluster-monitoring-operator.adoc[leveloffset=+1] + +[role="_additional-resources"] +[id="additional-resources_{context}"] +== Additional resources + +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.adoc#configuring-remote-write-storage_configuring-metrics[Configuring remote write storage for core platform monitoring] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.adoc#configuring-remote-write-storage_configuring-metrics-uwm[Configuring remote write storage for monitoring of user-defined projects] +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#accessing-metrics-as-an-administrator[Accessing metrics as an administrator] +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc#accessing-metrics-as-a-developer[Accessing metrics as a developer] +* xref:../../../observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.adoc#managing-alerts-as-an-administrator[Managing alerts as an Administrator] +* xref:../../../observability/monitoring/managing-alerts/managing-alerts-as-a-developer.adoc#managing-alerts-as-a-developer[Managing alerts as a Developer] diff --git a/observability/monitoring/accessing-metrics/images b/observability/monitoring/accessing-metrics/images new file mode 120000 index 000000000000..847b03ed0541 --- /dev/null +++ b/observability/monitoring/accessing-metrics/images @@ -0,0 +1 @@ +../../images/ \ No newline at end of file diff --git a/observability/monitoring/accessing-metrics/modules b/observability/monitoring/accessing-metrics/modules new file mode 120000 index 000000000000..36719b9de743 --- /dev/null +++ b/observability/monitoring/accessing-metrics/modules @@ -0,0 +1 @@ +../../modules/ \ No newline at end of file diff --git a/observability/monitoring/accessing-metrics/snippets b/observability/monitoring/accessing-metrics/snippets new file mode 
120000
index 000000000000..5a3f5add140e
--- /dev/null
+++ b/observability/monitoring/accessing-metrics/snippets
@@ -0,0 +1 @@
+../../snippets/
\ No newline at end of file
diff --git a/observability/monitoring/accessing-third-party-monitoring-apis.adoc b/observability/monitoring/accessing-third-party-monitoring-apis.adoc
index afa25fea4e35..4d45fe011b1f 100644
--- a/observability/monitoring/accessing-third-party-monitoring-apis.adoc
+++ b/observability/monitoring/accessing-third-party-monitoring-apis.adoc
@@ -2,12 +2,11 @@
[id="accessing-third-party-monitoring-apis"]
= Accessing monitoring APIs by using the CLI
include::_attributes/common-attributes.adoc[]
-:context: accessing-monitoring-apis-by-using-the-cli
+:context: accessing-third-party-monitoring-apis
toc::[]
-[role="_abstract"]
-In {product-title} {product-version}, you can access web service APIs for some monitoring components from the command line interface (CLI).
+In {product-title}, you can access web service APIs for some monitoring components from the command-line interface (CLI).
[IMPORTANT]
====
@@ -22,6 +21,7 @@ To avoid these issues, follow these recommendations:
// Accessing service APIs for third-party monitoring components
include::modules/monitoring-about-accessing-monitoring-web-service-apis.adoc[leveloffset=+1]
+[role="_additional-resources"]
.Additional resources
* xref:../../observability/monitoring/reviewing-monitoring-dashboards.adoc#reviewing-monitoring-dashboards[Reviewing monitoring dashboards]
@@ -36,13 +36,12 @@ include::modules/accessing-metrics-outside-cluster.adoc[leveloffset=+1]
// Resources reference for accessing API endpoints
include::modules/monitoring-resources-reference-for-the-cluster-monitoring-operator.adoc[leveloffset=+1]
-
[role="_additional-resources"]
-[id="additional-resources_accessing-monitoring-apis-by-using-the-cli"]
+[id="additional-resources_{context}"]
== Additional resources
ifndef::openshift-dedicated,openshift-rosa[]
-* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects_enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects]
+* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects-uwm_enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects]
endif::openshift-dedicated,openshift-rosa[]
* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#configuring-remote-write-storage_configuring-the-monitoring-stack[Configuring remote write storage]
* xref:../../observability/monitoring/managing-metrics.adoc#managing-metrics[Managing metrics]
diff --git a/observability/monitoring/common-monitoring-configuration-scenarios.adoc b/observability/monitoring/common-monitoring-configuration-scenarios.adoc
index 3b6714f6cce6..03cbdb91b4b5 100644
--- a/observability/monitoring/common-monitoring-configuration-scenarios.adoc
+++ b/observability/monitoring/common-monitoring-configuration-scenarios.adoc
@@ -28,7 +28,7 @@ Any other configuration options listed here are optional.
* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#creating-cluster-monitoring-configmap_configuring-the-monitoring-stack[Create the `cluster-monitoring-config` `ConfigMap` object] if it does not exist.
* xref:../../observability/monitoring/managing-alerts.adoc#sending-notifications-to-external-systems_managing-alerts[Configure alert receivers] so that Alertmanager can send alerts to an external notification system such as email, Slack, or PagerDuty. -* xref:../../observability/monitoring/managing-alerts.adoc#configuring-notifications-for-default-platform-alerts_managing-alerts[Configure notifications for default platform alerts]. +* xref:../../observability/monitoring/managing-alerts.adoc#configuring-alert-routing-default-platform-alerts_managing-alerts[Configure notifications for default platform alerts]. * For shorter term data retention, xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#configuring-persistent-storage_configuring-the-monitoring-stack[configure persistent storage] for Prometheus and Alertmanager to store metrics and alert data. Specify the metrics data retention parameters for Prometheus and Thanos Ruler. + @@ -57,7 +57,7 @@ With the monitoring stack configured to suit your needs, Prometheus collects met You can go to the *Observe* pages in the {product-title} web console to view and query collected metrics, manage alerts, identify performance bottlenecks, and scale resources as needed: * xref:../../observability/monitoring/reviewing-monitoring-dashboards.adoc#reviewing-monitoring-dashboards[View dashboards] to visualize collected metrics, troubleshoot alerts, and monitor other information about your cluster. -* xref:../../observability/monitoring/managing-metrics.adoc#about-querying-metrics_managing-metrics[Query collected metrics] by creating PromQL queries or using predefined queries. +* xref:../../observability/monitoring/managing-metrics.adoc#querying-metrics-for-all-projects-with-mon-dashboard_managing-metrics[Query collected metrics] by creating PromQL queries or using predefined queries. [id="configuring-monitoring-for-user-defined-projects-getting-started_{context}"] == Configuring monitoring for user-defined projects: Getting started @@ -67,20 +67,20 @@ Non-administrator users such as developers can then monitor their own projects o Cluster administrators typically complete the following activities to configure user-defined projects so that users can view collected metrics, query these metrics, and receive alerts for their own projects: -* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects_enabling-monitoring-for-user-defined-projects[Enable user-defined projects]. +* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects-uwm_enabling-monitoring-for-user-defined-projects[Enable user-defined projects]. * xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#granting-users-permission-to-monitor-user-defined-projects_enabling-monitoring-for-user-defined-projects[Assign the `monitoring-rules-view`, `monitoring-rules-edit`, or `monitoring-edit` cluster roles] to grant non-administrator users permissions to monitor user-defined projects. -* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#granting-users-permission-to-configure-monitoring-for-user-defined-projects_enabling-monitoring-for-user-defined-projects[Assign the `user-workload-monitoring-config-edit` role] to grant non-administrator users permission to configure user-defined projects. 
+* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#granting-users-permission-to-configure-monitoring-for-user-defined-projects_enabling-monitoring-for-user-defined-projects[Grant non-administrator users permission to configure user-defined projects] by assigning the `user-workload-monitoring-config-edit` role. * xref:../../observability/monitoring/enabling-alert-routing-for-user-defined-projects.adoc#enabling-alert-routing-for-user-defined-projects[Enable alert routing for user-defined projects] so that developers and other users can configure custom alerts and alert routing for their projects. * If needed, configure alert routing for user-defined projects to xref:../../observability/monitoring/enabling-alert-routing-for-user-defined-projects.adoc#enabling-a-separate-alertmanager-instance-for-user-defined-alert-routing_enabling-alert-routing-for-user-defined-projects[use an optional Alertmanager instance dedicated for use only by user-defined projects]. * xref:../../observability/monitoring/managing-alerts.adoc#configuring-different-alert-receivers-for-default-platform-alerts-and-user-defined-alerts_managing-alerts[Configure alert receivers] for user-defined projects. -* xref:../../observability/monitoring/managing-alerts.adoc#configuring-notifications-for-user-defined-alerts_managing-alerts[Configure notifications for user-defined alerts]. +* xref:../../observability/monitoring/managing-alerts.adoc#configuring-alert-routing-user-defined-alerts-secret_managing-alerts[Configure notifications for user-defined alerts]. After monitoring for user-defined projects is enabled and configured, developers and other non-administrator users can then perform the following activities to set up and use monitoring for their own projects: * xref:../../observability/monitoring/managing-metrics.adoc#setting-up-metrics-collection-for-user-defined-projects_managing-metrics[Deploy and monitor services]. * xref:../../observability/monitoring/managing-alerts.adoc#creating-alerting-rules-for-user-defined-projects_managing-alerts[Create and manage alerting rules]. * xref:../../observability/monitoring/managing-alerts.adoc#managing-alerts[Receive and manage alerts] for their projects. -* If granted the `user-workload-monitoring-config-edit` role, xref:../../observability/monitoring/managing-alerts.adoc#creating-alert-routing-for-user-defined-projects_managing-alerts[configure alert routing]. +* If granted the `user-workload-monitoring-config-edit` role, xref:../../observability/monitoring/managing-alerts.adoc#configuring-alert-routing-for-user-defined-projects_managing-alerts[configure alert routing]. * Use the {product-title} web console to xref:../../observability/monitoring/reviewing-monitoring-dashboards.adoc#reviewing-monitoring-dashboards-developer_reviewing-monitoring-dashboards[view dashboards]. -* xref:../../observability/monitoring/managing-metrics.adoc#querying-metrics-for-user-defined-projects-as-a-developer_managing-metrics[Query the collected metrics] by creating PromQL queries or using predefined queries. +* xref:../../observability/monitoring/managing-metrics.adoc#querying-metrics-for-user-defined-projects-with-mon-dashboard_managing-metrics[Query the collected metrics] by creating PromQL queries or using predefined queries. 
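+
+For instance, the deploy-and-monitor step in the list above typically starts with a `ServiceMonitor` resource in the user-defined project. The following minimal sketch uses illustrative values for the namespace, port name, and labels:
+
+[source,yaml]
+----
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  name: example-monitor
+  namespace: ns1
+spec:
+  endpoints:
+  - interval: 30s
+    port: web # name of the service port that exposes metrics
+  selector:
+    matchLabels:
+      app: example-app # must match the labels on the monitored service
+----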
diff --git a/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc b/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc
index 8bb5ab53fdf5..54baec8e1e2c 100644
--- a/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc
+++ b/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc
@@ -32,7 +32,14 @@ The configuration file is always defined under the `config.yaml` key in the conf
====
* Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in this reference are supported for configuration.
-For more information about supported configurations, see xref:../monitoring/configuring-the-monitoring-stack.adoc#maintenance-and-support_configuring-the-monitoring-stack[Maintenance and support for monitoring].
+For more information about supported configurations, see
+ifndef::openshift-dedicated,openshift-rosa[]
+xref:../../observability/monitoring/getting-started/maintenance-and-support-for-monitoring.adoc#maintenance-and-support-for-monitoring[Maintenance and support for monitoring].
+endif::openshift-dedicated,openshift-rosa[]
+ifdef::openshift-dedicated,openshift-rosa[]
+xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#maintenance-and-support_configuring-the-monitoring-stack[Maintenance and support for monitoring].
+endif::openshift-dedicated,openshift-rosa[]
+
* Configuring cluster monitoring is optional.
* If a configuration does not exist or is empty, default values are used.
* If the configuration has invalid YAML data, or if it contains unsupported or duplicated fields that bypassed early validation, the Cluster Monitoring Operator stops reconciling the resources and reports the `Degraded=True` status in the status conditions of the Operator.
diff --git a/observability/monitoring/configuring-core-platform-monitoring/_attributes b/observability/monitoring/configuring-core-platform-monitoring/_attributes
new file mode 120000
index 000000000000..20cc1dcb77bf
--- /dev/null
+++ b/observability/monitoring/configuring-core-platform-monitoring/_attributes
@@ -0,0 +1 @@
+../../_attributes/
\ No newline at end of file
diff --git a/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.adoc b/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.adoc
new file mode 100644
index 000000000000..048b092baaf3
--- /dev/null
+++ b/observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.adoc
@@ -0,0 +1,59 @@
+:_mod-docs-content-type: ASSEMBLY
+include::_attributes/common-attributes.adoc[]
+[id="configuring-alerts-and-notifications"]
+= Configuring alerts and notifications for core platform monitoring
+:context: configuring-alerts-and-notifications
+
+toc::[]
+
+You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata information.
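+
+As a sketch of attaching custom labels, the following configuration adds two illustrative external labels, `region` and `environment`, to every time series and alert that leaves the cluster:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cluster-monitoring-config
+  namespace: openshift-monitoring
+data:
+  config.yaml: |
+    prometheusK8s:
+      externalLabels:
+        region: eu-west-1 # illustrative values
+        environment: prod
+----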
+ +//Configuring external Alertmanager instances +include::modules/monitoring-configuring-external-alertmanagers.adoc[leveloffset=1,tags=**;CPM;!UWM] + +// Disabling the local Alertmanager +include::modules/monitoring-disabling-the-local-alertmanager.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* link:https://prometheus.io/docs/alerting/latest/alertmanager/[Alertmanager] (Prometheus documentation) +* xref:../../../observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.adoc#managing-alerts-as-an-administrator[Managing alerts as an Administrator] + +//Configuring secrets for Alertmanager +include::modules/monitoring-configuring-secrets-for-alertmanager.adoc[leveloffset=1] + +include::modules/monitoring-adding-a-secret-to-the-alertmanager-configuration.adoc[leveloffset=2,tags=**;CPM;!UWM] + +//Attaching additional labels to your time series and alerts +include::modules/monitoring-attaching-additional-labels-to-your-time-series-and-alerts.adoc[leveloffset=+1,tags=**;CPM;!UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.adoc#preparing-to-configure-the-monitoring-stack[Preparing to configure core platform monitoring stack] + +[id="configuring-alert-notifications_{context}"] +== Configuring alert notifications + +In {product-title} {product-version}, you can view firing alerts in the Alerting UI. You can configure Alertmanager to send notifications about default platform alerts by configuring alert receivers. + +[IMPORTANT] +==== +Alertmanager does not send notifications by default. It is strongly recommended that you configure Alertmanager to send notifications by configuring alert receivers through the web console or through the `alertmanager-main` secret.
+==== + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#sending-notifications-to-external-systems_key-concepts[Sending notifications to external systems] +* link:https://www.pagerduty.com/[PagerDuty] (PagerDuty official site) +* link:https://www.pagerduty.com/docs/guides/prometheus-integration-guide/[Prometheus Integration Guide] (PagerDuty official site) +* xref:../../../observability/monitoring/getting-started/maintenance-and-support-for-monitoring.adoc#support-version-matrix-for-monitoring-components_maintenance-and-support-for-monitoring[Support version matrix for monitoring components] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-alert-routing-for-user-defined-projects_preparing-to-configure-the-monitoring-stack-uwm[Enabling alert routing for user-defined projects] + +include::modules/monitoring-configuring-alert-routing-default-platform-alerts.adoc[leveloffset=+2] + +include::modules/monitoring-configuring-alert-routing-console.adoc[leveloffset=+2] + +include::modules/monitoring-configuring-different-alert-receivers-for-default-platform-alerts-and-user-defined-alerts.adoc[leveloffset=+2] diff --git a/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.adoc b/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.adoc new file mode 100644 index 000000000000..7d46aa6de349 --- /dev/null +++ b/observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.adoc @@ -0,0 +1,43 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="configuring-metrics"] += Configuring metrics for core platform monitoring +:context: configuring-metrics + +toc::[] + +Configure the collection of metrics to monitor how cluster components and your own workloads are performing. + +You can send ingested metrics to remote systems for long-term storage and add cluster ID labels to the metrics to identify the data coming from different clusters. 
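+
+As a brief sketch, remote write for core platform monitoring is configured under the `prometheusK8s` section of the `cluster-monitoring-config` config map, for example:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cluster-monitoring-config
+  namespace: openshift-monitoring
+data:
+  config.yaml: |
+    prometheusK8s:
+      remoteWrite:
+      - url: "https://remote-write-endpoint.example.com/api/v1/write" <1>
+----
+<1> Placeholder URL. Replace it with the URL of your remote write compatible endpoint.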
+ +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#understanding-metrics_key-concepts[Understanding metrics] + +// Configuring remote write storage +include::modules/monitoring-configuring-remote-write-storage.adoc[leveloffset=+1,tags=**;CPM;!UWM] + +include::modules/monitoring-supported-remote-write-authentication-settings.adoc[leveloffset=+2] + +include::modules/monitoring-example-remote-write-authentication-settings.adoc[leveloffset=+2,tags=**;CPM;!UWM] + +include::modules/monitoring-example-remote-write-queue-configuration.adoc[leveloffset=+2,tags=**;CPM;!UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../rest_api/monitoring_apis/prometheus-monitoring-coreos-com-v1.adoc#spec-remotewrite-2[Prometheus REST API reference for remote write] +* link:https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage[Setting up remote write compatible endpoints] (Prometheus documentation) +* link:https://prometheus.io/docs/practices/remote_write/#remote-write-tuning[Tuning remote write settings] (Prometheus documentation) +* xref:../../../nodes/pods/nodes-pods-secrets.adoc#nodes-pods-secrets-about_nodes-pods-secrets[Understanding secrets] + +//Creating cluster ID labels for metrics for core platform monitoring +include::modules/monitoring-creating-cluster-id-labels-for-metrics.adoc[leveloffset=+1,tags=**;CPM;!UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#adding-cluster-id-labels-to-metrics_key-concepts[Adding cluster ID labels to metrics] +* xref:../../../support/gathering-cluster-data.adoc#support-get-cluster-id_gathering-cluster-data[Obtaining your cluster ID] + diff --git a/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.adoc b/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.adoc new file mode 100644 index 000000000000..d4a40001bca6 --- /dev/null +++ b/observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.adoc @@ -0,0 +1,93 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="configuring-performance-and-scalability"] += Configuring performance and scalability for core platform monitoring +:context: configuring-performance-and-scalability + +toc::[] + +You can configure the monitoring stack to optimize the performance and scale of your clusters. The following documentation provides information about how to distribute the monitoring components and control the impact of the monitoring stack on CPU and memory resources. + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#about-performance-and-scalability_key-concepts[About performance and scalability] + +[id="controlling-placement-and-distribution-of-monitoing-components_{context}"] +== Controlling the placement and distribution of monitoring components + +You can move the monitoring stack components to specific nodes: + +* Use the `nodeSelector` constraint with labeled nodes to move any of the monitoring stack components to specific nodes. +* Assign tolerations to enable moving components to tainted nodes. + +By doing so, you control the placement and distribution of the monitoring components across a cluster. 
+ +By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies. + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#using-node-selectors-to-move-monitoring-components_key-concepts[Using node selectors to move monitoring components] + +include::modules/monitoring-moving-monitoring-components-to-different-nodes.adoc[leveloffset=+2,tags=**;CPM;!UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.adoc#preparing-to-configure-the-monitoring-stack[Preparing to configure core platform monitoring stack] +* xref:../../../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-updating_nodes-nodes-working[Understanding how to update labels on nodes] +* xref:../../../nodes/scheduling/nodes-scheduler-node-selectors.adoc#nodes-scheduler-node-selectors[Placing pods on specific nodes using node selectors] +* link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector[nodeSelector] (Kubernetes documentation) + +include::modules/monitoring-assigning-tolerations-to-monitoring-components.adoc[leveloffset=+2,tags=**;CPM;!UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.adoc#preparing-to-configure-the-monitoring-stack[Preparing to configure core platform monitoring stack] +* xref:../../../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations[Controlling pod placement using node taints] +* link:https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/[Taints and Tolerations] (Kubernetes documentation) + +// Setting the body size limit for metrics scraping +include::modules/monitoring-setting-the-body-size-limit-for-metrics-scraping.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* link:https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config[scrape_config configuration] (Prometheus documentation) + +[id="managing-cpu-and-memory-resources-for-monitoring-components_{context}"] +== Managing CPU and memory resources for monitoring components + +You can ensure that the containers that run monitoring components have enough CPU and memory resources by specifying values for resource limits and requests for those components. + +You can configure these limits and requests for core platform monitoring components in the `openshift-monitoring` namespace. 
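+
+The following sketch illustrates the general shape of such a configuration for the Alertmanager containers; the CPU and memory values are placeholders to adapt to your own capacity planning:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cluster-monitoring-config
+  namespace: openshift-monitoring
+data:
+  config.yaml: |
+    alertmanagerMain:
+      resources:
+        limits:
+          cpu: 500m <1>
+          memory: 1Gi
+        requests:
+          cpu: 200m
+          memory: 500Mi
+----
+<1> All resource values shown are placeholders. Adjust them to the capacity that your cluster and workloads require.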
+ +include::modules/monitoring-specifying-limits-and-requests-for-monitoring-components.adoc[leveloffset=+2,tags=**;CPM;!UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#about-specifying-limits-and-requests-for-monitoring-components_key-concepts[About specifying limits and requests] +* link:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits[Kubernetes requests and limits documentation] (Kubernetes documentation) + +// Choosing a metrics collection profile +include::modules/monitoring-choosing-a-metrics-collection-profile.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#configuring-metrics-collection-profiles_key-concepts[About metrics collection profiles] +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#viewing-a-list-of-available-metrics_accessing-metrics-as-an-administrator[Viewing a list of available metrics] +* xref:../../../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling[Enabling features using feature gates] + +//Configuring pod topology spread constraints for core platform monitoring +include::modules/monitoring-configuring-pod-topology-spread-constraints.adoc[leveloffset=1,tags=**;CPM;!UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#using-pod-topology-spread-constraints-for-monitoring_key-concepts[About pod topology spread constraints for monitoring] +* xref:../../../nodes/scheduling/nodes-scheduler-pod-topology-spread-constraints.adoc#nodes-scheduler-pod-topology-spread-constraints-about[Controlling pod placement by using pod topology spread constraints] +* link:https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/[Pod Topology Spread Constraints] (Kubernetes documentation) + + + diff --git a/observability/monitoring/configuring-core-platform-monitoring/images b/observability/monitoring/configuring-core-platform-monitoring/images new file mode 120000 index 000000000000..847b03ed0541 --- /dev/null +++ b/observability/monitoring/configuring-core-platform-monitoring/images @@ -0,0 +1 @@ +../../images/ \ No newline at end of file diff --git a/observability/monitoring/configuring-core-platform-monitoring/modules b/observability/monitoring/configuring-core-platform-monitoring/modules new file mode 120000 index 000000000000..36719b9de743 --- /dev/null +++ b/observability/monitoring/configuring-core-platform-monitoring/modules @@ -0,0 +1 @@ +../../modules/ \ No newline at end of file diff --git a/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.adoc b/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.adoc new file mode 100644 index 000000000000..f88fe56da028 --- /dev/null +++ b/observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.adoc @@ -0,0 +1,39 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] + +[id="preparing-to-configure-the-monitoring-stack"] += Preparing to configure core platform monitoring stack +:context: preparing-to-configure-the-monitoring-stack + +toc::[] + +The {product-title} installation program provides only a low 
number of configuration options before installation. Configuring most {product-title} framework components, including the cluster monitoring stack, happens after the installation. + +This section explains which monitoring components can be configured and how to prepare for configuring the monitoring stack. + +[IMPORTANT] +==== +* Not all configuration parameters for the monitoring stack are exposed. +Only the parameters and fields listed in the xref:../../../observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc#cluster-monitoring-operator-configuration-reference[Config map reference for the {cmo-full}] are supported for configuration. + +* The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in xref:../../../scalability_and_performance/recommended-performance-scale-practices/recommended-infrastructure-practices.adoc#scaling-cluster-monitoring-operator_recommended-infrastructure-practices[Scaling the {cmo-full}] and verify that you have sufficient resources. +==== + +// Configurable monitoring components +include::modules/monitoring-configurable-monitoring-components.adoc[leveloffset=+1,tags=**;CPM;!UWM] + +// Creating a cluster monitoring config map +include::modules/monitoring-creating-cluster-monitoring-configmap.adoc[leveloffset=+1] + +// Granting users permissions for core platform monitoring +include::modules/monitoring-granting-users-permissions-for-core-platform-monitoring.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.adoc#resources-reference-for-the-cluster-monitoring-operator_accessing-monitoring-apis-by-using-the-cli[Resources reference for the {cmo-full}] +* xref:../../../observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.adoc#cmo-services-resources_accessing-monitoring-apis-by-using-the-cli[CMO services resources] + +include::modules/monitoring-granting-user-permissions-using-the-web-console.adoc[leveloffset=+2] +include::modules/monitoring-granting-user-permissions-using-the-cli.adoc[leveloffset=+2] + diff --git a/observability/monitoring/configuring-core-platform-monitoring/snippets b/observability/monitoring/configuring-core-platform-monitoring/snippets new file mode 120000 index 000000000000..5a3f5add140e --- /dev/null +++ b/observability/monitoring/configuring-core-platform-monitoring/snippets @@ -0,0 +1 @@ +../../snippets/ \ No newline at end of file diff --git a/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.adoc b/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.adoc new file mode 100644 index 000000000000..cb70873c1dec --- /dev/null +++ b/observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.adoc @@ -0,0 +1,60 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="storing-and-recording-data"] += Storing and recording data for core platform monitoring +:context: storing-and-recording-data + +toc::[] + +Store and record your metrics and alerting data, configure logs to specify which activities are recorded, control how long Prometheus retains stored data, and set the maximum amount of disk space for the data. These actions help you protect your data and use them for troubleshooting. 
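+
+For example, persistent storage for the core platform Prometheus is requested through a `volumeClaimTemplate`, as in the following sketch. The storage class and size are placeholders:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cluster-monitoring-config
+  namespace: openshift-monitoring
+data:
+  config.yaml: |
+    prometheusK8s:
+      volumeClaimTemplate:
+        spec:
+          storageClassName: <storage_class> <1>
+          resources:
+            requests:
+              storage: 40Gi <2>
+----
+<1> Placeholder. Replace `<storage_class>` with the name of an available storage class.
+<2> Placeholder size. Set the amount of storage that your retention settings require.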
+ +// Configuring persistent storage +include::modules/monitoring-configuring-persistent-storage.adoc[leveloffset=+1] + +include::modules/monitoring-configuring-a-persistent-volume-claim.adoc[leveloffset=+2,tags=**;CPM;!UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Understanding persistent storage] +* link:https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims[PersistentVolumeClaims] (Kubernetes documentation) + +include::modules/monitoring-resizing-a-persistent-volume.adoc[leveloffset=+2,tags=**;CPM;!UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../scalability_and_performance/recommended-performance-scale-practices/recommended-infrastructure-practices.adoc#prometheus-database-storage-requirements_recommended-infrastructure-practices[Prometheus database storage requirements] +* xref:../../../storage/expanding-persistent-volumes.adoc#expanding-pvc-filesystem_expanding-persistent-volumes[Expanding persistent volume claims (PVCs) with a file system] + +// Modifying the retention time and size for Prometheus metrics data + +include::modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc[leveloffset=+1,tags=**;CPM;!UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#retention-time-and-size-for-prometheus-metrics-data_key-concepts[Retention time and size for Prometheus metrics] +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.adoc#preparing-to-configure-the-monitoring-stack[Preparing to configure core platform monitoring stack] +* xref:../../../scalability_and_performance/recommended-performance-scale-practices/recommended-infrastructure-practices.adoc#prometheus-database-storage-requirements_cluster-monitoring-operator[Prometheus database storage requirements] +* xref:../../../scalability_and_performance/optimization/optimizing-storage.adoc#optimizing-storage[Recommended configurable storage technology] +* xref:../../../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Understanding persistent storage] +* xref:../../../scalability_and_performance/optimization/optimizing-storage.adoc#optimizing-storage[Optimizing storage] + +// Configuring audit logs for Metrics Server +include::modules/monitoring-configuring-audit-logs-for-metrics-server.adoc[leveloffset=+1] + +// Setting log levels for monitoring components +include::modules/monitoring-setting-log-levels-for-monitoring-components.adoc[leveloffset=+1,tags=**;CPM;!UWM] + +// Enabling the query log file for Prometheus +include::modules/monitoring-setting-query-log-file-for-prometheus.adoc[leveloffset=+1,tags=**;CPM;!UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.adoc#preparing-to-configure-the-monitoring-stack[Preparing to configure core platform monitoring stack] + +// Enabling query logging for Thanos Querier +include::modules/monitoring-enabling-query-logging-for-thanos-querier.adoc[leveloffset=+1] + diff --git a/observability/monitoring/configuring-the-monitoring-stack.adoc b/observability/monitoring/configuring-the-monitoring-stack.adoc index 392e099ab39b..f3a72ed87742 100644 --- 
a/observability/monitoring/configuring-the-monitoring-stack.adoc +++ b/observability/monitoring/configuring-the-monitoring-stack.adoc @@ -2,7 +2,7 @@ [id="configuring-the-monitoring-stack"] = Configuring the monitoring stack include::_attributes/common-attributes.adoc[] -:context: configuring-the-monitoring-stack +:context: configuring-the-monitoring-stack toc::[] @@ -82,8 +82,8 @@ include::modules/monitoring-granting-users-permissions-for-core-platform-monitor .Additional resources * xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#granting-user-permissions-using-the-web-console_enabling-monitoring-for-user-defined-projects[Granting user permissions by using the web console] * xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#granting-user-permissions-using-the-cli_enabling-monitoring-for-user-defined-projects[Granting user permissions by using the CLI] -* xref:../../observability/monitoring/accessing-third-party-monitoring-apis.adoc#resources-reference-for-the-cluster-monitoring-operator_accessing-monitoring-apis-by-using-the-cli[Resources reference for the {cmo-full}] -* xref:../../observability/monitoring/accessing-third-party-monitoring-apis.adoc#cmo-services-resources[CMO services resources] +* xref:../../observability/monitoring/accessing-third-party-monitoring-apis.adoc#resources-reference-for-the-cluster-monitoring-operator_accessing-third-party-monitoring-apis[Resources reference for the {cmo-full}] +* xref:../../observability/monitoring/accessing-third-party-monitoring-apis.adoc#cmo-services-resources_accessing-third-party-monitoring-apis[CMO services resources] endif::openshift-dedicated,openshift-rosa[] @@ -103,7 +103,8 @@ ifndef::openshift-dedicated,openshift-rosa[] endif::openshift-dedicated,openshift-rosa[] // Configurable monitoring components -include::modules/monitoring-configurable-monitoring-components.adoc[leveloffset=+1] +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-configurable-monitoring-components.adoc[leveloffset=+1,tags=**;!CPM;UWM] // Moving monitoring components to different nodes include::modules/monitoring-using-node-selectors-to-move-monitoring-components.adoc[leveloffset=+1] @@ -119,8 +120,8 @@ endif::openshift-dedicated,openshift-rosa[] * xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#using-pod-topology-spread-constraints-for-monitoring_configuring-the-monitoring-stack[Using pod topology spread constraints for monitoring] * link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector[Kubernetes documentation about node selectors] -include::modules/monitoring-moving-monitoring-components-to-different-nodes.adoc[leveloffset=+2] - +// The module should only include monitoring for user-defined projects +include::modules/monitoring-moving-monitoring-components-to-different-nodes.adoc[leveloffset=+2,tags=**;!CPM;UWM] [role="_additional-resources"] .Additional resources @@ -136,7 +137,8 @@ endif::openshift-dedicated,openshift-rosa[] * See the link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector[Kubernetes documentation] for details on the `nodeSelector` constraint // Assigning tolerations to monitoring components -include::modules/monitoring-assigning-tolerations-to-monitoring-components.adoc[leveloffset=+1] +// The following module should only include monitoring for user-defined projects (UWM tags) 
+include::modules/monitoring-assigning-tolerations-to-monitoring-components.adoc[leveloffset=+1,tags=**;!CPM;UWM] [role="_additional-resources"] .Additional resources @@ -168,14 +170,25 @@ You can ensure that the containers that run monitoring components have enough CP You can configure these limits and requests for core platform monitoring components in the `openshift-monitoring` namespace and for the components that monitor user-defined projects in the `openshift-user-workload-monitoring` namespace. include::modules/monitoring-about-specifying-limits-and-requests-for-monitoring-components.adoc[leveloffset=+2] -include::modules/monitoring-specifying-limits-and-requests-for-monitoring-components.adoc[leveloffset=+2] + +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-specifying-limits-and-requests-for-monitoring-components.adoc[leveloffset=+2,tags=**;!CPM;UWM] // Configuring persistent storage include::modules/monitoring-configuring-persistent-storage.adoc[leveloffset=+1] -include::modules/monitoring-configuring-a-persistent-volume-claim.adoc[leveloffset=+2] + +// Configuring a persistent volume claim +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-configuring-a-persistent-volume-claim.adoc[leveloffset=+2,tags=**;!CPM;UWM] + +[role="_additional-resources"] +.Additional resources +* link:https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims[PersistentVolumeClaims](Kubernetes documentation about how to specify `volumeClaimTemplate`) ifndef::openshift-dedicated,openshift-rosa[] -include::modules/monitoring-resizing-a-persistent-volume.adoc[leveloffset=+2] +// Resizing a persistent volume +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-resizing-a-persistent-volume.adoc[leveloffset=+2,tags=**;!CPM;UWM] [role="_additional-resources"] .Additional resources @@ -183,7 +196,14 @@ include::modules/monitoring-resizing-a-persistent-volume.adoc[leveloffset=+2] * xref:../../storage/expanding-persistent-volumes.adoc#expanding-pvc-filesystem_expanding-persistent-volumes[Expanding persistent volume claims (PVCs) with a file system] endif::openshift-dedicated,openshift-rosa[] -include::modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc[leveloffset=+2] +// The retention time and size for Prometheus metrics data +// This section will be moved in the future PR. 
Therefore, some of the repetition in the introduction for the following procedure modules does not matter for the time being +include::modules/monitoring-retention-time-and-size-for-prometheus-metrics-data.adoc[leveloffset=+2] + +// Modifying the retention time and size for Prometheus metrics data +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc[leveloffset=+2,tags=**;!CPM;UWM] + include::modules/monitoring-modifying-the-retention-time-for-thanos-ruler-metrics-data.adoc[leveloffset=+2] [role="_additional-resources"] @@ -201,10 +221,20 @@ ifdef::openshift-dedicated,openshift-rosa[] endif::openshift-dedicated,openshift-rosa[] // Configuring remote write storage for Prometheus -include::modules/monitoring-configuring-remote-write-storage.adoc[leveloffset=+1] + +// Configuring remote write storage +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-configuring-remote-write-storage.adoc[leveloffset=+1,tags=**;!CPM;UWM] + include::modules/monitoring-supported-remote-write-authentication-settings.adoc[leveloffset=+2] -include::modules/monitoring-example-remote-write-authentication-settings.adoc[leveloffset=+2] -include::modules/monitoring-example-remote-write-queue-configuration.adoc[leveloffset=+2] + +// Example remote write authentication settings +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-example-remote-write-authentication-settings.adoc[leveloffset=+2,tags=**;!CPM;UWM] + +// Example remote write queue configuration +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-example-remote-write-queue-configuration.adoc[leveloffset=+2,tags=**;!CPM;UWM] [role="_additional-resources"] .Additional resources @@ -220,7 +250,10 @@ endif::openshift-dedicated,openshift-rosa[] // Configuring labels for outgoing metrics include::modules/monitoring-adding-cluster-id-labels-to-metrics.adoc[leveloffset=+1] -include::modules/monitoring-creating-cluster-id-labels-for-metrics.adoc[leveloffset=+2] + +// Creating cluster ID labels for metrics +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-creating-cluster-id-labels-for-metrics.adoc[leveloffset=+2,tags=**;!CPM;UWM] [role="_additional-resources"] .Additional resources @@ -248,8 +281,8 @@ include::modules/monitoring-choosing-a-metrics-collection-profile.adoc[leveloffs * See xref:../../nodes/clusters/nodes-cluster-enabling-features.adoc[Enabling features using feature gates] for steps to enable Technology Preview features. 
endif::openshift-dedicated,openshift-rosa[] -// Managing scrape and evaluation intervals and enforced limits for user-defined projects -include::modules/monitoring-limiting-scrape-samples-in-user-defined-projects.adoc[leveloffset=+1] +// Controlling the impact of unbound metrics attributes in user-defined projects +include::modules/monitoring-controlling-the-impact-of-unbound-attributes-in-user-defined-projects.adoc[leveloffset=+1] include::modules/monitoring-setting-scrape-and-evaluation-intervals-limits-for-user-defined-projects.adoc[leveloffset=+2] ifndef::openshift-dedicated,openshift-rosa[] include::modules/monitoring-creating-scrape-sample-alerts.adoc[leveloffset=+2] @@ -262,15 +295,20 @@ include::modules/monitoring-creating-scrape-sample-alerts.adoc[leveloffset=+2] * See xref:../../observability/monitoring/troubleshooting-monitoring-issues.adoc#determining-why-prometheus-is-consuming-disk-space_troubleshooting-monitoring-issues[Determining why Prometheus is consuming a lot of disk space] for steps to query which metrics have the highest number of scrape samples. endif::openshift-dedicated,openshift-rosa[] -//Configuring external alertmanagers -include::modules/monitoring-configuring-external-alertmanagers.adoc[leveloffset=1] +//Configuring external Alertmanager instances +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-configuring-external-alertmanagers.adoc[leveloffset=1,tags=**;!CPM;UWM] //Configuring secrets for Alertmanager include::modules/monitoring-configuring-secrets-for-alertmanager.adoc[leveloffset=1] -include::modules/monitoring-adding-a-secret-to-the-alertmanager-configuration.adoc[leveloffset=2] + +// Adding a secret to the Alertmanager configuration +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-adding-a-secret-to-the-alertmanager-configuration.adoc[leveloffset=2,tags=**;!CPM;UWM] //Attaching additional labels to your time series and alerts -include::modules/monitoring-attaching-additional-labels-to-your-time-series-and-alerts.adoc[leveloffset=+1] +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-attaching-additional-labels-to-your-time-series-and-alerts.adoc[leveloffset=+1,tags=**;!CPM;UWM] ifndef::openshift-dedicated,openshift-rosa[] [role="_additional-resources"] @@ -293,13 +331,16 @@ endif::openshift-dedicated,openshift-rosa[] * link:https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/[Kubernetes Pod Topology Spread Constraints documentation] // Configuring pod topology spread constraints -include::modules/monitoring-configuring-pod-topology-spread-constraints.adoc[leveloffset=2] +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-configuring-pod-topology-spread-constraints.adoc[leveloffset=2,tags=**;!CPM;UWM] // Setting log levels for monitoring components -include::modules/monitoring-setting-log-levels-for-monitoring-components.adoc[leveloffset=+1] +// The following module should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-setting-log-levels-for-monitoring-components.adoc[leveloffset=+1,tags=**;!CPM;UWM] -// Setting query log for Prometheus -include::modules/monitoring-setting-query-log-file-for-prometheus.adoc[leveloffset=+1] +// Enabling the query log file for Prometheus +// The following module 
should only include monitoring for user-defined projects (UWM tags) +include::modules/monitoring-setting-query-log-file-for-prometheus.adoc[leveloffset=+1,tags=**;!CPM;UWM] ifndef::openshift-dedicated,openshift-rosa[] [role="_additional-resources"] @@ -315,14 +356,7 @@ include::modules/monitoring-enabling-query-logging-for-thanos-querier.adoc[level [role="_additional-resources"] .Additional resources -* See xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#preparing-to-configure-the-monitoring-stack[Preparing to configure the monitoring stack] for steps to create monitoring config maps. -endif::openshift-dedicated,openshift-rosa[] - -[role="_additional-resources"] -.Additional resources - -ifndef::openshift-dedicated,openshift-rosa[] -* See xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#preparing-to-configure-the-monitoring-stack[Preparing to configure the monitoring stack] for steps to create monitoring config maps. +* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#preparing-to-configure-the-monitoring-stack[Preparing to configure the monitoring stack] endif::openshift-dedicated,openshift-rosa[] // Disabling the local Alertmanager diff --git a/observability/monitoring/configuring-user-workload-monitoring/_attributes b/observability/monitoring/configuring-user-workload-monitoring/_attributes new file mode 120000 index 000000000000..20cc1dcb77bf --- /dev/null +++ b/observability/monitoring/configuring-user-workload-monitoring/_attributes @@ -0,0 +1 @@ +../../_attributes/ \ No newline at end of file diff --git a/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.adoc b/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.adoc new file mode 100644 index 000000000000..0d00011ae10f --- /dev/null +++ b/observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.adoc @@ -0,0 +1,60 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="configuring-alerts-and-notifications-uwm"] += Configuring alerts and notifications for user workload monitoring +:context: configuring-alerts-and-notifications-uwm + +toc::[] + +You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata information. 
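+
+For user-defined projects, custom labels are attached through the `user-workload-monitoring-config` config map instead, as in the following sketch. The `team: frontend` label is an illustrative placeholder:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: user-workload-monitoring-config
+  namespace: openshift-user-workload-monitoring
+data:
+  config.yaml: |
+    prometheus:
+      externalLabels:
+        team: frontend <1>
+----
+<1> Illustrative placeholder label. Replace it with the labels that you want to attach to all time series and alerts for user-defined projects.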
+ +//Configuring external Alertmanager instances +include::modules/monitoring-configuring-external-alertmanagers.adoc[leveloffset=1,tags=**;!CPM;UWM] + +//Configuring secrets for Alertmanager +include::modules/monitoring-configuring-secrets-for-alertmanager.adoc[leveloffset=1] + +include::modules/monitoring-adding-a-secret-to-the-alertmanager-configuration.adoc[leveloffset=2,tags=**;!CPM;UWM] + +//Attaching additional labels to your time series and alerts +include::modules/monitoring-attaching-additional-labels-to-your-time-series-and-alerts.adoc[leveloffset=+1,tags=**;!CPM;UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] + +[id="configuring-alert-notifications_{context}"] +== Configuring alert notifications + +In {product-title}, an administrator can enable alert routing for user-defined projects with one of the following methods: + +* Use the default platform Alertmanager instance. +* Use a separate Alertmanager instance only for user-defined projects. + +Developers and other users with the `alert-routing-edit` cluster role can configure custom alert notifications for their user-defined projects by configuring alert receivers. + +[NOTE] +==== +Review the following limitations of alert routing for user-defined projects: + +* User-defined alert routing is scoped to the namespace in which the resource is defined. For example, a routing configuration in namespace `ns1` only applies to `PrometheusRules` resources in the same namespace. + +* When a namespace is excluded from user-defined monitoring, `AlertmanagerConfig` resources in the namespace cease to be part of the Alertmanager configuration. 
+==== + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#understanding-alert-routing-for-user-defined-projects_key-concepts[Understanding alert routing for user-defined projects] +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#sending-notifications-to-external-systems_key-concepts[Sending notifications to external systems] +* link:https://www.pagerduty.com/[PagerDuty] (PagerDuty official site) +* link:https://www.pagerduty.com/docs/guides/prometheus-integration-guide/[Prometheus Integration Guide] (PagerDuty official site) +* xref:../../../observability/monitoring/getting-started/maintenance-and-support-for-monitoring.adoc#support-version-matrix-for-monitoring-components_maintenance-and-support-for-monitoring[Support version matrix for monitoring components] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-alert-routing-for-user-defined-projects_preparing-to-configure-the-monitoring-stack-uwm[Enabling alert routing for user-defined projects] + +include::modules/monitoring-configuring-alert-routing-for-user-defined-projects.adoc[leveloffset=+2] + +include::modules/monitoring-configuring-alert-routing-user-defined-alerts-secret.adoc[leveloffset=+2] + +include::modules/monitoring-configuring-different-alert-receivers-for-default-platform-alerts-and-user-defined-alerts.adoc[leveloffset=+2] diff --git a/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.adoc b/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.adoc new file mode 100644 index 000000000000..9037e8e5adfb --- /dev/null +++ b/observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.adoc @@ -0,0 +1,60 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="configuring-metrics-uwm"] += Configuring metrics for user workload monitoring +:context: configuring-metrics-uwm + +toc::[] + +Configure the collection of metrics to monitor how cluster components and your own workloads are performing. + +You can send ingested metrics to remote systems for long-term storage and add cluster ID labels to the metrics to identify the data coming from different clusters. 
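+
+Metrics collection for your own services relies on `ServiceMonitor` resources, which are covered later in this assembly. As a brief orientation, a minimal sketch looks like the following; the names, namespace, port, and label selector are placeholders for your own service:
+
+[source,yaml]
+----
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  name: prometheus-example-monitor <1>
+  namespace: ns1 <1>
+spec:
+  endpoints:
+  - interval: 30s
+    port: web <1>
+    scheme: http
+  selector:
+    matchLabels:
+      app: prometheus-example-app <1>
+----
+<1> Placeholder values. Replace them with the name, user-defined project, service port name, and labels of the service to monitor.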
+ +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#understanding-metrics_key-concepts[Understanding metrics] + +// Configuring remote write storage +include::modules/monitoring-configuring-remote-write-storage.adoc[leveloffset=+1,tags=**;!CPM;UWM] + +include::modules/monitoring-supported-remote-write-authentication-settings.adoc[leveloffset=+2] + +include::modules/monitoring-example-remote-write-authentication-settings.adoc[leveloffset=+2,tags=**;!CPM;UWM] + +include::modules/monitoring-example-remote-write-queue-configuration.adoc[leveloffset=+2,tags=**;!CPM;UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../rest_api/monitoring_apis/prometheus-monitoring-coreos-com-v1.adoc#spec-remotewrite-2[Prometheus REST API reference for remote write] +* link:https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage[Setting up remote write compatible endpoints] (Prometheus documentation) +* link:https://prometheus.io/docs/practices/remote_write/#remote-write-tuning[Tuning remote write settings] (Prometheus documentation) +* xref:../../../nodes/pods/nodes-pods-secrets.adoc#nodes-pods-secrets-about_nodes-pods-secrets[Understanding secrets] + +// Creating cluster ID labels for metrics for monitoring of user-defined projects +include::modules/monitoring-creating-cluster-id-labels-for-metrics.adoc[leveloffset=+1,tags=**;!CPM;UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#adding-cluster-id-labels-to-metrics_key-concepts[Adding cluster ID labels to metrics] +* xref:../../../support/gathering-cluster-data.adoc#support-get-cluster-id_gathering-cluster-data[Obtaining your cluster ID] + +// Setting up metrics collection for user-defined projects + +include::modules/monitoring-setting-up-metrics-collection-for-user-defined-projects.adoc[leveloffset=+1] + +include::modules/monitoring-deploying-a-sample-service.adoc[leveloffset=+2] + +include::modules/monitoring-specifying-how-a-service-is-monitored.adoc[leveloffset=+2] + +include::modules/monitoring-example-service-endpoint-authentication-settings.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] +* link:https://access.redhat.com/articles/6675491[Scrape Prometheus metrics using TLS in ServiceMonitor configuration] (Red{nbsp}Hat Customer Portal article) +* xref:../../../rest_api/monitoring_apis/podmonitor-monitoring-coreos-com-v1.adoc#podmonitor-monitoring-coreos-com-v1[PodMonitor API] +* xref:../../../rest_api/monitoring_apis/servicemonitor-monitoring-coreos-com-v1.adoc#servicemonitor-monitoring-coreos-com-v1[ServiceMonitor API] diff --git a/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.adoc b/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.adoc new file mode 100644 index 000000000000..d0c88d5e1c79 --- /dev/null +++ b/observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.adoc @@ -0,0 +1,98 @@ +:_mod-docs-content-type: ASSEMBLY 
+include::_attributes/common-attributes.adoc[] +[id="configuring-performance-and-scalability-uwm"] += Configuring performance and scalability for user workload monitoring +:context: configuring-performance-and-scalability-uwm + +toc::[] + +You can configure the monitoring stack to optimize the performance and scale of your clusters. The following documentation provides information about how to distribute the monitoring components and control the impact of the monitoring stack on CPU and memory resources. + +[id="controlling-placement-and-distribution-of-monitoring-components_{context}"] +== Controlling the placement and distribution of monitoring components + +You can move the monitoring stack components to specific nodes: + +* Use the `nodeSelector` constraint with labeled nodes to move any of the monitoring stack components to specific nodes. +* Assign tolerations to enable moving components to tainted nodes. + +By doing so, you control the placement and distribution of the monitoring components across a cluster. + +By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies. + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#using-node-selectors-to-move-monitoring-components_key-concepts[Using node selectors to move monitoring components] + +include::modules/monitoring-moving-monitoring-components-to-different-nodes.adoc[leveloffset=+2,tags=**;!CPM;UWM] + +[role="_additional-resources"] +.Additional resources +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] +* xref:../../../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-updating_nodes-nodes-working[Understanding how to update labels on nodes] +* xref:../../../nodes/scheduling/nodes-scheduler-node-selectors.adoc#nodes-scheduler-node-selectors[Placing pods on specific nodes using node selectors] +* link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector[nodeSelector] (Kubernetes documentation) + +include::modules/monitoring-assigning-tolerations-to-monitoring-components.adoc[leveloffset=+2,tags=**;!CPM;UWM] + +[role="_additional-resources"] +.Additional resources +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] +* xref:../../../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations[Controlling pod placement using node taints] +* link:https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/[Taints and Tolerations] (Kubernetes documentation) + +[id="managing-cpu-and-memory-resources-for-monitoring-components_{context}"] +== Managing CPU and memory resources for monitoring components + +You can ensure that the containers that run monitoring components have enough CPU and memory resources by specifying values for resource limits and requests for those components.
+ +You can configure these limits and requests for monitoring components that monitor user-defined projects in the `openshift-user-workload-monitoring` namespace. + +include::modules/monitoring-specifying-limits-and-requests-for-monitoring-components.adoc[leveloffset=+2,tags=**;!CPM;UWM] + +[role="_additional-resources"] +.Additional resources +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#about-specifying-limits-and-requests-for-monitoring-components_key-concepts[About specifying limits and requests for monitoring components] +* link:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits[Kubernetes requests and limits documentation] (Kubernetes documentation) + +[id="controlling-the-impact-of-unbound-attributes-in-user-defined-projects_{context}"] +== Controlling the impact of unbound metrics attributes in user-defined projects + +ifndef::openshift-dedicated,openshift-rosa[] +Cluster administrators +endif::openshift-dedicated,openshift-rosa[] +ifdef::openshift-dedicated,openshift-rosa[] +A `dedicated-admin` +endif::openshift-dedicated,openshift-rosa[] +can use the following measures to control the impact of unbound metrics attributes in user-defined projects: + +* Limit the number of samples that can be accepted per target scrape in user-defined projects +* Limit the number of scraped labels, the length of label names, and the length of label values +* Configure the intervals between consecutive scrapes and between Prometheus rule evaluations +* Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped + +[NOTE] +==== +Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. 
+==== + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#controlling-the-impact-of-unbound-attributes-in-user-defined-projects_key-concepts[Controlling the impact of unbound metrics attributes in user-defined projects] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] +* xref:../../../observability/monitoring/troubleshooting-monitoring-issues.adoc#determining-why-prometheus-is-consuming-disk-space_troubleshooting-monitoring-issues[Determining why Prometheus is consuming a lot of disk space] + +include::modules/monitoring-setting-scrape-and-evaluation-intervals-limits-for-user-defined-projects.adoc[leveloffset=+2] +include::modules/monitoring-creating-scrape-sample-alerts.adoc[leveloffset=+2] + +//Configuring pod topology spread constraints for monitoring of user-defined projects +include::modules/monitoring-configuring-pod-topology-spread-constraints.adoc[leveloffset=1,tags=**;!CPM;UWM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#using-pod-topology-spread-constraints-for-monitoring_key-concepts[About pod topology spread constraints for monitoring] +* xref:../../../nodes/scheduling/nodes-scheduler-pod-topology-spread-constraints.adoc#nodes-scheduler-pod-topology-spread-constraints-about[Controlling pod placement by using pod topology spread constraints] +* link:https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/[Pod Topology Spread Constraints] (Kubernetes documentation) \ No newline at end of file diff --git a/observability/monitoring/configuring-user-workload-monitoring/images b/observability/monitoring/configuring-user-workload-monitoring/images new file mode 120000 index 000000000000..847b03ed0541 --- /dev/null +++ b/observability/monitoring/configuring-user-workload-monitoring/images @@ -0,0 +1 @@ +../../images/ \ No newline at end of file diff --git a/observability/monitoring/configuring-user-workload-monitoring/modules b/observability/monitoring/configuring-user-workload-monitoring/modules new file mode 120000 index 000000000000..36719b9de743 --- /dev/null +++ b/observability/monitoring/configuring-user-workload-monitoring/modules @@ -0,0 +1 @@ +../../modules/ \ No newline at end of file diff --git a/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc b/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc new file mode 100644 index 000000000000..52ec4036ad8d --- /dev/null +++ b/observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc @@ -0,0 +1,76 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="preparing-to-configure-the-monitoring-stack-uwm"] += Preparing to configure the user workload monitoring stack +:context: preparing-to-configure-the-monitoring-stack-uwm + +toc::[] + +This section explains which user-defined monitoring components can be configured, how to enable user workload monitoring, and how to prepare for configuring the user workload monitoring stack. 
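+
+As described later in this assembly, you enable monitoring for user-defined projects by setting `enableUserWorkload: true` in the `cluster-monitoring-config` config map, for example:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cluster-monitoring-config
+  namespace: openshift-monitoring
+data:
+  config.yaml: |
+    enableUserWorkload: true
+----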
+ +[IMPORTANT] +==== +* Not all configuration parameters for the monitoring stack are exposed. +Only the parameters and fields listed in the xref:../../../observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc#cluster-monitoring-operator-configuration-reference[Config map reference for the {cmo-full}] are supported for configuration. + +* The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in xref:../../../scalability_and_performance/recommended-performance-scale-practices/recommended-infrastructure-practices.adoc#scaling-cluster-monitoring-operator_recommended-infrastructure-practices[Scaling the {cmo-full}] and verify that you have sufficient resources. +==== + +// Configurable monitoring components +include::modules/monitoring-configurable-monitoring-components.adoc[leveloffset=+1,tags=**;!CPM;UWM] + +// Enabling monitoring for user-defined projects +[id="enabling-monitoring-for-user-defined-projects-uwm_{context}"] +== Enabling monitoring for user-defined projects + +In {product-title}, you can enable monitoring for user-defined projects in addition to the default platform monitoring. You can monitor your own projects in {product-title} without the need for an additional monitoring solution. Using this feature centralizes monitoring for core platform components and user-defined projects. + +include::snippets/monitoring-custom-prometheus-note.adoc[] + +include::modules/monitoring-enabling-monitoring-for-user-defined-projects.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/getting-started/user-workload-monitoring-first-steps.adoc#user-workload-monitoring-first-steps[User workload monitoring first steps] + +include::modules/monitoring-granting-users-permission-to-configure-monitoring-for-user-defined-projects.adoc[leveloffset=+2] + +// Enabling alert routing for user-defined projects +include::modules/monitoring-enabling-alert-routing-for-user-defined-projects.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#understanding-alert-routing-for-user-defined-projects_key-concepts[Understanding alert routing for user-defined projects] + +// Enabling the platform Alertmanager instance for user-defined alert routing +ifndef::openshift-dedicated,openshift-rosa[] +include::modules/monitoring-enabling-the-platform-alertmanager-instance-for-user-defined-alert-routing.adoc[leveloffset=+2] +endif::openshift-dedicated,openshift-rosa[] + +include::modules/monitoring-enabling-a-separate-alertmanager-instance-for-user-defined-alert-routing.adoc[leveloffset=+2] +include::modules/monitoring-granting-users-permission-to-configure-alert-routing-for-user-defined-projects.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +xref:../../../observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.adoc#configuring-alert-notifications_configuring-alerts-and-notifications-uwm[Configuring alert notifications] + +// Granting users permissions for monitoring for user-defined projects +include::modules/monitoring-granting-users-permission-to-monitor-user-defined-projects.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources +* 
xref:../../../observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.adoc#cmo-services-resources_accessing-monitoring-apis-by-using-the-cli[CMO services resources] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#granting-users-permission-to-configure-monitoring-for-user-defined-projects_preparing-to-configure-the-monitoring-stack-uwm[Granting users permission to configure monitoring for user-defined projects] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#granting-users-permission-to-configure-alert-routing-for-user-defined-projects_preparing-to-configure-the-monitoring-stack-uwm[Granting users permission to configure alert routing for user-defined projects] + +include::modules/monitoring-granting-user-permissions-using-the-web-console.adoc[leveloffset=+2] +include::modules/monitoring-granting-user-permissions-using-the-cli.adoc[leveloffset=+2] + +// Excluding a user-defined project from monitoring +include::modules/monitoring-excluding-a-user-defined-project-from-monitoring.adoc[leveloffset=+1] + +// Disabling monitoring for user-defined projects +include::modules/monitoring-disabling-monitoring-for-user-defined-projects.adoc[leveloffset=+1] \ No newline at end of file diff --git a/observability/monitoring/configuring-user-workload-monitoring/snippets b/observability/monitoring/configuring-user-workload-monitoring/snippets new file mode 120000 index 000000000000..5a3f5add140e --- /dev/null +++ b/observability/monitoring/configuring-user-workload-monitoring/snippets @@ -0,0 +1 @@ +../../snippets/ \ No newline at end of file diff --git a/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.adoc b/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.adoc new file mode 100644 index 000000000000..ee27889c5663 --- /dev/null +++ b/observability/monitoring/configuring-user-workload-monitoring/storing-and-recording-data-uwm.adoc @@ -0,0 +1,58 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="storing-and-recording-data-uwm"] += Storing and recording data for user workload monitoring +:context: storing-and-recording-data-uwm + +toc::[] + +Store and record your metrics and alerting data, configure logs to specify which activities are recorded, control how long Prometheus retains stored data, and set the maximum amount of disk space for the data. These actions help you protect your data and use them for troubleshooting. 
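+
+For example, the retention time and maximum on-disk size for the Prometheus instance that monitors user-defined projects can be set in the `user-workload-monitoring-config` config map, as in the following sketch. The values shown are placeholders:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: user-workload-monitoring-config
+  namespace: openshift-user-workload-monitoring
+data:
+  config.yaml: |
+    prometheus:
+      retention: 24h <1>
+      retentionSize: 10GB <1>
+----
+<1> Placeholder values. Adjust the retention time and size limit to your storage capacity and retention requirements.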
+
+// Configuring persistent storage
+include::modules/monitoring-configuring-persistent-storage.adoc[leveloffset=+1]
+
+include::modules/monitoring-configuring-a-persistent-volume-claim.adoc[leveloffset=+2,tags=**;!CPM;UWM]
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../../../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Understanding persistent storage]
+* link:https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims[PersistentVolumeClaims] (Kubernetes documentation)
+
+include::modules/monitoring-resizing-a-persistent-volume.adoc[leveloffset=+2,tags=**;!CPM;UWM]
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../../../scalability_and_performance/recommended-performance-scale-practices/recommended-infrastructure-practices.adoc#prometheus-database-storage-requirements_recommended-infrastructure-practices[Prometheus database storage requirements]
+* xref:../../../storage/expanding-persistent-volumes.adoc#expanding-pvc-filesystem_expanding-persistent-volumes[Expanding persistent volume claims (PVCs) with a file system]
+
+// Modifying the retention time and size
+
+include::modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc[leveloffset=+1,tags=**;!CPM;UWM]
+
+include::modules/monitoring-modifying-the-retention-time-for-thanos-ruler-metrics-data.adoc[leveloffset=+2]
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#retention-time-and-size-for-prometheus-metrics-data_key-concepts[Retention time and size for Prometheus metrics]
+* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects]
+* xref:../../../scalability_and_performance/recommended-performance-scale-practices/recommended-infrastructure-practices.adoc#prometheus-database-storage-requirements_cluster-monitoring-operator[Prometheus database storage requirements]
+* xref:../../../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Understanding persistent storage]
+* xref:../../../scalability_and_performance/optimization/optimizing-storage.adoc#optimizing-storage[Optimizing storage]
+
+// Setting log levels for monitoring components
+include::modules/monitoring-setting-log-levels-for-monitoring-components.adoc[leveloffset=+1,tags=**;!CPM;UWM]
+
+// Enabling the query log file for Prometheus
+include::modules/monitoring-setting-query-log-file-for-prometheus.adoc[leveloffset=+1,tags=**;!CPM;UWM]
+
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects]
+
diff --git a/observability/monitoring/enabling-alert-routing-for-user-defined-projects.adoc b/observability/monitoring/enabling-alert-routing-for-user-defined-projects.adoc
index c03b06ffafa9..83e607e6451c 100644
--- a/observability/monitoring/enabling-alert-routing-for-user-defined-projects.adoc
+++ 
b/observability/monitoring/enabling-alert-routing-for-user-defined-projects.adoc
@@ -6,14 +6,8 @@ include::_attributes/common-attributes.adoc[]
 
 toc::[]
 
-[role="_abstract"]
-ifndef::openshift-dedicated,openshift-rosa[]
-In {product-title} {product-version}, a cluster administrator can enable alert routing for user-defined projects.
-endif::openshift-dedicated,openshift-rosa[]
-ifdef::openshift-dedicated,openshift-rosa[]
-In {product-title}, a `dedicated-admin` can enable alert routing for user-defined projects.
-endif::openshift-dedicated,openshift-rosa[]
-This process consists of two general steps:
+In {product-title}, an administrator can enable alert routing for user-defined projects.
+This process consists of the following steps:
 
 ifndef::openshift-dedicated,openshift-rosa[]
 * Enable alert routing for user-defined projects to use the default platform Alertmanager instance or, optionally, a separate Alertmanager instance only for user-defined projects.
@@ -45,4 +39,4 @@ include::modules/monitoring-granting-users-permission-to-configure-alert-routing
 ifndef::openshift-dedicated,openshift-rosa[]
 * xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user defined projects]
 endif::openshift-dedicated,openshift-rosa[]
-* xref:../../observability/monitoring/managing-alerts.adoc#creating-alert-routing-for-user-defined-projects_managing-alerts[Creating alert routing for user-defined projects]
+* xref:../../observability/monitoring/managing-alerts.adoc#configuring-alert-routing-for-user-defined-projects_managing-alerts[Configuring alert routing for user-defined projects]
diff --git a/observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc b/observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc
index 64788eeaeadb..8101c759eadb 100644
--- a/observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc
+++ b/observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc
@@ -1,11 +1,15 @@
 :_mod-docs-content-type: ASSEMBLY
 [id="enabling-monitoring-for-user-defined-projects"]
-= Enabling monitoring for user-defined projects
+= Enabling user workload monitoring
 include::_attributes/common-attributes.adoc[]
 :context: enabling-monitoring-for-user-defined-projects
 
 toc::[]
 
+// Converting the following short assembly introduction into a module, because this assembly will be deleted and divided into modules
+// Introduction enabling monitoring for user-defined projects
+//include::modules/monitoring-intro-enabling-monitoring-for-user-defined-projects.adoc[leveloffset=+1]
+
 In {product-title}, you can enable monitoring for user-defined projects in addition to the default platform monitoring. You can monitor your own projects in {product-title} without the need for an additional monitoring solution. Using this feature centralizes monitoring for core platform components and user-defined projects.
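+
+As a minimal sketch, enabling monitoring for user-defined projects comes down to setting `enableUserWorkload: true` in the `cluster-monitoring-config` config map; the module included below documents the full procedure:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cluster-monitoring-config
+  namespace: openshift-monitoring
+data:
+  config.yaml: |
+    enableUserWorkload: true <1>
+----
+<1> When set to `true`, the {cmo-short} deploys the components that monitor user-defined projects in the `openshift-user-workload-monitoring` namespace.
+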
include::snippets/monitoring-custom-prometheus-note.adoc[] @@ -25,7 +29,7 @@ include::modules/monitoring-granting-users-permission-to-monitor-user-defined-pr [role="_additional-resources"] .Additional resources -* xref:../../observability/monitoring/accessing-third-party-monitoring-apis.adoc#cmo-services-resources[CMO services resources] +* xref:../../observability/monitoring/accessing-third-party-monitoring-apis.adoc#cmo-services-resources_accessing-third-party-monitoring-apis[CMO services resources] include::modules/monitoring-granting-user-permissions-using-the-web-console.adoc[leveloffset=+2] include::modules/monitoring-granting-user-permissions-using-the-cli.adoc[leveloffset=+2] @@ -33,14 +37,6 @@ include::modules/monitoring-granting-user-permissions-using-the-cli.adoc[levelof // Granting users permission to configure monitoring for user-defined projects include::modules/monitoring-granting-users-permission-to-configure-monitoring-for-user-defined-projects.adoc[leveloffset=+1] -// Accessing metrics from outside the cluster for custom applications -include::modules/accessing-metrics-outside-cluster.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects_enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects] - // Excluding a user-defined project from monitoring include::modules/monitoring-excluding-a-user-defined-project-from-monitoring.adoc[leveloffset=+1] diff --git a/observability/monitoring/getting-started/_attributes b/observability/monitoring/getting-started/_attributes new file mode 120000 index 000000000000..20cc1dcb77bf --- /dev/null +++ b/observability/monitoring/getting-started/_attributes @@ -0,0 +1 @@ +../../_attributes/ \ No newline at end of file diff --git a/observability/monitoring/getting-started/core-platform-monitoring-first-steps.adoc b/observability/monitoring/getting-started/core-platform-monitoring-first-steps.adoc new file mode 100644 index 000000000000..2cb5d6128a8b --- /dev/null +++ b/observability/monitoring/getting-started/core-platform-monitoring-first-steps.adoc @@ -0,0 +1,58 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="core-platform-monitoring-first-steps"] += Core platform monitoring first steps +:context: core-platform-monitoring-first-steps + +toc::[] + +After {product-title} is installed, core platform monitoring components immediately begin collecting metrics, which you can query and view. +The default in-cluster monitoring stack includes the core platform Prometheus instance that collects metrics from your cluster and the core Alertmanager instance that routes alerts, among other components. +Depending on who will use the monitoring stack and for what purposes, as a cluster administrator, you can further configure these monitoring components to suit the needs of different users in various scenarios. + +[id="configuring-core-platform-monitoring-postinstallation-steps_{context}"] +== Configuring core platform monitoring: Postinstallation steps + +After {product-title} is installed, cluster administrators typically configure core platform monitoring to suit their needs. +These activities include setting up storage and configuring options for Prometheus, Alertmanager, and other monitoring components. 
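+
+Most of these settings live in the `cluster-monitoring-config` config map in the `openshift-monitoring` namespace. The following minimal sketch shows where per-component options are placed; the retention value and the infrastructure node selector are illustrative assumptions:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cluster-monitoring-config
+  namespace: openshift-monitoring
+data:
+  config.yaml: |
+    prometheusK8s: <1>
+      retention: 15d
+    alertmanagerMain: <2>
+      nodeSelector:
+        node-role.kubernetes.io/infra: ""
+----
+<1> Options for the core platform Prometheus instance, here setting an assumed retention period.
+<2> Options for the core platform Alertmanager instance, here assigning its pods to infrastructure nodes.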
+ +[NOTE] +==== +By default, in a newly installed {product-title} system, users can query and view collected metrics. +You need only configure an alert receiver if you want users to receive alert notifications. +Any other configuration options listed here are optional. +==== + +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.adoc#creating-cluster-monitoring-configmap_preparing-to-configure-the-monitoring-stack[Create the `cluster-monitoring-config` `ConfigMap` object] if it does not exist. +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.adoc#configuring-alert-notifications_configuring-alerts-and-notifications[Configure notifications for default platform alerts] so that Alertmanager can send alerts to an external notification system such as email, Slack, or PagerDuty. +* For shorter term data retention, xref:../../../observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.adoc#configuring-persistent-storage_storing-and-recording-data[configure persistent storage] for Prometheus and Alertmanager to store metrics and alert data. +Specify the metrics data retention parameters for Prometheus and Thanos Ruler. ++ +[IMPORTANT] +==== +* In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability. + +* By default, in a newly installed {product-title} system, the monitoring `ClusterOperator` resource reports a `PrometheusDataPersistenceNotConfigured` status message to remind you that storage is not configured. +==== ++ +* For longer term data retention, xref:../../../observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.adoc#configuring-remote-write-storage_configuring-metrics[configure the remote write feature] to enable Prometheus to send ingested metrics to remote systems for storage. ++ +[IMPORTANT] +==== +Be sure to xref:../../../observability/monitoring/configuring-core-platform-monitoring/configuring-metrics.adoc#creating-cluster-id-labels-for-metrics_configuring-metrics[add cluster ID labels to metrics] for use with your remote write storage configuration. +==== ++ +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/preparing-to-configure-the-monitoring-stack.adoc#granting-users-permissions-for-core-platform-monitoring_preparing-to-configure-the-monitoring-stack[Grant monitoring cluster roles] to any non-administrator users that need to access certain monitoring features. +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.adoc#assigning-tolerations-to-monitoring-components_configuring-performance-and-scalability[Assign tolerations] to monitoring stack components so that administrators can move them to tainted nodes. +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.adoc#setting-the-body-size-limit-for-metrics-scraping_configuring-performance-and-scalability[Set the body size limit] for metrics collection to help avoid situations in which Prometheus consumes excessive amounts of memory when scraped targets return a response that contains a large amount of data. 
+* xref:../../../observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.adoc#managing-alerting-rules-for-core-platform-monitoring_managing-alerts-as-an-administrator[Modify or create alerting rules] for your cluster. +These rules specify the conditions that trigger alerts, such as high CPU or memory usage, network latency, and so forth. +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.adoc#managing-cpu-and-memory-resources-for-monitoring-components_configuring-performance-and-scalability[Specify resource limits and requests for monitoring components] to ensure that the containers that run monitoring components have enough CPU and memory resources. + +With the monitoring stack configured to suit your needs, Prometheus collects metrics from the specified services and stores these metrics according to your settings. +You can go to the *Observe* pages in the {product-title} web console to view and query collected metrics, manage alerts, identify performance bottlenecks, and scale resources as needed: + +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#reviewing-monitoring-dashboards-admin_accessing-metrics-as-an-administrator[View dashboards] to visualize collected metrics, troubleshoot alerts, and monitor other information about your cluster. +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#querying-metrics-for-all-projects-with-mon-dashboard_accessing-metrics-as-an-administrator[Query collected metrics] by creating PromQL queries or using predefined queries. + + diff --git a/observability/monitoring/getting-started/developer-and-non-administrator-steps.adoc b/observability/monitoring/getting-started/developer-and-non-administrator-steps.adoc new file mode 100644 index 000000000000..43aa98574405 --- /dev/null +++ b/observability/monitoring/getting-started/developer-and-non-administrator-steps.adoc @@ -0,0 +1,16 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="developer-and-non-administrator-steps"] += Developer and non-administrator steps +:context: developer-and-non-administrator-steps + +toc::[] + +After monitoring for user-defined projects is enabled and configured, developers and other non-administrator users can then perform the following activities to set up and use monitoring for their own projects: + +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.adoc#setting-up-metrics-collection-for-user-defined-projects_configuring-metrics-uwm[Deploy and monitor services]. +* xref:../../../observability/monitoring/managing-alerts/managing-alerts-as-a-developer.adoc#managing-alerting-rules-for-user-defined-projects-uwm_managing-alerts-as-a-developer[Create and manage alerting rules]. +* xref:../../../observability/monitoring/managing-alerts/managing-alerts-as-a-developer.adoc#managing-alerts-as-a-developer[Receive and manage alerts] for your projects. +* If granted the `alert-routing-edit` cluster role, xref:../../../observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.adoc#configuring-alert-routing-for-user-defined-projects_configuring-alerts-and-notifications-uwm[configure alert routing]. 
+* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc#reviewing-monitoring-dashboards-developer_accessing-metrics-as-a-developer[View dashboards] by using the {product-title} web console. +* xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc#querying-metrics-for-user-defined-projects-with-mon-dashboard_accessing-metrics-as-a-developer[Query the collected metrics] by creating PromQL queries or using predefined queries. diff --git a/observability/monitoring/getting-started/images b/observability/monitoring/getting-started/images new file mode 120000 index 000000000000..847b03ed0541 --- /dev/null +++ b/observability/monitoring/getting-started/images @@ -0,0 +1 @@ +../../images/ \ No newline at end of file diff --git a/observability/monitoring/getting-started/maintenance-and-support-for-monitoring.adoc b/observability/monitoring/getting-started/maintenance-and-support-for-monitoring.adoc new file mode 100644 index 000000000000..157e969aa013 --- /dev/null +++ b/observability/monitoring/getting-started/maintenance-and-support-for-monitoring.adoc @@ -0,0 +1,28 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="maintenance-and-support-for-monitoring"] += Maintenance and support for monitoring +:context: maintenance-and-support-for-monitoring + +toc::[] + +Not all configuration options for the monitoring stack are exposed. The only supported way of configuring {product-title} monitoring is by configuring the {cmo-first} using the options described in the xref:../../../observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc#cluster-monitoring-operator-configuration-reference[Config map reference for the {cmo-full}]. *Do not use other configurations, as they are unsupported.* + +Configuration paradigms might change across Prometheus releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in the xref:../../../observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc#cluster-monitoring-operator-configuration-reference[Config map reference for the {cmo-full}], your changes will disappear because the {cmo-short} automatically reconciles any differences and resets any unsupported changes back to the originally defined state by default and by design. + +ifdef::openshift-dedicated,openshift-rosa[] +[IMPORTANT] +==== +Installing another Prometheus instance is not supported by the Red Hat Site Reliability Engineers (SRE). 
+==== +endif::openshift-dedicated,openshift-rosa[] + +include::modules/monitoring-support-considerations.adoc[leveloffset=+1] +ifndef::openshift-dedicated,openshift-rosa[] +include::modules/monitoring-support-policy-for-monitoring-operators.adoc[leveloffset=+1] +endif::openshift-dedicated,openshift-rosa[] + +include::modules/monitoring-support-version-matrix-for-monitoring-components.adoc[leveloffset=+1] + + + diff --git a/observability/monitoring/getting-started/modules b/observability/monitoring/getting-started/modules new file mode 120000 index 000000000000..36719b9de743 --- /dev/null +++ b/observability/monitoring/getting-started/modules @@ -0,0 +1 @@ +../../modules/ \ No newline at end of file diff --git a/observability/monitoring/getting-started/snippets b/observability/monitoring/getting-started/snippets new file mode 120000 index 000000000000..5a3f5add140e --- /dev/null +++ b/observability/monitoring/getting-started/snippets @@ -0,0 +1 @@ +../../snippets/ \ No newline at end of file diff --git a/observability/monitoring/getting-started/user-workload-monitoring-first-steps.adoc b/observability/monitoring/getting-started/user-workload-monitoring-first-steps.adoc new file mode 100644 index 000000000000..7b193b5e75e8 --- /dev/null +++ b/observability/monitoring/getting-started/user-workload-monitoring-first-steps.adoc @@ -0,0 +1,20 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="user-workload-monitoring-first-steps"] += User workload monitoring first steps +:context: user-workload-monitoring-first-steps + +toc::[] + +As a cluster administrator, you can optionally enable monitoring for user-defined projects in addition to core platform monitoring. +Non-administrator users such as developers can then monitor their own projects outside of core platform monitoring. + +Cluster administrators typically complete the following activities to configure user-defined projects so that users can view collected metrics, query these metrics, and receive alerts for their own projects: + +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enable user workload monitoring]. +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#granting-users-permission-to-monitor-user-defined-projects_preparing-to-configure-the-monitoring-stack-uwm[Grant non-administrator users permissions to monitor user-defined projects] by assigning the `monitoring-rules-view`, `monitoring-rules-edit`, or `monitoring-edit` cluster roles. +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#granting-users-permission-to-configure-alert-routing-for-user-defined-projects_preparing-to-configure-the-monitoring-stack-uwm[Assign the `user-workload-monitoring-config-edit` role] to grant non-administrator users permission to configure user-defined projects. +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-alert-routing-for-user-defined-projects_preparing-to-configure-the-monitoring-stack-uwm[Enable alert routing for user-defined projects] so that developers and other users can configure custom alerts and alert routing for their projects. 
+* If needed, configure alert routing for user-defined projects to xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-a-separate-alertmanager-instance-for-user-defined-alert-routing_preparing-to-configure-the-monitoring-stack-uwm[use an optional Alertmanager instance dedicated for use only by user-defined projects]. +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.adoc#configuring-alert-notifications_configuring-alerts-and-notifications-uwm[Configure notifications for user-defined alerts]. +* If you use the platform Alertmanager instance for user-defined alert routing, xref:../../../observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.adoc#configuring-different-alert-receivers-for-default-platform-alerts-and-user-defined-alerts_configuring-alerts-and-notifications-uwm[configure different alert receivers] for default platform alerts and user-defined alerts. diff --git a/observability/monitoring/managing-alerts.adoc b/observability/monitoring/managing-alerts.adoc index ceb8bdcbbaa1..77e2f4f6d03f 100644 --- a/observability/monitoring/managing-alerts.adoc +++ b/observability/monitoring/managing-alerts.adoc @@ -18,13 +18,15 @@ The alerts, silences, and alerting rules that are available in the Alerting UI r ==== // Accessing the Alerting UI in the Administrator and Developer perspectives -include::modules/monitoring-accessing-the-alerting-ui.adoc[leveloffset=+1] +include::modules/monitoring-accessing-the-alerting-ui.adoc[leveloffset=1,tags=**;ADM;!DEV] +include::modules/monitoring-accessing-the-alerting-ui.adoc[leveloffset=1,tags=**;DEV;!ADM] // Searching and filtering alerts, silences, and alerting rules include::modules/monitoring-searching-alerts-silences-and-alerting-rules.adoc[leveloffset=+1] // Getting information about alerts, silences and alerting rules -include::modules/monitoring-getting-information-about-alerts-silences-and-alerting-rules.adoc[leveloffset=+1] +include::modules/monitoring-getting-information-about-alerts-silences-and-alerting-rules.adoc[leveloffset=1,tags=**;ADM;!DEV] +include::modules/monitoring-getting-information-about-alerts-silences-and-alerting-rules.adoc[leveloffset=1,tags=**;DEV;!ADM] [role="_additional-resources"] .Additional resources @@ -36,9 +38,15 @@ include::modules/monitoring-managing-silences.adoc[leveloffset=+1] .Additional resources * xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#configuring-persistent-storage_configuring-the-monitoring-stack[Configuring persistent storage] -include::modules/monitoring-silencing-alerts.adoc[leveloffset=+2] -include::modules/monitoring-editing-silences.adoc[leveloffset=+2] -include::modules/monitoring-expiring-silences.adoc[leveloffset=+2] +include::modules/monitoring-silencing-alerts.adoc[leveloffset=+2,tags=**;ADM;!DEV] +include::modules/monitoring-silencing-alerts.adoc[leveloffset=+2,tags=**;DEV;!ADM] + + +include::modules/monitoring-editing-silences.adoc[leveloffset=+2,tags=**;ADM;!DEV] +include::modules/monitoring-editing-silences.adoc[leveloffset=+2,tags=**;DEV;!ADM] + +include::modules/monitoring-expiring-silences.adoc[leveloffset=+2,tags=**;ADM;!DEV] +include::modules/monitoring-expiring-silences.adoc[leveloffset=+2,tags=**;DEV;!ADM] // Managing core platform alerting rules ifndef::openshift-dedicated,openshift-rosa[] @@ -56,7 +64,7 @@ 
include::modules/monitoring-modifying-core-platform-alerting-rules.adoc[leveloff * See the link:https://prometheus.io/docs/practices/alerting/[Prometheus alerting documentation] for further guidelines on optimizing alerts. endif::openshift-dedicated,openshift-rosa[] -// Creating alerting rules for user-defined projects +// Creating alerting rules for user workload monitoring include::modules/monitoring-about-creating-alerting-rules-for-user-defined-projects.adoc[leveloffset=+1] include::modules/monitoring-optimizing-alerting-for-user-defined-projects.adoc[leveloffset=+2] include::modules/monitoring-creating-alerting-rules-for-user-defined-projects.adoc[leveloffset=+2] @@ -67,7 +75,7 @@ include::modules/monitoring-creating-cross-project-alerting-rules-for-user-defin * link:https://prometheus.io/docs/practices/alerting/[Prometheus alerting documentation] * xref:../../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview] -// Managing alerting rules for user-defined projects +// Managing alerting rules for user workload monitoring include::modules/monitoring-managing-alerting-rules-for-user-defined-projects.adoc[leveloffset=+1] include::modules/monitoring-accessing-alerting-rules-for-your-project.adoc[leveloffset=+2] include::modules/monitoring-listing-alerting-rules-for-all-projects-in-a-single-view.adoc[leveloffset=+2] @@ -83,12 +91,12 @@ include::modules/monitoring-disabling-cross-project-alerting-rules-for-user-defi include::modules/monitoring-sending-notifications-to-external-systems.adoc[leveloffset=+1] // Configuring alert receivers ifndef::openshift-dedicated,openshift-rosa[] -include::modules/monitoring-configuring-alert-receivers.adoc[leveloffset=+2] +include::modules/monitoring-configuring-alert-routing-console.adoc[leveloffset=+2] endif::openshift-dedicated,openshift-rosa[] // Configuring different alert receivers for default platform alerts and user-defined alerts include::modules/monitoring-configuring-different-alert-receivers-for-default-platform-alerts-and-user-defined-alerts.adoc[leveloffset=+2] // Creating alert routing for user-defined projects -include::modules/monitoring-creating-alert-routing-for-user-defined-projects.adoc[leveloffset=+2] +include::modules/monitoring-configuring-alert-routing-for-user-defined-projects.adoc[leveloffset=+2] [id="configuring-alertmanager-to-send-notifications"] == Configuring Alertmanager to send notifications @@ -106,14 +114,14 @@ All features of a supported version of upstream Alertmanager are also supported // Configuring notifications for default platform alerts ifndef::openshift-dedicated,openshift-rosa[] -include::modules/monitoring-configuring-notifications-for-default-platform-alerts.adoc[leveloffset=+2] +include::modules/monitoring-configuring-alert-routing-default-platform-alerts.adoc[leveloffset=+2] endif::openshift-dedicated,openshift-rosa[] // Configuring notifications for user-defined alerts -include::modules/monitoring-configuring-notifications-for-user-defined-alerts.adoc[leveloffset=+2] +include::modules/monitoring-configuring-alert-routing-user-defined-alerts-secret.adoc[leveloffset=+2] [role="_additional-resources"] -[id="additional-resources_configuring-alertmanager-to-send-notifications"] +[id="additional-resources_{context}"] == Additional resources * link:https://www.pagerduty.com/[PagerDuty official site] diff --git a/observability/monitoring/managing-alerts/_attributes b/observability/monitoring/managing-alerts/_attributes new file mode 120000 index 000000000000..20cc1dcb77bf --- 
/dev/null +++ b/observability/monitoring/managing-alerts/_attributes @@ -0,0 +1 @@ +../../_attributes/ \ No newline at end of file diff --git a/observability/monitoring/managing-alerts/images b/observability/monitoring/managing-alerts/images new file mode 120000 index 000000000000..847b03ed0541 --- /dev/null +++ b/observability/monitoring/managing-alerts/images @@ -0,0 +1 @@ +../../images/ \ No newline at end of file diff --git a/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.adoc b/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.adoc new file mode 100644 index 000000000000..0d72847a1524 --- /dev/null +++ b/observability/monitoring/managing-alerts/managing-alerts-as-a-developer.adoc @@ -0,0 +1,79 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="managing-alerts-as-a-developer"] += Managing alerts as a Developer +:context: managing-alerts-as-a-developer + +toc::[] + +In {product-title}, the Alerting UI enables you to manage alerts, silences, and alerting rules. + +[NOTE] +==== +The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. +==== + +// Accessing the Alerting UI from the Developer perspective +include::modules/monitoring-accessing-the-alerting-ui.adoc[leveloffset=1,tags=**;DEV;!ADM] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#searching-alerts-silences-and-alerting-rules_key-concepts[Searching and filtering alerts, silences, and alerting rules] + +// Getting information about alerts, silences and alerting rules from the Developer perspective +include::modules/monitoring-getting-information-about-alerts-silences-and-alerting-rules.adoc[leveloffset=1,tags=**;DEV;!ADM] + +[role="_additional-resources"] +.Additional resources +* link:https://github.com/openshift/runbooks/tree/master/alerts/cluster-monitoring-operator[{cmo-full} runbooks] ({cmo-full} GitHub repository) + +[id="managing-silences_{context}"] +== Managing silences + +You can create a silence for an alert in the {product-title} web console in the *Developer* perspective. +After you create silences, you can view, edit, and expire them. You also do not receive notifications about a silenced alert when the alert fires. + +[NOTE] +==== +When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time. +==== + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#managing-silences_key-concepts[Managing silences] +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.adoc#configuring-persistent-storage_storing-and-recording-data[Configuring persistent storage] + +include::modules/monitoring-silencing-alerts.adoc[leveloffset=+2,tags=**;DEV;!ADM] +include::modules/monitoring-editing-silences.adoc[leveloffset=+2,tags=**;DEV;!ADM] +include::modules/monitoring-expiring-silences.adoc[leveloffset=+2,tags=**;DEV;!ADM] + + +[id="managing-alerting-rules-for-user-defined-projects-uwm_{context}"] +== Managing alerting rules for user-defined projects + +In {product-title}, you can create, view, edit, and remove alerting rules for user-defined projects. 
Those alerting rules will trigger alerts based on the values of the chosen metrics. + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#about-creating-alerting-rules-for-user-defined-projects_key-concepts[Creating alerting rules for user-defined projects] +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#managing-alerting-rules-for-user-defined-projects_key-concepts[Managing alerting rules for user-defined projects] +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#optimizing-alerting-for-user-defined-projects_key-concepts[Optimizing alerting for user-defined projects] + +include::modules/monitoring-creating-alerting-rules-for-user-defined-projects.adoc[leveloffset=+2] +include::modules/monitoring-creating-cross-project-alerting-rules-for-user-defined-projects.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources +* xref:../../../observability/monitoring/about-ocp-monitoring/monitoring-stack-architecture.adoc#monitoring-stack-architecture[Monitoring stack architecture] +* link:https://prometheus.io/docs/practices/alerting/[Alerting] (Prometheus documentation) + +include::modules/monitoring-accessing-alerting-rules-for-your-project.adoc[leveloffset=+2] +include::modules/monitoring-removing-alerting-rules-for-user-defined-projects.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* link:https://prometheus.io/docs/alerting/alertmanager/[Alertmanager] (Prometheus documentation) diff --git a/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.adoc b/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.adoc new file mode 100644 index 000000000000..9dab1f77124c --- /dev/null +++ b/observability/monitoring/managing-alerts/managing-alerts-as-an-administrator.adoc @@ -0,0 +1,113 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/common-attributes.adoc[] +[id="managing-alerts-as-an-administrator"] += Managing alerts as an Administrator +:context: managing-alerts-as-an-administrator + +toc::[] + +In {product-title}, the Alerting UI enables you to manage alerts, silences, and alerting rules. + +[NOTE] +==== +The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. For example, if you are logged in as a user with the `cluster-admin` role, you can access all alerts, silences, and alerting rules. 
+==== + +// Accessing the Alerting UI from the Administrator perspective +include::modules/monitoring-accessing-the-alerting-ui.adoc[leveloffset=1,tags=**;ADM;!DEV] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#searching-alerts-silences-and-alerting-rules_key-concepts[Searching and filtering alerts, silences, and alerting rules] + +// Getting information about alerts, silences and alerting rules from the Administrator perspective +include::modules/monitoring-getting-information-about-alerts-silences-and-alerting-rules.adoc[leveloffset=1,tags=**;ADM;!DEV] + +[role="_additional-resources"] +.Additional resources +* link:https://github.com/openshift/runbooks/tree/master/alerts/cluster-monitoring-operator[{cmo-full} runbooks] ({cmo-full} GitHub repository) + +[id="managing-silences_{context}"] +== Managing silences + +You can create a silence for an alert in the {product-title} web console in the *Administrator* perspective. +After you create silences, you can view, edit, and expire them. You also do not receive notifications about a silenced alert when the alert fires. + +[NOTE] +==== +When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time. +==== + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#managing-silences_key-concepts[Managing silences] +* xref:../../../observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.adoc#configuring-persistent-storage_storing-and-recording-data[Configuring persistent storage] + +include::modules/monitoring-silencing-alerts.adoc[leveloffset=+2,tags=**;ADM;!DEV] +include::modules/monitoring-editing-silences.adoc[leveloffset=+2,tags=**;ADM;!DEV] +include::modules/monitoring-expiring-silences.adoc[leveloffset=+2,tags=**;ADM;!DEV] + + +[id="managing-alerting-rules-for-core-platform-monitoring_{context}"] +== Managing alerting rules for core platform monitoring + +The {product-title} monitoring includes a large set of default alerting rules for platform metrics. +As a cluster administrator, you can customize this set of rules in two ways: + +* Modify the settings for existing platform alerting rules by adjusting thresholds or by adding and modifying labels. +For example, you can change the `severity` label for an alert from `warning` to `critical` to help you route and triage issues flagged by an alert. + +* Define and add new custom alerting rules by constructing a query expression based on core platform metrics in the `openshift-monitoring` project. 
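+
+For example, the following `AlertingRule` custom resource is a minimal sketch of a new custom platform alerting rule. The rule name, expression threshold, and duration are illustrative assumptions; the modules that follow document the full procedure:
+
+[source,yaml]
+----
+apiVersion: monitoring.openshift.io/v1
+kind: AlertingRule
+metadata:
+  name: example-platform-alerts
+  namespace: openshift-monitoring <1>
+spec:
+  groups:
+  - name: example-node-rules
+    rules:
+    - alert: NodeAvailableMemoryLow <2>
+      expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes < 0.1 <3>
+      for: 30m
+      labels:
+        severity: warning
+----
+<1> `AlertingRule` resources are honored only in the `openshift-monitoring` namespace.
+<2> An assumed alert name for this sketch.
+<3> Fires when less than 10% of a node's memory has been available for 30 minutes.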
+ +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#managing-core-platform-alerting-rules_key-concepts[Managing alerting rules for core platform monitoring] +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#tips-for-optimizing-alerting-rules-for-core-platform-monitoring_key-concepts[Tips for optimizing alerting rules for core platform monitoring] + +include::modules/monitoring-creating-new-alerting-rules.adoc[leveloffset=+2] +include::modules/monitoring-modifying-core-platform-alerting-rules.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/monitoring-stack-architecture.adoc#monitoring-stack-architecture[Monitoring stack architecture] +* link:https://prometheus.io/docs/alerting/alertmanager/[Alertmanager] (Prometheus documentation) +* link:https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config[relabel_config configuration] (Prometheus documentation) +* link:https://prometheus.io/docs/practices/alerting/[Alerting] (Prometheus documentation) + +[id="managing-alerting-rules-for-user-defined-projects_{context}"] +== Managing alerting rules for user-defined projects + +In {product-title}, you can create, view, edit, and remove alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics. + +[role="_additional-resources"] +.Additional resources +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#about-creating-alerting-rules-for-user-defined-projects_key-concepts[Creating alerting rules for user-defined projects] +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#managing-alerting-rules-for-user-defined-projects_key-concepts[Managing alerting rules for user-defined projects] +* xref:../../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#optimizing-alerting-for-user-defined-projects_key-concepts[Optimizing alerting for user-defined projects] + +include::modules/monitoring-creating-alerting-rules-for-user-defined-projects.adoc[leveloffset=+2] +include::modules/monitoring-creating-cross-project-alerting-rules-for-user-defined-projects.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../observability/monitoring/about-ocp-monitoring/monitoring-stack-architecture.adoc#monitoring-stack-architecture[Monitoring stack architecture] +* link:https://prometheus.io/docs/practices/alerting/[Alerting] (Prometheus documentation) + +include::modules/monitoring-listing-alerting-rules-for-all-projects-in-a-single-view.adoc[leveloffset=+2] +include::modules/monitoring-removing-alerting-rules-for-user-defined-projects.adoc[leveloffset=+2] +include::modules/monitoring-disabling-cross-project-alerting-rules-for-user-defined-projects.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* link:https://prometheus.io/docs/alerting/alertmanager/[Alertmanager] (Prometheus documentation) + + + + + diff --git a/observability/monitoring/managing-alerts/modules b/observability/monitoring/managing-alerts/modules new file mode 120000 index 000000000000..36719b9de743 --- /dev/null +++ b/observability/monitoring/managing-alerts/modules @@ -0,0 +1 @@ +../../modules/ \ No newline at end of file diff --git a/observability/monitoring/managing-alerts/snippets 
b/observability/monitoring/managing-alerts/snippets new file mode 120000 index 000000000000..5a3f5add140e --- /dev/null +++ b/observability/monitoring/managing-alerts/snippets @@ -0,0 +1 @@ +../../snippets/ \ No newline at end of file diff --git a/observability/monitoring/managing-metrics.adoc b/observability/monitoring/managing-metrics.adoc index d258deedc0da..13b32d9cdc30 100644 --- a/observability/monitoring/managing-metrics.adoc +++ b/observability/monitoring/managing-metrics.adoc @@ -40,26 +40,23 @@ ifndef::openshift-dedicated,openshift-rosa[] include::modules/monitoring-viewing-a-list-of-available-metrics.adoc[leveloffset=+1] endif::openshift-dedicated,openshift-rosa[] -// Querying metrics -include::modules/monitoring-about-querying-metrics.adoc[leveloffset=+1] - // include::modules/monitoring-contents-of-the-metrics-ui.adoc[leveloffset=+2] -// Querying metrics for all projects as an administrator -include::modules/monitoring-querying-metrics-for-all-projects-as-an-administrator.adoc[leveloffset=+2] +// Querying metrics for all projects with the {product-title} web console [adm] +include::modules/monitoring-querying-metrics-for-all-projects-with-mon-dashboard.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* For more information about creating PromQL queries, see the link:https://prometheus.io/docs/prometheus/latest/querying/basics/[Prometheus query documentation]. +* link:https://prometheus.io/docs/prometheus/latest/querying/basics/[Prometheus query documentation] -// Querying metrics for user-defined projects as a developer -include::modules/monitoring-querying-metrics-for-user-defined-projects-as-a-developer.adoc[leveloffset=+2] +// Querying metrics for user-defined projects with the {product-title} web console [dev] +include::modules/monitoring-querying-metrics-for-user-defined-projects-with-mon-dashboard.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* For more information about creating PromQL queries, see the link:https://prometheus.io/docs/prometheus/latest/querying/basics/[Prometheus query documentation]. +* link:https://prometheus.io/docs/prometheus/latest/querying/basics/[Prometheus query documentation] // Getting detailed information about metrics targets include::modules/monitoring-getting-detailed-information-about-a-target.adoc[leveloffset=+1] diff --git a/observability/monitoring/monitoring-overview.adoc b/observability/monitoring/monitoring-overview.adoc index c386abed77be..f8d632d752c4 100644 --- a/observability/monitoring/monitoring-overview.adoc +++ b/observability/monitoring/monitoring-overview.adoc @@ -25,6 +25,14 @@ endif::openshift-dedicated,openshift-rosa[] ifdef::openshift-dedicated,openshift-rosa[] In {product-title}, you can monitor your own projects in isolation from Red Hat Site Reliability Engineering (SRE) platform metrics. You can monitor your own projects without the need for an additional monitoring solution. + +The {product-title} +endif::openshift-dedicated,openshift-rosa[] +ifdef::openshift-rosa[] +(ROSA) +endif::openshift-rosa[] +ifdef::openshift-dedicated,openshift-rosa[] +monitoring stack is based on the link:https://prometheus.io/[Prometheus] open source project and its wider ecosystem. 
endif::openshift-dedicated,openshift-rosa[] // Understanding the monitoring stack @@ -39,7 +47,7 @@ include::modules/monitoring-default-monitoring-targets.adoc[leveloffset=+2] include::modules/monitoring-components-for-monitoring-user-defined-projects.adoc[leveloffset=+2] include::modules/monitoring-targets-for-user-defined-projects.adoc[leveloffset=+2] -include::modules/monitoring-understanding-monitoring-stack-in-ha-clusters.adoc[leveloffset=+2] +include::modules/monitoring-monitoring-stack-in-ha-clusters.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources * xref:../../operators/operator_sdk/osdk-ha-sno.adoc#osdk-ha-sno[High-availability or single-node cluster detection and support] @@ -50,7 +58,7 @@ include::modules/monitoring-common-terms.adoc[leveloffset=+1] ifndef::openshift-dedicated,openshift-rosa[] [role="_additional-resources"] -[id="additional-resources_monitoring-overview"] +[id="additional-resources_{context}"] == Additional resources * xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] diff --git a/observability/monitoring/reviewing-monitoring-dashboards.adoc b/observability/monitoring/reviewing-monitoring-dashboards.adoc index 646873aa02ba..6a1e1ef0e690 100644 --- a/observability/monitoring/reviewing-monitoring-dashboards.adoc +++ b/observability/monitoring/reviewing-monitoring-dashboards.adoc @@ -6,30 +6,10 @@ include::_attributes/common-attributes.adoc[] toc::[] -ifndef::openshift-dedicated,openshift-rosa[] -{product-title} {product-version} provides a comprehensive set of monitoring dashboards that help you understand the state of cluster components and user-defined workloads. -endif::openshift-dedicated,openshift-rosa[] -ifdef::openshift-dedicated,openshift-rosa[] -{product-title} provides monitoring dashboards that help you understand the state of user-defined projects. -endif::openshift-dedicated,openshift-rosa[] - -Use the *Administrator* perspective to access dashboards for the core {product-title} components, including the following items: +{product-title} provides a set of monitoring dashboards that help you understand the state of cluster components and user-defined workloads. -* API performance -* etcd -* Kubernetes compute resources -* Kubernetes network resources -* Prometheus -* USE method dashboards relating to cluster and node performance -* Node performance metrics - -.Example dashboard in the Administrator perspective -image::monitoring-dashboard-administrator.png[] - -In the *Developer* perspective, you can access only the Kubernetes compute resources dashboards: - -.Example dashboard in the Developer perspective -image::observe-dashboard-developer.png[] +// About monitoring dashboards +include::modules/monitoring-about-monitoring-dashboards.adoc[leveloffset=+1] // Reviewing monitoring dashboards as a cluster administrator include::modules/monitoring-reviewing-monitoring-dashboards-admin.adoc[leveloffset=+1] @@ -40,8 +20,7 @@ include::modules/monitoring-reviewing-monitoring-dashboards-developer.adoc[level ifndef::openshift-dedicated,openshift-rosa[] // This additional resource might be valid for ROSA/OSD when the Building applications content is ported. 
[role="_additional-resources"] -[id="additional-resources-reviewing-monitoring-dashboards"] -.Additional resources - +[id="additional-resources_{context}"] +== Additional resources * xref:../../applications/odc-monitoring-project-and-application-metrics-using-developer-perspective.adoc#monitoring-project-and-application-metrics-using-developer-perspective[Monitoring project and application metrics using the Developer perspective] endif::openshift-dedicated,openshift-rosa[] diff --git a/observability/monitoring/troubleshooting-monitoring-issues.adoc b/observability/monitoring/troubleshooting-monitoring-issues.adoc index 096ec7d33a5e..e42bf072f35a 100644 --- a/observability/monitoring/troubleshooting-monitoring-issues.adoc +++ b/observability/monitoring/troubleshooting-monitoring-issues.adoc @@ -20,9 +20,9 @@ include::modules/monitoring-investigating-why-user-defined-metrics-are-unavailab [role="_additional-resources"] .Additional resources -* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#creating-user-defined-workload-monitoring-configmap_configuring-the-monitoring-stack[Creating a user-defined workload monitoring config map] -* See xref:../../observability/monitoring/managing-metrics.adoc#specifying-how-a-service-is-monitored_managing-metrics[Specifying how a service is monitored] for details on how to create a `ServiceMonitor` or `PodMonitor` resource -* See xref:../../observability/monitoring/managing-metrics.adoc#getting-detailed-information-about-a-target_managing-metrics[Getting detailed information about metrics targets] +* xref:../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] +* xref:../../observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.adoc#specifying-how-a-service-is-monitored_configuring-metrics-uwm[Specifying how a service is monitored] +* xref:../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#getting-detailed-information-about-a-target_accessing-metrics-as-an-administrator[Getting detailed information about a metrics target] endif::openshift-dedicated,openshift-rosa[] // Investigating why user-defined project metrics are unavailable (OSD/ROSA) @@ -35,9 +35,15 @@ include::modules/monitoring-determining-why-prometheus-is-consuming-disk-space.a [role="_additional-resources"] .Additional resources +ifndef::openshift-dedicated,openshift-rosa[] +* xref:../../observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.adoc#accessing-monitoring-apis-by-using-the-cli[Accessing monitoring APIs by using the CLI] +* xref:../../observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.adoc#setting-scrape-and-evaluation-intervals-limits-for-user-defined-projects_configuring-performance-and-scalability-uwm[Setting scrape intervals, evaluation intervals, and enforced limits for user-defined projects] +endif::openshift-dedicated,openshift-rosa[] -* xref:../../observability/monitoring/accessing-third-party-monitoring-apis.adoc#about-accessing-monitoring-web-service-apis_accessing-monitoring-apis-by-using-the-cli[Accessing monitoring APIs by using the CLI] -* 
xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#setting-scrape-and-evaluation-intervals-limits-for-user-defined-projects_configuring-the-monitoring-stack[Setting scrape intervals, evaluation intervals, and scrape limits for user-defined projects] +ifdef::openshift-dedicated,openshift-rosa[] +* xref:../../observability/monitoring/accessing-third-party-monitoring-apis.adoc#about-accessing-monitoring-web-service-apis_accessing-third-party-monitoring-apis[Accessing monitoring APIs by using the CLI] +* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#setting-scrape-and-evaluation-intervals-limits-for-user-defined-projects_configuring-the-monitoring-stack[Setting scrape intervals, evaluation intervals, and enforced limits for user-defined projects] +endif::openshift-dedicated,openshift-rosa[] * xref:../../support/getting-support.adoc#support-submitting-a-case_getting-support[Submitting a support case] // Resolving the KubePersistentVolumeFillingUp alert firing for Prometheus diff --git a/observability/network_observability/metrics-alerts-dashboards.adoc b/observability/network_observability/metrics-alerts-dashboards.adoc index 3c8a13b37418..ca2bec841cbc 100644 --- a/observability/network_observability/metrics-alerts-dashboards.adoc +++ b/observability/network_observability/metrics-alerts-dashboards.adoc @@ -25,5 +25,5 @@ include::modules/network-observability-tcp-flag-syn-flood.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources * xref:../../observability/network_observability/observing-network-traffic.adoc#network-observability-filtering-ebpf-rule_nw-observe-network-traffic[Filtering eBPF flow data using a global rule] -* xref:../../observability/monitoring/managing-alerts.adoc#creating-alerting-rules-for-user-defined-projects_managing-alerts[Creating alerting rules for user-defined projects]. +* xref:../../observability/monitoring/managing-alerts/managing-alerts-as-a-developer.adoc#creating-alerting-rules-for-user-defined-projects_managing-alerts-as-a-developer[Creating alerting rules for user-defined projects]. * xref:../../support/troubleshooting/investigating-monitoring-issues.adoc#determining-why-prometheus-is-consuming-disk-space_investigating-monitoring-issues[Troubleshooting high cardinality metrics- Determining why Prometheus is consuming a lot of disk space] diff --git a/observability/network_observability/network-observability-operator-monitoring.adoc b/observability/network_observability/network-observability-operator-monitoring.adoc index 9c9219106ace..1dafd36fe84c 100644 --- a/observability/network_observability/network-observability-operator-monitoring.adoc +++ b/observability/network_observability/network-observability-operator-monitoring.adoc @@ -17,4 +17,4 @@ include::modules/network-observability-ebpf-agent-alert.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* For more information about creating alerts that you can see on the dashboard, see xref:../../observability/monitoring/managing-alerts.adoc#creating-alerting-rules-for-user-defined-projects_managing-alerts[Creating alerting rules for user-defined projects]. \ No newline at end of file +* For more information about creating alerts that you can see on the dashboard, see xref:../../observability/monitoring/managing-alerts/managing-alerts-as-a-developer.adoc#creating-alerting-rules-for-user-defined-projects_managing-alerts-as-a-developer[Creating alerting rules for user-defined projects]. 
\ No newline at end of file diff --git a/observability/otel/otel-configuring-metrics-for-monitoring-stack.adoc b/observability/otel/otel-configuring-metrics-for-monitoring-stack.adoc index 7c36bb042a5f..a92af420e5b0 100644 --- a/observability/otel/otel-configuring-metrics-for-monitoring-stack.adoc +++ b/observability/otel/otel-configuring-metrics-for-monitoring-stack.adoc @@ -18,14 +18,4 @@ include::modules/otel-config-receive-metrics-monitoring-stack.adoc[leveloffset=+ [id="additional-resources_otel-configuring-metrics-for-monitoring-stack"] == Additional resources -// * xref:../monitoring/accessing-third-party-monitoring-apis.adoc#monitoring-querying-metrics-by-using-the-federation-endpoint-for-prometheus[Querying metrics by using the federation endpoint for Prometheus] - -//* xref:../monitoring/accessing-third-party-monitoring-apis.adoc#monitoring-querying-metrics-by-using-the-federation-endpoint-for-prometheus_accessing-third-party-monitoring-apis[Querying metrics by using the federation endpoint for Prometheus] - -//* xref:../monitoring/accessing-third-party-monitoring-apis.adoc#monitoring-querying-metrics-by-using-the-federation-endpoint-for-prometheus[Querying metrics by using the federation endpoint for Prometheus] - -//* xref:../monitoring/accessing-third-party-monitoring-apis.adoc#monitoring-querying-metrics-by-using-the-federation-endpoint-for-prometheus_accessing-monitoring-apis-by-using-the-cli[Querying metrics by using the federation endpoint for Prometheus] - -//* xref:../monitoring/accessing-third-party-monitoring-apis.adoc#accessing-third-party-monitoring-apis_monitoring-querying-metrics-by-using-the-federation-endpoint-for-prometheus[Querying metrics by using the federation endpoint for Prometheus] - -* xref:../monitoring/accessing-third-party-monitoring-apis.adoc#monitoring-querying-metrics-by-using-the-federation-endpoint-for-prometheus_accessing-monitoring-apis-by-using-the-cli[Querying metrics by using the federation endpoint for Prometheus] +* xref:../../observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.adoc#monitoring-querying-metrics-by-using-the-federation-endpoint-for-prometheus_accessing-monitoring-apis-by-using-the-cli[Querying metrics by using the federation endpoint for Prometheus] diff --git a/observability/otel/otel-configuring-otelcol-metrics.adoc b/observability/otel/otel-configuring-otelcol-metrics.adoc index c0fe04a53016..3e82820b0773 100644 --- a/observability/otel/otel-configuring-otelcol-metrics.adoc +++ b/observability/otel/otel-configuring-otelcol-metrics.adoc @@ -50,4 +50,4 @@ You can use the *Administrator* view of the web console to verify successful con . Check that the *ServiceMonitors* or *PodMonitors* in the `opentelemetry-collector-` format have the *Up* status. 
.Additional resources -* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects] \ No newline at end of file +* xref:../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] \ No newline at end of file diff --git a/observability/overview/index.adoc b/observability/overview/index.adoc index 120618a14c74..2dd7d0c145b6 100644 --- a/observability/overview/index.adoc +++ b/observability/overview/index.adoc @@ -36,7 +36,13 @@ Monitor the in-cluster health and performance of your applications running on {p Monitoring stack components are deployed by default in every {product-title} installation and are managed by the {cmo-first}. These components include Prometheus, Alertmanager, Thanos Querier, and others. The {cmo-short} also deploys the Telemeter Client, which sends a subset of data from platform Prometheus instances to Red Hat to facilitate Remote Health Monitoring for clusters. +ifndef::openshift-dedicated,openshift-rosa[] +For more information, see xref:../../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] and xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring]. +endif::openshift-dedicated,openshift-rosa[] + +ifdef::openshift-dedicated,openshift-rosa[] For more information, see xref:../../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview] and xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring]. 
+endif::openshift-dedicated,openshift-rosa[] [id="cluster-logging-index_{context}"] == Logging diff --git a/observability/power_monitoring/visualizing-power-monitoring-metrics.adoc b/observability/power_monitoring/visualizing-power-monitoring-metrics.adoc index d8c1a9d09f21..fa9d9a100205 100644 --- a/observability/power_monitoring/visualizing-power-monitoring-metrics.adoc +++ b/observability/power_monitoring/visualizing-power-monitoring-metrics.adoc @@ -19,4 +19,4 @@ include::modules/power-monitoring-metrics-overview.adoc[leveloffset=+1] [role="_additional-resources"] [id="additional-resources_visualizing-power-monitoring-metrics"] == Additional resources -* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects_enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects] \ No newline at end of file +* xref:../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] \ No newline at end of file diff --git a/post_installation_configuration/cluster-tasks.adoc b/post_installation_configuration/cluster-tasks.adoc index 1f98e29828dc..8703f8bd959c 100644 --- a/post_installation_configuration/cluster-tasks.adoc +++ b/post_installation_configuration/cluster-tasks.adoc @@ -119,7 +119,7 @@ documentation for details on how and when you can create additional resource ins |`alertmanager.monitoring.coreos.com` |`main` |`openshift-monitoring` -|Controls the xref:../observability/monitoring/managing-alerts.adoc#managing-alerts[Alertmanager] deployment parameters. +|Controls the xref:../observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.adoc#configuring-alerts-and-notifications[Alertmanager] deployment parameters. |`ingresscontroller.operator.openshift.io` |`default` diff --git a/post_installation_configuration/configuring-alert-notifications.adoc b/post_installation_configuration/configuring-alert-notifications.adoc index 8fde499c73f0..d253eeaf1aa7 100644 --- a/post_installation_configuration/configuring-alert-notifications.adoc +++ b/post_installation_configuration/configuring-alert-notifications.adoc @@ -14,5 +14,5 @@ include::modules/monitoring-sending-notifications-to-external-systems.adoc[level [role="_additional-resources"] == Additional resources -* xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview] -* xref:../observability/monitoring/managing-alerts.adoc#configuring-alert-receivers_managing-alerts[Configuring alert receivers] +* xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] +* xref:../observability/monitoring/configuring-user-workload-monitoring/configuring-alerts-and-notifications-uwm.adoc#configuring-alert-notifications_configuring-alerts-and-notifications-uwm[Configuring alert notifications] diff --git a/rosa_architecture/index.adoc b/rosa_architecture/index.adoc index 037d14f2bba8..854603e3d3f5 100644 --- a/rosa_architecture/index.adoc +++ b/rosa_architecture/index.adoc @@ -281,9 +281,9 @@ Use the Cluster Version Operator (CVO) to upgrade your {product-title} cluster. 
- **xref:../observability/network_observability/network-observability-overview.adoc#network-observability-overview[Network Observability]**: Observe network traffic for {product-title} clusters by using eBPF technology to create and enrich network flows. You can xref:../observability/network_observability/metrics-alerts-dashboards.adoc#metrics-alerts-dashboards_metrics-alerts-dashboards[view dashboards, customize alerts], and xref:../observability/network_observability/observing-network-traffic.adoc#network-observability-trafficflow_nw-observe-network-traffic[analyze network flow] information for further insight and troubleshooting. -- **xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overview[In-cluster monitoring]**: -Learn to xref:../observability/monitoring/configuring-the-monitoring-stack.adoc#configuring-the-monitoring-stack[configure the monitoring stack]. -After configuring monitoring, use the web console to access xref:../observability/monitoring/reviewing-monitoring-dashboards.adoc#reviewing-monitoring-dashboards[monitoring dashboards]. In addition to infrastructure metrics, you can also scrape and view metrics for your own services. +- **xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[In-cluster monitoring]**: +Learn to xref:../observability/monitoring/getting-started/core-platform-monitoring-first-steps.adoc#core-platform-monitoring-first-steps[configure the monitoring stack]. +After configuring monitoring, use the web console to access xref:../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#reviewing-monitoring-dashboards-admin_accessing-metrics-as-an-administrator[monitoring dashboards]. In addition to infrastructure metrics, you can also scrape and view metrics for your own services. - **xref:../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring_about-remote-health-monitoring[Remote health monitoring]**: {product-title} collects anonymized aggregated information about your cluster. By using Telemetry and the Insights Operator, this data is received by Red Hat and used to improve {product-title}. You can view the xref:../support/remote_health_monitoring/showing-data-collected-by-remote-health-monitoring.adoc#showing-data-collected-by-remote-health-monitoring_showing-data-collected-by-remote-health-monitoring[data collected by remote health monitoring]. 
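Most hunks in this series retarget cross-references of the same shape, so it may help to see the anatomy of one. A minimal sketch follows; the path, anchor, and context here are illustrative placeholders, not a real module in this series:

[source,asciidoc]
----
// xref:<relative_path_to_assembly>.adoc#<module_anchor>_<assembly_context>[<link text>]
xref:../../observability/monitoring/example-assembly.adoc#example-module_example-assembly[Example link text]
----

Because module IDs are declared as `[id="..._{context}"]`, the trailing `_<assembly_context>` part of the anchor changes whenever a module moves into a new assembly, even when the module content itself is unchanged — which is why so many fragments are rewritten alongside the paths above.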
diff --git a/rosa_architecture/learn_more_about_openshift.adoc b/rosa_architecture/learn_more_about_openshift.adoc index 4c1c18f62bbf..c85124de5828 100644 --- a/rosa_architecture/learn_more_about_openshift.adoc +++ b/rosa_architecture/learn_more_about_openshift.adoc @@ -52,7 +52,7 @@ Use the following sections to find content to help you learn about and use {prod | link:https://learn.openshift.com/?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[OpenShift Interactive Learning Portal] | xref:../networking/understanding-networking.adoc#understanding-networking[Networking] -| xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview] +| xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] | link:https://access.redhat.com/support/policy/updates/openshift#ocp4_phases[{product-title} Life Cycle] | @@ -96,7 +96,7 @@ Use the following sections to find content to help you learn about and use {prod | | -| xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring] +| xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[Monitoring] | | diff --git a/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc b/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc index 8108423725e4..370320853849 100644 --- a/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc +++ b/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc @@ -131,7 +131,7 @@ include::modules/telco-core-monitoring.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../../observability/monitoring/monitoring-overview.adoc#about-openshift-monitoring[About {product-version} monitoring] +* xref:../../../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] include::modules/telco-core-scheduling.adoc[leveloffset=+1] diff --git a/security/cert_manager_operator/cert-manager-monitoring.adoc b/security/cert_manager_operator/cert-manager-monitoring.adoc index 753cbb66ba48..f5779982f0ac 100644 --- a/security/cert_manager_operator/cert-manager-monitoring.adoc +++ b/security/cert_manager_operator/cert-manager-monitoring.adoc @@ -14,7 +14,7 @@ include::modules/cert-manager-enable-metrics.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../observability/monitoring/managing-metrics.adoc#setting-up-metrics-collection-for-user-defined-projects_managing-metrics[Setting up metrics collection for user-defined projects] +* xref:../../observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.adoc#setting-up-metrics-collection-for-user-defined-projects_configuring-metrics-uwm[Setting up metrics collection for user-defined projects] // Querying metrics for the {cert-manager-operator} include::modules/cert-manager-query-metrics.adoc[leveloffset=+1] \ No newline at end of file diff --git a/serverless/observability/admin-metrics/serverless-admin-metrics.adoc b/serverless/observability/admin-metrics/serverless-admin-metrics.adoc index d5b8bc8eb40a..ccf48f34a8a7 100644 --- a/serverless/observability/admin-metrics/serverless-admin-metrics.adoc +++ b/serverless/observability/admin-metrics/serverless-admin-metrics.adoc @@ -20,7 +20,7 @@ endif::[] == 
Prerequisites ifdef::openshift-enterprise[] -* See the {product-title} documentation on xref:../../../observability/monitoring/managing-metrics.adoc#managing-metrics[Managing metrics] for information about enabling metrics for your cluster. +* See the {product-title} documentation on xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#accessing-metrics-as-an-administrator[Accessing metrics as an administrator] for information about enabling metrics for your cluster. * You have access to an {product-title} account with cluster administrator access. endif::[] diff --git a/serverless/observability/developer-metrics/serverless-developer-metrics.adoc b/serverless/observability/developer-metrics/serverless-developer-metrics.adoc index 53998b954f38..65f165738744 100644 --- a/serverless/observability/developer-metrics/serverless-developer-metrics.adoc +++ b/serverless/observability/developer-metrics/serverless-developer-metrics.adoc @@ -32,7 +32,7 @@ ifdef::openshift-enterprise[] [id="additional-resources_serverless-service-monitoring"] [role="_additional-resources"] == Additional resources -* xref:../../../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview] -* xref:../../../observability/monitoring/managing-metrics.adoc#specifying-how-a-service-is-monitored[Enabling monitoring for user-defined projects] -* xref:../../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Specifying how a service is monitored] +* xref:../../../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] +* xref:../../../observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.adoc#specifying-how-a-service-is-monitored_configuring-metrics-uwm[Specifying how a service is monitored] endif::[] diff --git a/service_mesh/v2x/ossm-observability.adoc b/service_mesh/v2x/ossm-observability.adoc index e49790ab874c..241502ca7fbf 100644 --- a/service_mesh/v2x/ossm-observability.adoc +++ b/service_mesh/v2x/ossm-observability.adoc @@ -44,7 +44,7 @@ ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] [id="additional-resources_user-workload-monitoring"] == Additional resources -* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc[Enabling monitoring for user-defined projects] +* xref:../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] * xref:../../observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-installing.adoc[Installing the distributed tracing platform (Tempo)] * xref:../../observability/otel/otel-installing.adoc[Installing the Red Hat build of OpenTelemetry] endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] diff --git a/storage/persistent_storage/persistent_storage_local/persistent-storage-local.adoc b/storage/persistent_storage/persistent_storage_local/persistent-storage-local.adoc index 5cc3cbf228ec..3104dd1f09aa 
100644 --- a/storage/persistent_storage/persistent_storage_local/persistent-storage-local.adoc +++ b/storage/persistent_storage/persistent_storage_local/persistent-storage-local.adoc @@ -37,7 +37,7 @@ include::modules/persistent-storage-local-tolerations.adoc[leveloffset=+1] include::modules/persistent-storage-local-metrics.adoc[leveloffset=+1] -For more information about metrics, see xref:../../../observability/monitoring/managing-metrics.adoc#managing-metric[Managing metrics]. +For more information about metrics, see xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#accessing-metrics-as-an-administrator[Accessing metrics as an administrator]. == Deleting the Local Storage Operator resources diff --git a/support/remote_health_monitoring/about-remote-health-monitoring.adoc b/support/remote_health_monitoring/about-remote-health-monitoring.adoc index 02c695ba95a4..291569bbe3cb 100644 --- a/support/remote_health_monitoring/about-remote-health-monitoring.adoc +++ b/support/remote_health_monitoring/about-remote-health-monitoring.adoc @@ -109,13 +109,15 @@ include::modules/understanding-telemetry-and-insights-operator-data-flow.adoc[le ifndef::openshift-rosa-hcp[] [role="_additional-resources"] .Additional resources +ifdef::openshift-rosa,openshift-dedicated[] +* See xref:../../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview] +endif::openshift-rosa,openshift-dedicated[] -* See xref:../../observability/monitoring/monitoring-overview.adoc#monitoring-overview_monitoring-overview[Monitoring overview] for more information about the {product-title} monitoring stack. -endif::openshift-rosa-hcp[] - -ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +ifndef::openshift-rosa,openshift-dedicated[] +* See xref:../../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] for more information about the {product-title} monitoring stack. 
* See xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[Configuring your firewall] for details about configuring a firewall and enabling endpoints for Telemetry and Insights -endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] +endif::openshift-rosa,openshift-dedicated[] +endif::openshift-rosa-hcp[] [id="additional-details-about-how-remote-health-monitoring-data-is-used"] == Additional details about how remote health monitoring data is used diff --git a/support/troubleshooting/investigating-monitoring-issues.adoc b/support/troubleshooting/investigating-monitoring-issues.adoc index c3d2ca739341..a86277744cd6 100644 --- a/support/troubleshooting/investigating-monitoring-issues.adoc +++ b/support/troubleshooting/investigating-monitoring-issues.adoc @@ -22,10 +22,17 @@ include::modules/monitoring-investigating-why-user-defined-metrics-are-unavailab ifndef::openshift-rosa-hcp[] [role="_additional-resources"] .Additional resources - +ifdef::openshift-rosa,openshift-dedicated[] * xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#creating-user-defined-workload-monitoring-configmap_configuring-the-monitoring-stack[Creating a user-defined workload monitoring config map] * See xref:../../observability/monitoring/managing-metrics.adoc#specifying-how-a-service-is-monitored_managing-metrics[Specifying how a service is monitored] for details on how to create a service monitor or pod monitor * See xref:../../observability/monitoring/managing-metrics.adoc#getting-detailed-information-about-a-target_managing-metrics[Getting detailed information about a metrics target] +endif::openshift-rosa,openshift-dedicated[] + +ifndef::openshift-rosa,openshift-dedicated[] +* xref:../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] +* See xref:../../observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.adoc#specifying-how-a-service-is-monitored_configuring-metrics-uwm[Specifying how a service is monitored] for details on how to create a service monitor or pod monitor +* See xref:../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#getting-detailed-information-about-a-target_accessing-metrics-as-an-administrator[Getting detailed information about a metrics target] +endif::openshift-rosa,openshift-dedicated[] endif::openshift-rosa-hcp[] // Determining why Prometheus is consuming a lot of disk space @@ -35,8 +42,12 @@ include::modules/monitoring-determining-why-prometheus-is-consuming-disk-space.a ifndef::openshift-rosa-hcp[] [role="_additional-resources"] .Additional resources - +ifdef::openshift-rosa,openshift-dedicated[] * xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#setting-scrape-and-evaluation-intervals-limits-for-user-defined-projects_configuring-the-monitoring-stack[Setting scrape and evaluation intervals and enforced limits for user-defined projects] +endif::openshift-rosa,openshift-dedicated[] +ifndef::openshift-rosa,openshift-dedicated[] +* xref:../../observability/monitoring/configuring-user-workload-monitoring/configuring-performance-and-scalability-uwm.adoc#setting-scrape-and-evaluation-intervals-limits-for-user-defined-projects_configuring-performance-and-scalability-uwm[Setting scrape intervals, evaluation intervals, and enforced 
limits for user-defined projects] +endif::openshift-rosa,openshift-dedicated[] endif::openshift-rosa-hcp[] // Resolving the KubePersistentVolumeFillingUp alert firing for Prometheus diff --git a/virt/monitoring/virt-exposing-custom-metrics-for-vms.adoc b/virt/monitoring/virt-exposing-custom-metrics-for-vms.adoc index 32fd3aab3aa7..601f2e9c6d45 100644 --- a/virt/monitoring/virt-exposing-custom-metrics-for-vms.adoc +++ b/virt/monitoring/virt-exposing-custom-metrics-for-vms.adoc @@ -22,13 +22,13 @@ include::modules/virt-accessing-node-exporter-outside-cluster.adoc[leveloffset=+ == Additional resources // Hiding in ROSA/OSD as not supported ifndef::openshift-rosa,openshift-dedicated[] -* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#configuring-the-monitoring-stack[Configuring the monitoring stack] +* xref:../../observability/monitoring/getting-started/core-platform-monitoring-first-steps.adoc#core-platform-monitoring-first-steps[Core platform monitoring first steps] -* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects] +* xref:../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[Enabling monitoring for user-defined projects] -* xref:../../observability/monitoring/managing-metrics.adoc#managing-metrics[Managing metrics] +* xref:../../observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc#accessing-metrics-as-a-developer[Accessing metrics as a developer] -* xref:../../observability/monitoring/reviewing-monitoring-dashboards.adoc#reviewing-monitoring-dashboards[Reviewing monitoring dashboards] +* xref:../../observability/monitoring/accessing-metrics/accessing-metrics-as-a-developer.adoc#reviewing-monitoring-dashboards-developer_accessing-metrics-as-a-developer[Reviewing monitoring dashboards as a developer] * xref:../../applications/application-health.adoc#application-health[Monitoring application health by using health checks] endif::openshift-rosa,openshift-dedicated[] diff --git a/virt/monitoring/virt-monitoring-overview.adoc b/virt/monitoring/virt-monitoring-overview.adoc index 73d927904f72..0089e188daa1 100644 --- a/virt/monitoring/virt-monitoring-overview.adoc +++ b/virt/monitoring/virt-monitoring-overview.adoc @@ -33,7 +33,12 @@ xref:../../virt/monitoring/virt-monitoring-vm-health.adoc#virt-monitoring-vm-hea Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs. xref:../../virt/monitoring/virt-runbooks.adoc#virt-runbooks[Runbooks]:: +ifdef::openshift-dedicated,openshift-rosa[] Diagnose and resolve issues that trigger {VirtProductName} xref:../../observability/monitoring/managing-alerts.adoc#managing-alerts[alerts] in the {product-title} web console. +endif::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa[] +Diagnose and resolve issues that trigger {VirtProductName} xref:../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#about-managing-alerts_key-concepts[alerts] in the {product-title} web console. 
+endif::openshift-dedicated,openshift-rosa[] //:FeatureName: The guest agent ping probe //include::snippets/technology-preview.adoc[] diff --git a/virt/monitoring/virt-prometheus-queries.adoc b/virt/monitoring/virt-prometheus-queries.adoc index 0c57a5252708..37f42ed8956e 100644 --- a/virt/monitoring/virt-prometheus-queries.adoc +++ b/virt/monitoring/virt-prometheus-queries.adoc @@ -25,11 +25,9 @@ endif::openshift-rosa,openshift-dedicated[] * For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests. -include::modules/monitoring-about-querying-metrics.adoc[leveloffset=+1] +include::modules/monitoring-querying-metrics-for-all-projects-with-mon-dashboard.adoc[leveloffset=+1] -include::modules/monitoring-querying-metrics-for-all-projects-as-an-administrator.adoc[leveloffset=+2] - -include::modules/monitoring-querying-metrics-for-user-defined-projects-as-a-developer.adoc[leveloffset=+2] +include::modules/monitoring-querying-metrics-for-user-defined-projects-with-mon-dashboard.adoc[leveloffset=+1] include::modules/virt-querying-metrics.adoc[leveloffset=+1] @@ -38,8 +36,12 @@ include::modules/virt-live-migration-metrics.adoc[leveloffset=+2] [id="additional-resources_virt-prometheus-queries"] [role="_additional-resources"] == Additional resources - +ifdef::openshift-dedicated,openshift-rosa[] * xref:../../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview] +endif::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa[] +* xref:../../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] +endif::openshift-dedicated,openshift-rosa[] * link:https://prometheus.io/docs/prometheus/latest/querying/basics/[Querying Prometheus] diff --git a/virt/monitoring/virt-runbooks.adoc b/virt/monitoring/virt-runbooks.adoc index d4bf1bced9cf..b3fe529ed781 100644 --- a/virt/monitoring/virt-runbooks.adoc +++ b/virt/monitoring/virt-runbooks.adoc @@ -7,7 +7,14 @@ include::_attributes/common-attributes.adoc[] toc::[] :!virt-runbooks: -To diagnose and resolve issues that trigger {VirtProductName} xref:../../observability/monitoring/managing-alerts.adoc#managing-alerts[alerts], follow the procedures in the runbooks for the {VirtProductName} Operator. Triggered {VirtProductName} alerts can be viewed in the main *Observe* -> *Alerts* tab in the web console, and also in the *Virtualization* -> *Overview* tab. +To diagnose and resolve issues that trigger {VirtProductName} +ifdef::openshift-dedicated,openshift-rosa[] +xref:../../observability/monitoring/managing-alerts.adoc#managing-alerts[alerts], +endif::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa[] +xref:../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#about-managing-alerts_key-concepts[alerts], +endif::openshift-dedicated,openshift-rosa[] +follow the procedures in the runbooks for the {VirtProductName} Operator. Triggered {VirtProductName} alerts can be viewed in the main *Observe* -> *Alerts* tab in the web console, and also in the *Virtualization* -> *Overview* tab. Runbooks for the {VirtProductName} Operator are maintained in the link:https://github.com/openshift/runbooks/tree/master/alerts/openshift-virtualization-operator[openshift/runbooks] Git repository, and you can view them on GitHub. 
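The virt changes above repeatedly split a single sentence across distribution conditionals so that each build gets the correct link target. A minimal sketch of the pattern, with placeholder paths standing in for the real ones:

[source,asciidoc]
----
To diagnose issues that trigger
ifdef::openshift-dedicated,openshift-rosa[]
xref:old-path.adoc#anchor_old-context[alerts],
endif::openshift-dedicated,openshift-rosa[]
ifndef::openshift-dedicated,openshift-rosa[]
xref:new-path.adoc#anchor_new-context[alerts],
endif::openshift-dedicated,openshift-rosa[]
follow the runbook procedures.
----

Exactly one of the two xref lines survives attribute preprocessing for any given distribution, so the rendered sentence always reads straight through.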
diff --git a/virt/support/virt-collecting-virt-data.adoc b/virt/support/virt-collecting-virt-data.adoc index 89fcaf8bd117..2aeedc693cc5 100644 --- a/virt/support/virt-collecting-virt-data.adoc +++ b/virt/support/virt-collecting-virt-data.adoc @@ -19,8 +19,12 @@ Prometheus is a time-series database and a rule evaluation engine for metrics. P Alertmanager:: The Alertmanager service handles alerts received from Prometheus. The Alertmanager is also responsible for sending the alerts to external notification systems. - +ifdef::openshift-dedicated,openshift-rosa[] For information about the {product-title} monitoring stack, see xref:../../observability/monitoring/monitoring-overview.adoc#about-openshift-monitoring[About {product-title} monitoring]. +endif::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa[] +For information about the {product-title} monitoring stack, see xref:../../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring]. +endif::openshift-dedicated,openshift-rosa[] // This procedure is in the assembly so that we can add xrefs instead of a long list of additional resources. [id="virt-collecting-data-about-your-environment_{context}"] @@ -29,9 +33,14 @@ For information about the {product-title} monitoring stack, see xref:../../obser Collecting data about your environment minimizes the time required to analyze and determine the root cause. .Prerequisites - +ifdef::openshift-dedicated,openshift-rosa[] * xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#modifying-retention-time-for-prometheus-metrics-data_configuring-the-monitoring-stack[Set the retention time for Prometheus metrics data] to a minimum of seven days. * xref:../../observability/monitoring/managing-alerts.adoc#sending-notifications-to-external-systems_managing-alerts[Configure the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox] so that they can be viewed and persisted outside the cluster. +endif::openshift-dedicated,openshift-rosa[] +ifndef::openshift-dedicated,openshift-rosa[] +* xref:../../observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.adoc#modifying-retention-time-for-prometheus-metrics-data_storing-and-recording-data[Set the retention time for Prometheus metrics data] to a minimum of seven days. +* xref:../../observability/monitoring/configuring-core-platform-monitoring/configuring-alerts-and-notifications.adoc#configuring-alert-notifications_configuring-alerts-and-notifications[Configure the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox] so that they can be viewed and persisted outside the cluster. +endif::openshift-dedicated,openshift-rosa[] * Record the exact number of affected nodes and virtual machines. .Procedure @@ -41,10 +50,10 @@ ifndef::openshift-rosa,openshift-dedicated[] . xref:../../support/gathering-cluster-data.adoc#support_gathering_data_gathering-cluster-data[Collect must-gather data for the cluster]. . link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.18/html-single/troubleshooting_openshift_data_foundation/index#downloading-log-files-and-diagnostic-information_rhodf[Collect must-gather data for {rh-storage-first}], if necessary. . xref:../../virt/support/virt-collecting-virt-data.adoc#virt-using-virt-must-gather_virt-collecting-virt-data[Collect must-gather data for {VirtProductName}]. -. 
xref:../../observability/monitoring/managing-metrics.adoc#querying-metrics-for-all-projects-as-an-administrator_managing-metrics[Collect Prometheus metrics for the cluster]. +. xref:../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#querying-metrics-for-all-projects-with-mon-dashboard_accessing-metrics-as-an-administrator[Collect Prometheus metrics for the cluster]. endif::openshift-rosa,openshift-dedicated[] ifdef::openshift-rosa,openshift-dedicated[] -* xref:../../observability/monitoring/managing-metrics.adoc#querying-metrics-for-all-projects-as-an-administrator_managing-metrics[Collect Prometheus metrics for the cluster]. +* xref:../../observability/monitoring/managing-metrics.adoc#querying-metrics-for-all-projects-with-mon-dashboard_managing-metrics[Collect Prometheus metrics for the cluster]. endif::openshift-rosa,openshift-dedicated[] [id="virt-collecting-data-about-vms_{context}"] diff --git a/welcome/learn_more_about_openshift.adoc b/welcome/learn_more_about_openshift.adoc index cf6831324398..efc7f877328b 100644 --- a/welcome/learn_more_about_openshift.adoc +++ b/welcome/learn_more_about_openshift.adoc @@ -212,9 +212,8 @@ a|* xref:../operators/understanding/crds/crd-extending-api-with-crds.adoc#crd-cr a|* xref:../observability/network_observability/metrics-alerts-dashboards.adoc#metrics-alerts-dashboards_metrics-alerts-dashboards[Using metrics with dashboards and alerts] * xref:../observability/network_observability/observing-network-traffic.adoc#network-observability-trafficflow_nw-observe-network-traffic[Obsserving the network traffic from the Traffic flows view] -| xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview] -a|* xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overview[In-cluster monitoring] -* xref:../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring_about-remote-health-monitoring[Remote health monitoring] +| xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] +a|* xref:../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring_about-remote-health-monitoring[Remote health monitoring] * xref:../observability/power_monitoring/power-monitoring-overview.adoc#power-monitoring-overview[{PM-title-c} (Technology Preview)] |=== @@ -258,7 +257,7 @@ a|* xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overvie | | -| xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring] +| xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[Monitoring] | |=== From 75cd5e03aae0106a182ffa1e53578972a1390361 Mon Sep 17 00:00:00 2001 From: mletalie Date: Thu, 19 Sep 2024 12:31:00 -0400 Subject: [PATCH 255/669] WIF commit --- modules/create-wif-cluster-ocm.adoc | 3 ++- modules/osd-create-cluster-ccs.adoc | 28 ++++++++++++++-------------- 2 files changed, 16 insertions(+), 15 deletions(-) diff --git a/modules/create-wif-cluster-ocm.adoc b/modules/create-wif-cluster-ocm.adoc index 02db88a139cf..b0f29fbc4995 100644 --- a/modules/create-wif-cluster-ocm.adoc +++ b/modules/create-wif-cluster-ocm.adoc @@ -55,13 +55,14 @@ Workload Identity Federation (WIF) is only supported on {product-title} version .. Select a cloud provider region from the *Region* drop-down menu. .. Select a *Single zone* or *Multi-zone* configuration. 
+ -.. Optional: Select *Enable Secure Boot for Shielded VMs* to use Shielded VMs when installing your cluster. For more information, see link:https://cloud.google.com/security/products/shielded-vm[Shielded VMs]. +.. Optional: Select *Enable Secure Boot support for Shielded VMs* to use Shielded VMs when installing your cluster. For more information, see link:https://cloud.google.com/security/products/shielded-vm[Shielded VMs]. + [IMPORTANT] ==== To successfully create a cluster, you must select *Enable Secure Boot support for Shielded VMs* if your organization has the policy constraint `constraints/compute.requireShieldedVm` enabled. For more information regarding GCP organizational policy constraints, see link:https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints[Organization policy constraints]. ==== + + .. Leave *Enable user workload monitoring* selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default. . Optional: Expand *Advanced Encryption* to make changes to encryption settings. diff --git a/modules/osd-create-cluster-ccs.adoc b/modules/osd-create-cluster-ccs.adoc index dc67a9f9b39a..b747ec13f89e 100644 --- a/modules/osd-create-cluster-ccs.adoc +++ b/modules/osd-create-cluster-ccs.adoc @@ -68,36 +68,36 @@ To successfully create a cluster, you must select *Enable Secure Boot support fo + .. Leave *Enable user workload monitoring* selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default. -.. Optional: Expand *Advanced Encryption* to make changes to encryption settings. -... Accept the default setting *Use default KMS Keys* to use your default AWS KMS key, or select *Use Custom KMS keys* to use a custom KMS key. -.... With *Use Custom KMS keys* selected, enter the AWS Key Management Service (KMS) custom key Amazon Resource Name (ARN) ARN in the *Key ARN* field. -The key is used for encrypting all control plane, infrastructure, worker node root volumes, and persistent volumes in your cluster. +. Optional: Expand *Advanced Encryption* to make changes to encryption settings. -+ +.. Select *Use custom KMS keys* to use custom KMS keys. If you prefer not to use custom KMS keys, leave the default setting *Use default KMS Keys*. -... Select *Use custom KMS keys* to use custom KMS keys. If you prefer not to use custom KMS keys, leave the default setting *Use default KMS Keys*. + + [IMPORTANT] ==== To use custom KMS keys, the IAM service account `osd-ccs-admin` must be granted the *Cloud KMS CryptoKey Encrypter/Decrypter* role. For more information about granting roles on a resource, see link:https://cloud.google.com/kms/docs/iam#granting_roles_on_a_resource[Granting roles on a resource]. ==== -+ -With *Use Custom KMS keys* selected: -.... Select a key ring location from the *Key ring location* drop-down menu. -.... Select a key ring from the *Key ring* drop-down menu. -.... Select a key name from the *Key name* drop-down menu. -.... Provide the *KMS Service Account*. + -... Optional: Select *Enable FIPS cryptography* if you require your cluster to be FIPS validated. + +.. With *Use Custom KMS keys* selected: + +... Select a key ring location from the *Key ring location* drop-down menu. +... Select a key ring from the *Key ring* drop-down menu. +... Select a key name from the *Key name* drop-down menu. +... Provide the *KMS Service Account*. + +.. 
Optional: Select *Enable FIPS cryptography* if you require your cluster to be FIPS validated. + [NOTE] ==== If *Enable FIPS cryptography* is selected, *Enable additional etcd encryption* is enabled by default and cannot be disabled. You can select *Enable additional etcd encryption* without selecting *Enable FIPS cryptography*. ==== + -... Optional: Select *Enable additional etcd encryption* if you require etcd key value encryption. With this option, the etcd key values are encrypted, but the keys are not. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in {product-title} clusters by default. +.. Optional: Select *Enable additional etcd encryption* if you require etcd key value encryption. +With this option, the etcd key values are encrypted, but not the keys. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in {product-title} clusters by default. + [NOTE] ==== From c020eb9f1ed419c6c24f7e046aea00d178c9bd51 Mon Sep 17 00:00:00 2001 From: Aidan Reilly <74046732+aireilly@users.noreply.github.com> Date: Thu, 30 Jan 2025 16:43:37 +0000 Subject: [PATCH 256/669] Adding docs for the T-GM antenna delay settings --- _topic_maps/_topic_map.yml | 6 +-- ...e810-hardware-configuration-reference.adoc | 54 +++++-------------- networking/ptp/configuring-ptp.adoc | 2 +- .../ptp/ptp-events-rest-api-reference-v2.adoc | 2 +- .../ptp/ptp-events-rest-api-reference.adoc | 2 +- 5 files changed, 20 insertions(+), 46 deletions(-) diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index c7e6ec215ef1..0602e77a8dc1 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -1506,15 +1506,15 @@ Topics: Topics: - Name: About Precision Time Protocol in OpenShift cluster nodes File: about-ptp - - Name: Configuring Precision Time Protocol devices + - Name: Configuring PTP devices File: configuring-ptp - Name: Developing PTP events consumer applications with the REST API v2 File: ptp-cloud-events-consumer-dev-reference-v2 - - Name: Precision Time Protocol events REST API v2 reference + - Name: PTP events REST API v2 reference File: ptp-events-rest-api-reference-v2 - Name: Developing PTP events consumer applications with the REST API v1 File: ptp-cloud-events-consumer-dev-reference - - Name: Precision Time Protocol events REST API v1 reference + - Name: PTP events REST API v1 reference File: ptp-events-rest-api-reference - Name: CIDR range definitions File: cidr-range-definitions diff --git a/modules/nw-ptp-e810-hardware-configuration-reference.adoc b/modules/nw-ptp-e810-hardware-configuration-reference.adoc index 69a28a118ff3..6102e37a0fc1 100644 --- a/modules/nw-ptp-e810-hardware-configuration-reference.adoc +++ b/modules/nw-ptp-e810-hardware-configuration-reference.adoc @@ -31,19 +31,31 @@ The `SMA2` connector is bidirectional. ==== Set `spec.profile.plugins.e810.ublxCmds` parameters to configure the GNSS clock in the `PtpConfig` custom resource (CR). + +[IMPORTANT] +==== +You must configure an offset value to compensate for T-GM GPS antenna cable signal delay. +To configure the optimal T-GM antenna offset value, make precise measurements of the GNSS antenna cable signal delay. +Red{nbsp}Hat cannot assist in this measurement or provide any values for the required delay offsets. +==== + Each of these `ublxCmds` stanzas correspond to a configuration that is applied to the host NIC by using `ubxtool` commands. 
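For orientation before the fragment below: the module states that these settings live at `spec.profile.plugins.e810.ublxCmds` in the `PtpConfig` CR. A minimal sketch of that surrounding structure, in which the resource name, namespace, and profile name are illustrative assumptions:

[source,yaml]
----
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: grandmaster          # illustrative name
  namespace: openshift-ptp
spec:
  profile:
    - name: grandmaster
      plugins:
        e810:
          ublxCmds:          # one stanza per ubxtool invocation, as shown next
            - args:
                - "-P"
                - "29.20"
                - "-z"
                - "CFG-HW-ANT_CFG_VOLTCTRL,1"
              reportOutput: false
----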
For example:

[source,yaml]
----
ublxCmds:
-  - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1
+  - args:
      - "-P"
      - "29.20"
      - "-z"
      - "CFG-HW-ANT_CFG_VOLTCTRL,1"
+      - "-z"
+      - "CFG-TP-ANT_CABLEDELAY,<antenna_delay_offset>" <1>
    reportOutput: false
----
+<1> Measured T-GM antenna delay offset in nanoseconds.
+To get the required delay offset value, you must measure the cable delay using external test equipment.

The following table describes the equivalent `ubxtool` commands:

.ubxtool commands
[width="90%", options="header"]
|====
|ubxtool command|Description
-|`ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1`|Enables antenna voltage control. Enables antenna status to be reported in the `UBX-MON-RF` and `UBX-INF-NOTICE` log messages.
+|`ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 -z CFG-TP-ANT_CABLEDELAY,<antenna_delay_offset>`|Enables antenna voltage control, allows antenna status to be reported in the `UBX-MON-RF` and `UBX-INF-NOTICE` log messages, and sets an `<antenna_delay_offset>` value in nanoseconds that offsets the GPS antenna cable signal delay.
|`ubxtool -P 29.20 -e GPS`|Enables the antenna to receive GPS signals.
|`ubxtool -P 29.20 -d Galileo`|Configures the antenna to receive signal from the Galileo GPS satellite.
|`ubxtool -P 29.20 -d GLONASS`|Disables the antenna from receiving signal from the GLONASS GPS satellite.
|`ubxtool -P 29.20 -d SBAS`|Disables the antenna from receiving signal from the SBAS GPS satellite.
|`ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000`|Configures the GNSS receiver survey-in process to improve its initial position estimate. This can take up to 24 hours to achieve an optimal result.
|`ubxtool -P 29.20 -p MON-HW`|Runs a single automated scan of the hardware and reports on the NIC state and configuration settings.
|====
-
-The E810 plugin implements the following interfaces:
-
-.E810 plugin interfaces
-[cols="1,3", width="90%", options="header"]
-|====
-|Interface
-|Description
-
-|`OnPTPConfigChangeE810`
-|Runs whenever you update the `PtpConfig` CR.
-The function parses the plugin options and applies the required configurations to the network device pins based on the configuration data.
-
-|`AfterRunPTPCommandE810`
-|Runs after launching the PTP processes and running the `gpspipe` PTP command.
-The function processes the plugin options and runs `ubxtool` commands, storing the output in the plugin-specific data.
-
-|`PopulateHwConfigE810`
-|Populates the `NodePtpDevice` CR based on hardware-specific data in the `PtpConfig` CR.
-|====
-
-The E810 plugin has the following structs and variables:
-
-.E810 plugin structs and variables
-[cols="1,3", width="90%", options="header"]
-|====
-|Struct
-|Description
-
-|`E810Opts`
-|Represents options for the E810 plugin, including boolean flags and a map of network device pins.
-
-|`E810UblxCmds`
-|Represents configurations for `ubxtool` commands with a boolean flag and a slice of strings for command arguments.
-
-|`E810PluginData`
-|Holds plugin-specific data used during plugin execution.
-|==== diff --git a/networking/ptp/configuring-ptp.adoc b/networking/ptp/configuring-ptp.adoc index 88e2d37d62ca..2bf457efda7d 100644 --- a/networking/ptp/configuring-ptp.adoc +++ b/networking/ptp/configuring-ptp.adoc @@ -1,6 +1,6 @@ :_mod-docs-content-type: ASSEMBLY [id="configuring-ptp"] -= Configuring Precision Time Protocol devices += Configuring PTP devices include::_attributes/common-attributes.adoc[] :context: configuring-ptp diff --git a/networking/ptp/ptp-events-rest-api-reference-v2.adoc b/networking/ptp/ptp-events-rest-api-reference-v2.adoc index c7a0421230e9..e2ba9e9b7a13 100644 --- a/networking/ptp/ptp-events-rest-api-reference-v2.adoc +++ b/networking/ptp/ptp-events-rest-api-reference-v2.adoc @@ -1,6 +1,6 @@ :_mod-docs-content-type: ASSEMBLY [id="ptp-events-rest-api-reference-v2"] -= Precision Time Protocol events REST API v2 reference += PTP events REST API v2 reference include::_attributes/common-attributes.adoc[] :context: using-ptp-hardware-fast-events-framework-v2 diff --git a/networking/ptp/ptp-events-rest-api-reference.adoc b/networking/ptp/ptp-events-rest-api-reference.adoc index e68e5002faf9..ca2f40c6b63e 100644 --- a/networking/ptp/ptp-events-rest-api-reference.adoc +++ b/networking/ptp/ptp-events-rest-api-reference.adoc @@ -1,6 +1,6 @@ :_mod-docs-content-type: ASSEMBLY [id="ptp-events-rest-api-reference"] -= Precision Time Protocol events REST API v1 reference += PTP events REST API v1 reference include::_attributes/common-attributes.adoc[] :context: using-ptp-hardware-fast-events-framework-v1 From 513b1a91156622e4d33377371facb8fec3f24697 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E2=80=9CShauna=20Diaz=E2=80=9D?= Date: Tue, 11 Feb 2025 08:17:48 -0500 Subject: [PATCH 257/669] OSDOCS-11354: corrects blueprint file reference --- ...embed-microshift-image-offline-deploy.adoc | 55 +++++++++---------- 1 file changed, 27 insertions(+), 28 deletions(-) diff --git a/modules/microshift-embed-microshift-image-offline-deploy.adoc b/modules/microshift-embed-microshift-image-offline-deploy.adoc index 8141f2e5d4a4..653a88dce9a1 100644 --- a/modules/microshift-embed-microshift-image-offline-deploy.adoc +++ b/modules/microshift-embed-microshift-image-offline-deploy.adoc @@ -7,15 +7,15 @@ [id="microshift-embed-microshift-image-offline-deployment_{context}"] = Embedding {microshift-short} containers for offline deployments -You can use image builder to create `rpm-ostree` system images with embedded {microshift-short} container images. To embed container images, you must add the image references to your image builder blueprint. +You can use image builder to create {op-system-ostree} images with embedded {microshift-short} container images. To embed container images, you must add the image references to your image builder blueprint file. .Prerequisites * You have root-user access to your build host. * Your build host meets the image builder system requirements. -* You have installed and set up image builder and the `composer-cli` tool. -* You have created a {op-system-ostree} image blueprint. -* You have installed jq. +* You installed and set up image builder and the `composer-cli` tool. +* You created a {op-system-ostree} image blueprint. +* You installed jq. 
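The procedure below appends `[[containers]]` entries to the blueprint file that the `BLUEPRINT_FILE` variable points at. As a rough sketch of the end state, a blueprint that embeds one of the images listed later in this patch might look like the following; the blueprint name, description, and package list are illustrative assumptions:

[source,toml]
----
name = "microshift-offline"
description = "RHEL for Edge image with embedded MicroShift container images"
version = "0.0.1"

[[packages]]
name = "microshift"
version = "*"

[[containers]]
source = "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82cfef91557f9a70cff5a90accba45841a37524e9b93f98a97b20f6b2b69e5db"
----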
.Procedure @@ -35,7 +35,7 @@ Replace `` with the numerical value of the release you are depl + [source,terminal] ---- -$ ls /usr/share/microshift/release +$ sudo ls /usr/share/microshift/release ---- + .Example output @@ -45,50 +45,49 @@ release-x86_64.json release-aarch64.json ---- + -If you installed the `microshift-release-info` RPM, you can proceed to step 4. +If you installed the `microshift-release-info` RPM, proceed to step 4. . If you did not complete step 2, download and unpack the `microshift-release-info` RPM without installing it: .. Download the RPM package by running the following command: + -[source,terminal] +[source,terminal,subs="+quotes"] ---- -$ sudo dnf download microshift-release-info- +$ sudo dnf download microshift-release-info-__ # <1> ---- -Replace `` with the numerical value of the release you are deploying, using the entire version number, such as `4.18.1`. +<1> Replace `__` with the numerical value of the release you are deploying, using the entire version number, such as `4.18.1`. + -.Example rpm -[source,terminal] +.Example RPM output +[source,terminal,subs="+quotes"] ---- -microshift-release-info-4.18.1.*.el9.noarch.rpm <1> +microshift-release-info-4.18.1.-202511101230.p0.g7dc6a00.assembly.4.18.1.el9.noarch.rpm ---- -<1> The `*` represents the date and commit ID. Your output should contain both, for example `-202511101230.p0.g7dc6a00.assembly.4.18.1`. .. Unpack the RPM package without installing it by running the following command: + -[source,terminal] +[source,terminal,subs="+quotes"] ---- -$ rpm2cpio | cpio -idmv <1> +$ rpm2cpio __ | cpio -idmv # <1> ./usr/share/microshift/release/release-aarch64.json ./usr/share/microshift/release/release-x86_64.json ---- -<1> Replace `` with the name of the RPM package from the previous step. +<1> Replace `__` with the name of the RPM package from the previous step. . Define the location of your JSON file, which contains the container reference information, by running the following command: + -[source,terminal] +[source,terminal,subs="+quotes"] ---- -$ RELEASE_FILE= +$ RELEASE_FILE=__ # <1> ---- -Replace `` with the full path to your JSON file. Be sure to use the file needed for your architecture. +<1> Replace `__` with the full path to your JSON file. Be sure to use the file needed for your architecture. . Define the location of your TOML file, which contains instructions for building the image, by running the following command: + -[source,terminal] +[source,terminal,subs="+quotes"] ---- -$ BLUEPRINT_FILE= +$ BLUEPRINT_FILE=__ # <1> ---- -Replace `` with the full path to your JSON file. +<1> Replace `__` with the full path to your TOML file. . Generate and then embed the container image references in your blueprint TOML file by running the following command: + @@ -97,7 +96,7 @@ Replace `` with the full path to your JSON file. $ jq -r '.images | .[] | ("[[containers]]\nsource = \"" + . + "\"\n")' "${RELEASE_FILE}" >> "${BLUEPRINT_FILE}" ---- + -.Example resulting `` fragment showing container references +.Example resulting TOML fragment showing container references [source,terminal] ---- [[containers]] @@ -107,12 +106,12 @@ source = "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82cfef91557f9a70 source = "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82cfef91557f9a70cff5a90accba45841a37524e9b93f98a97b20f6b2b69e5db" ---- -. You can manually embed any container image by adding it to the image builder blueprint using the following example: +. 
You can manually embed any container image by adding it to an image builder blueprint file using the following example:
+
-.Example section for manually embedding container image to image builder
-[source,terminal]
+.Example section for manually embedding a container image in a blueprint
+[source,text,subs="+quotes"]
----
[[containers]]
-source = "<image_reference>"
+source = "_<image_reference>_"
----
-Replace `<image_reference>` with the exact reference to a container image used by the {microshift-short} version you are deploying.
+Replace `_<image_reference>_` with the exact reference to a container image used by the {microshift-short} version you are deploying.

From 5c486298b9780048f34c79f8a707f299360ba9d Mon Sep 17 00:00:00 2001
From: William Gabor
Date: Fri, 27 Sep 2024 09:49:45 -0400
Subject: [PATCH 258/669] OCPBUGS-37487 Removed the 4.6 note under the
 Installing RHCOS and starting the OpenShift Container Platform section

---
 modules/creating-machines-bare-metal.adoc | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/modules/creating-machines-bare-metal.adoc b/modules/creating-machines-bare-metal.adoc
index 5003efb554e4..a0ed776c6129 100644
--- a/modules/creating-machines-bare-metal.adoc
+++ b/modules/creating-machines-bare-metal.adoc
@@ -26,7 +26,4 @@ You can configure {op-system} during ISO and PXE installations by using the foll
 
 Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines.
 
-[NOTE]
-====
-As of {product-title} 4.6, the {op-system} ISO and other installation artifacts provide support for installation on disks with 4K sectors.
-====
+

From c453e64a1487c2f3d03743cc4b083bfeb77f106a Mon Sep 17 00:00:00 2001
From: opayne1
Date: Thu, 13 Feb 2025 15:43:13 -0500
Subject: [PATCH 259/669] OSDOCS#13072: Adds Content Security Policy for web
 console

---
 _topic_maps/_topic_map.yml       |  2 ++
 modules/csp-overview.adoc        | 36 +++++++++++++++++++
 .../content-security-policy.adoc | 25 +++++++++++++
 .../dynamic-plugin-example.adoc  |  2 +-
 4 files changed, 64 insertions(+), 1 deletion(-)
 create mode 100644 modules/csp-overview.adoc
 create mode 100644 web_console/dynamic-plugin/content-security-policy.adoc

diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml
index 0602e77a8dc1..50ab618879a9 100644
--- a/_topic_maps/_topic_map.yml
+++ b/_topic_maps/_topic_map.yml
@@ -887,6 +887,8 @@ Topics:
         File: dynamic-plugins-get-started
       - Name: Deploy your plugin on a cluster
         File: deploy-plugin-cluster
+      - Name: Content Security Policy
+        File: content-security-policy
       - Name: Dynamic plugin example
         File: dynamic-plugin-example
       - Name: Dynamic plugin reference
diff --git a/modules/csp-overview.adoc b/modules/csp-overview.adoc
new file mode 100644
index 000000000000..1282dfe2b8be
--- /dev/null
+++ b/modules/csp-overview.adoc
@@ -0,0 +1,36 @@
+// Module included in the following assemblies:
+//
+// * web_console/dynamic-plugin/content-security-policy.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="content-security-policy-overview_{context}"]
+= Content Security Policy (CSP) overview
+
+A Content Security Policy (CSP) is delivered to the browser in the `Content-Security-Policy-Report-Only` response header. The policy is specified as a series of directives and values. Each directive type serves a different purpose, and each directive can have a list of values representing allowed sources.
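As a rough sketch of how such directives might be declared on a plugin, consider the following; the plugin name, backend service, and source URLs are illustrative assumptions, and the exact field layout should be verified against the `ConsolePlugin` API:

[source,yaml]
----
apiVersion: console.openshift.io/v1
kind: ConsolePlugin
metadata:
  name: my-plugin                  # illustrative
spec:
  displayName: My Plugin
  backend:
    type: Service
    service:
      name: my-plugin-service      # illustrative
      namespace: my-plugin-ns
      port: 9443
      basePath: /
  contentSecurityPolicy:
    - directive: ScriptSrc         # one of DefaultSrc, ScriptSrc, StyleSrc, ImgSrc, FontSrc
      values:
        - https://cdn.example.com
    - directive: ImgSrc
      values:
        - https://images.example.com
----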
+ +[id="content-security-policy-key-features_{context}"] +== Key features of `contentSecurityPolicy` + +[discrete] +=== Directive Types + +The supported directive types include `DefaultSrc`, `ScriptSrc`, `StyleSrc`, `ImgSrc`, and `FontSrc`. These directives allow you to specify valid sources for loading different types of content for your plugin. Each directive type serves a different purpose. For example, `ScriptSrc` defines valid JavaScript sources, while `ImgSrc` controls where images can be loaded from. + +//backporting the ConnectSrc directive, but that is tbd - openshift/console#14701 and https://github.com/openshift/api/pull/2164 + + +[discrete] +=== Values + +Each directive can have a list of values representing allowed sources. For example, `ScriptSrc` can specify multiple external scripts. These values are restricted to 1024 characters and cannot include whitespace, commas, or semicolons. Additionally, single-quoted strings and wildcard characters (`*`) are disallowed. + +[discrete] +=== Unified Policy + +The {product-title} web console aggregates the CSP directives across all enabled `ConsolePlugin` custom resources (CRs) and merges them with its own default policy. The combined policy is then applied with the `Content-Security-Policy-Report-Only` HTTP response header. + +[discrete] +=== Validation Rules +* Each directive can have up to 16 unique values. +* The total size of all values across directives must not exceed 8192 bytes (8KB). +* Each value must be unique, and additional validation rules are in place to ensure no quotes, spaces, commas, or wildcard symbols are used. \ No newline at end of file diff --git a/web_console/dynamic-plugin/content-security-policy.adoc b/web_console/dynamic-plugin/content-security-policy.adoc new file mode 100644 index 000000000000..a5373dd458b4 --- /dev/null +++ b/web_console/dynamic-plugin/content-security-policy.adoc @@ -0,0 +1,25 @@ +:_mod-docs-content-type: ASSEMBLY +[id="content-security-policy_{context}"] += Content Security Policy (CSP) +include::_attributes/common-attributes.adoc[] +:context: content-security-policy + +toc::[] + +You can specify Content Security Policy (CSP) directives for your dynamic plugin using the `contentSecurityPolicy` field in the `ConsolePluginSpec` file. This field helps mitigate potential security risks by specifying which sources are allowed for fetching content like scripts, styles, images, and fonts. For dynamic plugins that require loading resources from external sources, defining custom CSP rules ensures secure integration into the {product-title} console. + +[IMPORTANT] +==== +The console currently uses the `Content-Security-Policy-Report-Only` response header, so the browser will only warn about CSP violations in the web console and enforcement of CSP policies will be limited. CSP violations will be logged in the browser console, but the associated CSP directives will not be enforced. This feature is behind a `feature-gate`, so you will need to manually enable it. + +For more information, see xref:../../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling-features-console_nodes-cluster-enabling[Enabling feature sets using the web console]. 
+==== + +include::modules/csp-overview.adoc[leveloffset=+1] + +[role="_additional-resources"] +[id="content-security-policy_additional-resources"] +== Additional resources + +* link:https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy[Content Security Policy (CSP)] + diff --git a/web_console/dynamic-plugin/dynamic-plugin-example.adoc b/web_console/dynamic-plugin/dynamic-plugin-example.adoc index d8056a32d29b..67fa72d4b26b 100644 --- a/web_console/dynamic-plugin/dynamic-plugin-example.adoc +++ b/web_console/dynamic-plugin/dynamic-plugin-example.adoc @@ -8,4 +8,4 @@ toc::[] Before working through the example, verify that the plugin is working by following the steps in xref:../../web_console/dynamic-plugin/dynamic-plugins-get-started.adoc#dynamic-plugin-development_dynamic-plugins-get-started[Dynamic plugin development] -include::modules/adding-tab-pods-page.adoc[leveloffset=+1] +include::modules/adding-tab-pods-page.adoc[leveloffset=+1] \ No newline at end of file From 340cca8273db368a459b038775f07e72b68f2684 Mon Sep 17 00:00:00 2001 From: Gabriel McGoldrick Date: Tue, 18 Feb 2025 14:09:03 +0000 Subject: [PATCH 260/669] OBSDOCS-1475 Move COO higher up in nav --- _topic_maps/_topic_map.yml | 52 +++++++++++++++++++------------------- 1 file changed, 26 insertions(+), 26 deletions(-) diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 50ab618879a9..ad0de2176d4a 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2903,6 +2903,32 @@ Topics: Topics: - Name: About Observability File: index +- Name: Cluster Observability Operator + Dir: cluster_observability_operator + Distros: openshift-enterprise,openshift-origin + Topics: + - Name: Cluster Observability Operator release notes + File: cluster-observability-operator-release-notes + - Name: Cluster Observability Operator overview + File: cluster-observability-operator-overview + - Name: Installing the Cluster Observability Operator + File: installing-the-cluster-observability-operator + - Name: Configuring the Cluster Observability Operator to monitor a service + File: configuring-the-cluster-observability-operator-to-monitor-a-service + - Name: Observability UI plugins + Dir: ui_plugins + Distros: openshift-enterprise,openshift-origin + Topics: + - Name: Observability UI plugins overview + File: observability-ui-plugins-overview + - Name: Logging UI plugin + File: logging-ui-plugin + - Name: Distributed tracing UI plugin + File: distributed-tracing-ui-plugin + - Name: Troubleshooting UI plugin + File: troubleshooting-ui-plugin +# - Name: Dashboard UI plugin +# File: dashboard-ui-plugin - Name: Monitoring Dir: monitoring Distros: openshift-enterprise,openshift-origin @@ -3263,32 +3289,6 @@ Topics: File: visualizing-power-monitoring-metrics - Name: Uninstalling power monitoring File: uninstalling-power-monitoring -- Name: Cluster Observability Operator - Dir: cluster_observability_operator - Distros: openshift-enterprise,openshift-origin - Topics: - - Name: Cluster Observability Operator release notes - File: cluster-observability-operator-release-notes - - Name: Cluster Observability Operator overview - File: cluster-observability-operator-overview - - Name: Installing the Cluster Observability Operator - File: installing-the-cluster-observability-operator - - Name: Configuring the Cluster Observability Operator to monitor a service - File: configuring-the-cluster-observability-operator-to-monitor-a-service - - Name: Observability UI plugins - Dir: ui_plugins - Distros: 
openshift-enterprise,openshift-origin
-    Topics:
-    - Name: Observability UI plugins overview
-      File: observability-ui-plugins-overview
-    - Name: Logging UI plugin
-      File: logging-ui-plugin
-    - Name: Distributed tracing UI plugin
-      File: distributed-tracing-ui-plugin
-    - Name: Troubleshooting UI plugin
-      File: troubleshooting-ui-plugin
-#      - Name: Dashboard UI plugin
-#        File: dashboard-ui-plugin
---
 Name: Scalability and performance
 Dir: scalability_and_performance
From f917fc625f3212c0077662df0ab89b3255209887 Mon Sep 17 00:00:00 2001
From: Steven Smith
Date: Fri, 7 Feb 2025 10:38:58 -0500
Subject: [PATCH 261/669] Updates additional configuration rules for UDN

---
 modules/nw-udn-additional-config-details.adoc | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/modules/nw-udn-additional-config-details.adoc b/modules/nw-udn-additional-config-details.adoc
index 5dff9c355b3a..36880049bf13 100644
--- a/modules/nw-udn-additional-config-details.adoc
+++ b/modules/nw-udn-additional-config-details.adoc
@@ -20,12 +20,25 @@ The following table explains additional configurations for UDN that are optional

The `joinSubnets` field configures the routing between different segments within a user-defined network. Dual-stack clusters can set 2 subnets, one for each IP family; otherwise, only 1 subnet is allowed. This field is only allowed for the `Primary` network.

-|`spec.IPAMLifecycle`
+|`spec.ipam.lifecycle`
|object
-|The `IPAMLifecycle` field configures the IP address management system (IPAM). You might use this field for virtual workloads to ensure persistent IP addresses. This field is allowed when `topology` is `layer2`. The `subnets` field must be specified when this field is specified. Setting a value of `Persistent` ensures that your virtual workloads have persistent IP addresses across reboots and migration. These are assigned by the container network interface (CNI) and used by OVN-Kubernetes to program pod IP addresses. You must not change this for pod annotations.
+|The `spec.ipam.lifecycle` field configures the IP address management system (IPAM). You might use this field for virtual workloads to ensure persistent IP addresses. The only allowed value is `Persistent`, which
+ensures that your virtual workloads have persistent IP addresses across reboots and migration. These are assigned by the container network interface (CNI) and used by OVN-Kubernetes to program pod IP addresses. You must not change this for pod annotations.
+
+Setting a value of `Persistent` is only supported when `spec.ipam.mode` is set to `Enabled`.
+
+|`spec.ipam.mode`
+|object
+|The `spec.ipam.mode` field controls how much of the IP configuration is managed by OVN-Kubernetes. The following options are available:

**Enabled:**

When enabled, OVN-Kubernetes applies the IP configuration to the SDN infrastructure and assigns IP addresses from the selected subnet to the individual pods. This is the default setting. When set to `Enabled`, the `subnets` field must be defined.

**Disabled:**

When disabled, OVN-Kubernetes only assigns MAC addresses and provides layer 2 communication, which allows users to configure IP addresses. `Disabled` is only available for layer 2 (secondary) networks. By disabling IPAM, features that rely on selecting pods by IP, such as network policies and services, no longer function. Additionally, IP port security is disabled for interfaces attached to this network.
The `subnets` field must be empty when `spec.ipam.mode` is set to `Disabled`.

|`spec.layer2.mtu` and `spec.layer3.mtu`
|integer
-|The maximum transmission units (MTU). The default value is `1400`. The boundary for IPv4 is `574`, and for IPv6 it is `1280`.
+|The maximum transmission units (MTU). The default value is `1400`. The boundary for IPv4 is `576`, and for IPv6 it is `1280`.
|====
\ No newline at end of file
From 5ae270e547fc69a5955bc73ce66df29fc8be4763 Mon Sep 17 00:00:00 2001
From: Kathryn Alexander
Date: Wed, 12 Feb 2025 10:59:35 -0500
Subject: [PATCH 262/669] fixing api builds

---
 modules/network-flow-matrix.adoc | 2 +-
 .../distr_tracing/distr-tracing-rn.adoc | 2 +-
 .../podmonitor-monitoring-coreos-com-v1.adoc | 12 +-
 .../probe-monitoring-coreos-com-v1.adoc | 14 +-
 .../prometheus-monitoring-coreos-com-v1.adoc | 30 +--
 ...rvicemonitor-monitoring-coreos-com-v1.adoc | 12 +-
 rest_api/objects/index.adoc | 230 +++++++++---------
 ...sscontroller-operator-openshift-io-v1.adoc | 16 +-
 ...tconstraints-security-openshift-io-v1.adoc | 12 +-
 9 files changed, 165 insertions(+), 165 deletions(-)

diff --git a/modules/network-flow-matrix.adoc b/modules/network-flow-matrix.adoc
index 20844f7488ca..a448cf6aff83 100644
--- a/modules/network-flow-matrix.adoc
+++ b/modules/network-flow-matrix.adoc
@@ -102,4 +102,4 @@ In addition to the base network flows, the following matrix describes the ingres
 [%header,format=csv]
 |===
 include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.18/docs/stable/unique/aws-sno.csv[]
-|===
+|===
\ No newline at end of file
diff --git a/observability/distr_tracing/distr-tracing-rn.adoc b/observability/distr_tracing/distr-tracing-rn.adoc
index d4efe080afd6..60c833213091 100644
--- a/observability/distr_tracing/distr-tracing-rn.adoc
+++ b/observability/distr_tracing/distr-tracing-rn.adoc
@@ -139,7 +139,7 @@ endif::openshift-rosa[]
 [id="distr-tracing_3-3-1_{context}"]
 == Release notes for {DTProductName} 3.3.1

-The {DTProductName} 3.3.1 is a maintenance release with no changes because the {DTProductName} is bundled with the {OTELName} that is xref:../otel/otel-rn.adoc#otel-rn[released] with a bug fix.
+The {DTProductName} 3.3.1 is a maintenance release with no changes because the {DTProductName} is bundled with the {OTELName} that is xref:../otel/otel-rn.adoc#otel_rn[released] with a bug fix.

 This release of the {DTProductName} includes the {TempoName} and the deprecated {JaegerName}.

diff --git a/rest_api/monitoring_apis/podmonitor-monitoring-coreos-com-v1.adoc b/rest_api/monitoring_apis/podmonitor-monitoring-coreos-com-v1.adoc
index 4499d75fbf4d..70e34bc974ef 100644
--- a/rest_api/monitoring_apis/podmonitor-monitoring-coreos-com-v1.adoc
+++ b/rest_api/monitoring_apis/podmonitor-monitoring-coreos-com-v1.adoc
@@ -356,7 +356,7 @@ Cannot be set at the same time as `authorization`, or `basicAuth`.

| `params{}`
| `array (string)`
-| 
+|

| `path`
| `string`
@@ -382,7 +382,7 @@ metadata labels.
The Operator automatically adds relabelings for a few standard Kubernetes fields. -The original scrape job's name is available via the `__tmp_prometheus_job_name` label. +The original scrape job's name is available via the `\__tmp_prometheus_job_name` label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config -- @@ -1866,7 +1866,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../monitoring_apis/podmonitor-monitoring-coreos-com-v1.adoc#podmonitor-monitoring-coreos-com-v1[`PodMonitor`] schema -| +| |=== .HTTP responses @@ -1999,7 +1999,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../monitoring_apis/podmonitor-monitoring-coreos-com-v1.adoc#podmonitor-monitoring-coreos-com-v1[`PodMonitor`] schema -| +| |=== .HTTP responses diff --git a/rest_api/monitoring_apis/probe-monitoring-coreos-com-v1.adoc b/rest_api/monitoring_apis/probe-monitoring-coreos-com-v1.adoc index bd213298c117..db3c6a967fa4 100644 --- a/rest_api/monitoring_apis/probe-monitoring-coreos-com-v1.adoc +++ b/rest_api/monitoring_apis/probe-monitoring-coreos-com-v1.adoc @@ -530,7 +530,7 @@ It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. | `proxyConnectHeader{}` | `array` -| +| | `proxyConnectHeader{}[]` | `object` @@ -1143,9 +1143,9 @@ Type:: | RelabelConfigs to apply to the label set of the target before it gets scraped. The original ingress address is available via the -`__tmp_prometheus_ingress_address` label. It can be used to customize the +`\__tmp_prometheus_ingress_address` label. It can be used to customize the probed URL. -The original scrape job's name is available via the `__tmp_prometheus_job_name` label. +The original scrape job's name is available via the `\__tmp_prometheus_job_name` label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config | `relabelingConfigs[]` @@ -1194,9 +1194,9 @@ Description:: RelabelConfigs to apply to the label set of the target before it gets scraped. The original ingress address is available via the -`__tmp_prometheus_ingress_address` label. It can be used to customize the +`\__tmp_prometheus_ingress_address` label. It can be used to customize the probed URL. -The original scrape job's name is available via the `__tmp_prometheus_job_name` label. +The original scrape job's name is available via the `\__tmp_prometheus_job_name` label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config -- @@ -1868,7 +1868,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../monitoring_apis/probe-monitoring-coreos-com-v1.adoc#probe-monitoring-coreos-com-v1[`Probe`] schema -| +| |=== .HTTP responses @@ -2001,7 +2001,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../monitoring_apis/probe-monitoring-coreos-com-v1.adoc#probe-monitoring-coreos-com-v1[`Probe`] schema -| +| |=== .HTTP responses diff --git a/rest_api/monitoring_apis/prometheus-monitoring-coreos-com-v1.adoc b/rest_api/monitoring_apis/prometheus-monitoring-coreos-com-v1.adoc index 25ab2c3b8ef8..42fbb3934b97 100644 --- a/rest_api/monitoring_apis/prometheus-monitoring-coreos-com-v1.adoc +++ b/rest_api/monitoring_apis/prometheus-monitoring-coreos-com-v1.adoc @@ -823,7 +823,7 @@ in a breaking way. | `scrapeClasses[]` | `object` -| +| | `scrapeConfigNamespaceSelector` | `object` @@ -968,7 +968,7 @@ the triple using the matching operator . 
| `topologySpreadConstraints[]` | `object` -| +| | `tracingConfig` | `object` @@ -4368,7 +4368,7 @@ Type:: | `deny` | `boolean` -| +| |=== === .spec.containers @@ -10201,7 +10201,7 @@ It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. | `proxyConnectHeader{}` | `array` -| +| | `proxyConnectHeader{}[]` | `object` @@ -10475,7 +10475,7 @@ It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. | `proxyConnectHeader{}` | `array` -| +| | `proxyConnectHeader{}[]` | `object` @@ -11484,7 +11484,7 @@ It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. | `proxyConnectHeader{}` | `array` -| +| | `proxyConnectHeader{}[]` | `object` @@ -11982,7 +11982,7 @@ It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. | `proxyConnectHeader{}` | `array` -| +| | `proxyConnectHeader{}[]` | `object` @@ -13541,7 +13541,7 @@ More info: https://prometheus.io/docs/prometheus/latest/configuration/configurat | Relabelings configures the relabeling rules to apply to all scrape targets. The Operator automatically adds relabelings for a few standard Kubernetes fields -like `__meta_kubernetes_namespace` and `__meta_kubernetes_service_name`. +like `\__meta_kubernetes_namespace` and `\__meta_kubernetes_service_name`. Then the Operator adds the scrape class relabelings defined here. Then the Operator adds the target-specific relabelings defined in the scrape object. @@ -13683,7 +13683,7 @@ Description:: Relabelings configures the relabeling rules to apply to all scrape targets. The Operator automatically adds relabelings for a few standard Kubernetes fields -like `__meta_kubernetes_namespace` and `__meta_kubernetes_service_name`. +like `\__meta_kubernetes_namespace` and `\__meta_kubernetes_service_name`. Then the Operator adds the scrape class relabelings defined here. Then the Operator adds the target-specific relabelings defined in the scrape object. @@ -15850,7 +15850,7 @@ persistent volume is being resized. | `status` | `string` -| +| | `type` | `string` @@ -20992,7 +20992,7 @@ being performed. Only delete actions will be performed. | `shardStatuses[]` | `object` -| +| | `shards` | `integer` @@ -21246,7 +21246,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../monitoring_apis/prometheus-monitoring-coreos-com-v1.adoc#prometheus-monitoring-coreos-com-v1[`Prometheus`] schema -| +| |=== .HTTP responses @@ -21379,7 +21379,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../monitoring_apis/prometheus-monitoring-coreos-com-v1.adoc#prometheus-monitoring-coreos-com-v1[`Prometheus`] schema -| +| |=== .HTTP responses @@ -21481,7 +21481,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../autoscale_apis/scale-autoscaling-v1.adoc#scale-autoscaling-v1[`Scale`] schema -| +| |=== .HTTP responses @@ -21583,7 +21583,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../monitoring_apis/prometheus-monitoring-coreos-com-v1.adoc#prometheus-monitoring-coreos-com-v1[`Prometheus`] schema -| +| |=== .HTTP responses diff --git a/rest_api/monitoring_apis/servicemonitor-monitoring-coreos-com-v1.adoc b/rest_api/monitoring_apis/servicemonitor-monitoring-coreos-com-v1.adoc index 8056cde2491c..238ebd332ad5 100644 --- a/rest_api/monitoring_apis/servicemonitor-monitoring-coreos-com-v1.adoc +++ b/rest_api/monitoring_apis/servicemonitor-monitoring-coreos-com-v1.adoc @@ -346,7 +346,7 @@ Cannot be set at the same time as `authorization`, or `basicAuth`. | `params{}` | `array (string)` -| +| | `path` | `string` @@ -372,7 +372,7 @@ metadata labels. 
The Operator automatically adds relabelings for a few standard Kubernetes fields. -The original scrape job's name is available via the `__tmp_prometheus_job_name` label. +The original scrape job's name is available via the `\__tmp_prometheus_job_name` label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config @@ -768,7 +768,7 @@ It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. | `proxyConnectHeader{}` | `array` -| +| | `proxyConnectHeader{}[]` | `object` @@ -1304,7 +1304,7 @@ metadata labels. The Operator automatically adds relabelings for a few standard Kubernetes fields. -The original scrape job's name is available via the `__tmp_prometheus_job_name` label. +The original scrape job's name is available via the `\__tmp_prometheus_job_name` label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config -- @@ -1894,7 +1894,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../monitoring_apis/servicemonitor-monitoring-coreos-com-v1.adoc#servicemonitor-monitoring-coreos-com-v1[`ServiceMonitor`] schema -| +| |=== .HTTP responses @@ -2027,7 +2027,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../monitoring_apis/servicemonitor-monitoring-coreos-com-v1.adoc#servicemonitor-monitoring-coreos-com-v1[`ServiceMonitor`] schema -| +| |=== .HTTP responses diff --git a/rest_api/objects/index.adoc b/rest_api/objects/index.adoc index b12a3e9d7b33..abba6dbb087b 100644 --- a/rest_api/objects/index.adoc +++ b/rest_api/objects/index.adoc @@ -1339,7 +1339,7 @@ Required:: | `items` | xref:../oauth_apis/useroauthaccesstoken-oauth-openshift-io-v1.adoc#useroauthaccesstoken-oauth-openshift-io-v1[`array (UserOAuthAccessToken)`] -| +| | `kind` | `string` @@ -1818,12 +1818,12 @@ Type:: | Property | Type | Description | `owned` -| xref:../objects/index.adoc#com-github-operator-framework-api-pkg-operators-v1alpha1-APIServiceDescription[`array (APIServiceDescription)`] -| +| `array (APIServiceDescription)` +| | `required` -| xref:../objects/index.adoc#com-github-operator-framework-api-pkg-operators-v1alpha1-APIServiceDescription[`array (APIServiceDescription)`] -| +| `array (APIServiceDescription)` +| |=== @@ -1851,12 +1851,12 @@ Type:: | Property | Type | Description | `owned` -| xref:../objects/index.adoc#com-github-operator-framework-api-pkg-operators-v1alpha1-CRDDescription[`array (CRDDescription)`] -| +| `array (CRDDescription)` +| | `required` -| xref:../objects/index.adoc#com-github-operator-framework-api-pkg-operators-v1alpha1-CRDDescription[`array (CRDDescription)`] -| +| `array (CRDDescription)` +| |=== @@ -1886,11 +1886,11 @@ Required:: | `supported` | `boolean` -| +| | `type` | `string` -| +| |=== @@ -1923,7 +1923,7 @@ Required:: | `items` | xref:../operatorhub_apis/packagemanifest-packages-operators-coreos-com-v1.adoc#packagemanifest-packages-operators-coreos-com-v1[`array (PackageManifest)`] -| +| | `kind` | `string` @@ -1931,7 +1931,7 @@ Required:: | `metadata` | xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-ListMeta[`ListMeta`] -| +| |=== @@ -2628,7 +2628,7 @@ Required:: | `metadata` | xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-ListMeta[`ListMeta`] -| +| |=== @@ -2783,7 +2783,7 @@ Type:: | defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. 
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. | `items` -| xref:../objects/index.adoc#io-k8s-api-core-v1-KeyToPath[`array (KeyToPath)`] +| `array (KeyToPath)` | items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. | `name` @@ -2828,7 +2828,7 @@ Required:: | fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. | `nodePublishSecretRef` -| xref:../objects/index.adoc#io-k8s-api-core-v1-LocalObjectReference[`LocalObjectReference`] +| `LocalObjectReference` | nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. | `readOnly` @@ -2914,7 +2914,7 @@ Required:: | Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". | `valueFrom` -| xref:../objects/index.adoc#io-k8s-api-core-v1-EnvVarSource[`EnvVarSource`] +| `EnvVarSource` | Source for the environment variable's value. Cannot be used if value is not empty. |=== @@ -3085,15 +3085,15 @@ Required:: | `lastTransitionTime` | xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-Time[`Time`] -| +| | `message` | `string` -| +| | `reason` | `string` -| +| | `status` | `string` @@ -3589,11 +3589,11 @@ Required:: | `status` | `string` -| +| | `type` | `string` -| +| |=== ..status.modifyVolumeStatus @@ -3746,15 +3746,15 @@ Type:: | accessModes contains all ways the volume can be mounted. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes | `awsElasticBlockStore` -| xref:../objects/index.adoc#io-k8s-api-core-v1-AWSElasticBlockStoreVolumeSource[`AWSElasticBlockStoreVolumeSource`] +| `AWSElasticBlockStoreVolumeSource` | awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore | `azureDisk` -| xref:../objects/index.adoc#io-k8s-api-core-v1-AzureDiskVolumeSource[`AzureDiskVolumeSource`] +| `AzureDiskVolumeSource` | azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. 
| `azureFile` -| xref:../objects/index.adoc#io-k8s-api-core-v1-AzureFilePersistentVolumeSource[`AzureFilePersistentVolumeSource`] +| `AzureFilePersistentVolumeSource` | azureFile represents an Azure File Service mount on the host and bind mount to the pod. | `capacity` @@ -3762,11 +3762,11 @@ Type:: | capacity is the description of the persistent volume's resources and capacity. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity | `cephfs` -| xref:../objects/index.adoc#io-k8s-api-core-v1-CephFSPersistentVolumeSource[`CephFSPersistentVolumeSource`] +| `CephFSPersistentVolumeSource` | cephFS represents a Ceph FS mount on the host that shares a pod's lifetime | `cinder` -| xref:../objects/index.adoc#io-k8s-api-core-v1-CinderPersistentVolumeSource[`CinderPersistentVolumeSource`] +| `CinderPersistentVolumeSource` | cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md | `claimRef` @@ -3774,39 +3774,39 @@ Type:: | claimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#binding | `csi` -| xref:../objects/index.adoc#io-k8s-api-core-v1-CSIPersistentVolumeSource[`CSIPersistentVolumeSource`] +| `CSIPersistentVolumeSource` | csi represents storage that is handled by an external CSI driver (Beta feature). | `fc` -| xref:../objects/index.adoc#io-k8s-api-core-v1-FCVolumeSource[`FCVolumeSource`] +| `FCVolumeSource` | fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. | `flexVolume` -| xref:../objects/index.adoc#io-k8s-api-core-v1-FlexPersistentVolumeSource[`FlexPersistentVolumeSource`] +| `FlexPersistentVolumeSource` | flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. | `flocker` -| xref:../objects/index.adoc#io-k8s-api-core-v1-FlockerVolumeSource[`FlockerVolumeSource`] +| `FlockerVolumeSource` | flocker represents a Flocker volume attached to a kubelet's host machine and exposed to the pod for its usage. This depends on the Flocker control service being running | `gcePersistentDisk` -| xref:../objects/index.adoc#io-k8s-api-core-v1-GCEPersistentDiskVolumeSource[`GCEPersistentDiskVolumeSource`] +| `GCEPersistentDiskVolumeSource` | gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk | `glusterfs` -| xref:../objects/index.adoc#io-k8s-api-core-v1-GlusterfsPersistentVolumeSource[`GlusterfsPersistentVolumeSource`] +| `GlusterfsPersistentVolumeSource` | glusterfs represents a Glusterfs volume that is attached to a host and exposed to the pod. Provisioned by an admin. More info: https://examples.k8s.io/volumes/glusterfs/README.md | `hostPath` -| xref:../objects/index.adoc#io-k8s-api-core-v1-HostPathVolumeSource[`HostPathVolumeSource`] +| `HostPathVolumeSource` | hostPath represents a directory on the host. Provisioned by a developer or tester. This is useful for single-node development and testing only! On-host storage is not supported in any way and WILL NOT WORK in a multi-node cluster. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath | `iscsi` -| xref:../objects/index.adoc#io-k8s-api-core-v1-ISCSIPersistentVolumeSource[`ISCSIPersistentVolumeSource`] +| `ISCSIPersistentVolumeSource` | iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. | `local` -| xref:../objects/index.adoc#io-k8s-api-core-v1-LocalVolumeSource[`LocalVolumeSource`] +| `LocalVolumeSource` | local represents directly-attached storage with node affinity | `mountOptions` @@ -3814,11 +3814,11 @@ Type:: | mountOptions is the list of mount options, e.g. ["ro", "soft"]. Not validated - mount will simply fail if one is invalid. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options | `nfs` -| xref:../objects/index.adoc#io-k8s-api-core-v1-NFSVolumeSource[`NFSVolumeSource`] +| `NFSVolumeSource` | nfs represents an NFS mount on the host. Provisioned by an admin. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs | `nodeAffinity` -| xref:../objects/index.adoc#io-k8s-api-core-v1-VolumeNodeAffinity[`VolumeNodeAffinity`] +| `VolumeNodeAffinity` | nodeAffinity defines constraints that limit what nodes this volume can be accessed from. This field influences the scheduling of pods that use this volume. | `persistentVolumeReclaimPolicy` @@ -3831,23 +3831,23 @@ Possible enum values: - `"Retain"` means the volume will be left in its current phase (Released) for manual reclamation by the administrator. The default policy is Retain. | `photonPersistentDisk` -| xref:../objects/index.adoc#io-k8s-api-core-v1-PhotonPersistentDiskVolumeSource[`PhotonPersistentDiskVolumeSource`] +| `PhotonPersistentDiskVolumeSource` | photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine | `portworxVolume` -| xref:../objects/index.adoc#io-k8s-api-core-v1-PortworxVolumeSource[`PortworxVolumeSource`] +| `PortworxVolumeSource` | portworxVolume represents a portworx volume attached and mounted on kubelets host machine | `quobyte` -| xref:../objects/index.adoc#io-k8s-api-core-v1-QuobyteVolumeSource[`QuobyteVolumeSource`] +| `QuobyteVolumeSource` | quobyte represents a Quobyte mount on the host that shares a pod's lifetime | `rbd` -| xref:../objects/index.adoc#io-k8s-api-core-v1-RBDPersistentVolumeSource[`RBDPersistentVolumeSource`] +| `RBDPersistentVolumeSource` | rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md | `scaleIO` -| xref:../objects/index.adoc#io-k8s-api-core-v1-ScaleIOPersistentVolumeSource[`ScaleIOPersistentVolumeSource`] +| `ScaleIOPersistentVolumeSource` | scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. | `storageClassName` @@ -3855,7 +3855,7 @@ Possible enum values: | storageClassName is the name of StorageClass to which this persistent volume belongs. Empty value means that this volume does not belong to any StorageClass. 
| `storageos` -| xref:../objects/index.adoc#io-k8s-api-core-v1-StorageOSPersistentVolumeSource[`StorageOSPersistentVolumeSource`] +| `StorageOSPersistentVolumeSource` | storageOS represents a StorageOS volume that is attached to the kubelet's host machine and mounted into the pod More info: https://examples.k8s.io/volumes/storageos/README.md | `volumeAttributesClassName` @@ -3871,7 +3871,7 @@ Possible enum values: - `"Filesystem"` means the volume will be or is formatted with a filesystem. | `vsphereVolume` -| xref:../objects/index.adoc#io-k8s-api-core-v1-VsphereVirtualDiskVolumeSource[`VsphereVirtualDiskVolumeSource`] +| `VsphereVirtualDiskVolumeSource` | vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine |=== @@ -3984,7 +3984,7 @@ Type:: | Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata | `spec` -| xref:../objects/index.adoc#io-k8s-api-core-v1-PodSpec[`PodSpec`] +| `PodSpec` | Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |=== @@ -4097,7 +4097,7 @@ Type:: | hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ | `scopeSelector` -| xref:../objects/index.adoc#io-k8s-api-core-v1-ScopeSelector_v2[`ScopeSelector_v2`] +| `ScopeSelector_v2` | scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. | `scopes` @@ -4159,7 +4159,7 @@ Type:: | Property | Type | Description | `claims` -| xref:../objects/index.adoc#io-k8s-api-core-v1-ResourceClaim[`array (ResourceClaim)`] +| `array (ResourceClaim)` | Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. @@ -4296,7 +4296,7 @@ Type:: | defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. | `items` -| xref:../objects/index.adoc#io-k8s-api-core-v1-KeyToPath[`array (KeyToPath)`] +| `array (KeyToPath)` | items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. 
| `optional` @@ -4465,7 +4465,7 @@ Type:: | Property | Type | Description | `matchLabelExpressions` -| xref:../objects/index.adoc#io-k8s-api-core-v1-TopologySelectorLabelRequirement[`array (TopologySelectorLabelRequirement)`] +| `array (TopologySelectorLabelRequirement)` | A list of topology selector requirements by labels. |=== @@ -4899,7 +4899,7 @@ Type:: | Property | Type | Description | `clusterRoleSelectors` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-LabelSelector_v3[`array (LabelSelector_v3)`] +| `array (LabelSelector_v3)` | ClusterRoleSelectors holds a list of selectors which will be used to find ClusterRoles and create the rules. If any of the selectors match, then the ClusterRole's permissions will be added |=== @@ -5378,63 +5378,63 @@ Type:: | `$ref` | `string` -| +| | `$schema` | `string` -| +| | `additionalItems` -| xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSONSchemaPropsOrBool[``] -| +| `` +| | `additionalProperties` -| xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSONSchemaPropsOrBool[``] -| +| `` +| | `allOf` | xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSONSchemaProps[`array (undefined)`] -| +| | `anyOf` | xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSONSchemaProps[`array (undefined)`] -| +| | `default` -| xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSON[`JSON`] +| `JSON` | default is a default value for undefined object fields. Defaulting is a beta feature under the CustomResourceDefaulting feature gate. Defaulting requires spec.preserveUnknownFields to be false. | `definitions` | xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSONSchemaProps[`object (undefined)`] -| +| | `dependencies` -| xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSONSchemaPropsOrStringArray[`object (undefined)`] -| +| `object (undefined)` +| | `description` | `string` -| +| | `enum` -| xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSON[`array (JSON)`] -| +| `array (JSON)` +| | `example` -| xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSON[`JSON`] -| +| `JSON` +| | `exclusiveMaximum` | `boolean` -| +| | `exclusiveMinimum` | `boolean` -| +| | `externalDocs` -| xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-ExternalDocumentation[`ExternalDocumentation`] -| +| `ExternalDocumentation` +| | `format` | `string` @@ -5444,87 +5444,87 @@ Type:: | `id` | `string` -| +| | `items` -| xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSONSchemaPropsOrArray[``] -| +| `` +| | `maxItems` | `integer` -| +| | `maxLength` | `integer` -| +| | `maxProperties` | `integer` -| +| | `maximum` | `number` -| +| | `minItems` | `integer` -| +| | `minLength` | `integer` -| +| | `minProperties` | `integer` -| +| | `minimum` | `number` -| +| | `multipleOf` | `number` -| +| | `not` | xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSONSchemaProps[``] -| +| | `nullable` | `boolean` -| +| | `oneOf` | xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSONSchemaProps[`array (undefined)`] -| +| | `pattern` | `string` -| +| | `patternProperties` | 
xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSONSchemaProps[`object (undefined)`] -| +| | `properties` | xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-JSONSchemaProps[`object (undefined)`] -| +| | `required` | `array (string)` -| +| | `title` | `string` -| +| | `type` | `string` -| +| | `uniqueItems` | `boolean` -| +| | `x-kubernetes-embedded-resource` | `boolean` @@ -5584,7 +5584,7 @@ Defaults to atomic for arrays. | x-kubernetes-preserve-unknown-fields stops the API server decoding step from pruning fields which are not specified in the validation schema. This affects fields recursively, but switches back to normal pruning behaviour if nested properties or additionalProperties are specified in the schema. This can either be true or undefined. False is forbidden. | `x-kubernetes-validations` -| xref:../objects/index.adoc#io-k8s-apiextensions-apiserver-pkg-apis-apiextensions-v1-ValidationRule[`array (ValidationRule)`] +| `array (ValidationRule)` | x-kubernetes-validations describes a list of validation rules written in the CEL expression language. |=== @@ -5612,7 +5612,7 @@ The serialization format is: (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.) - ::= "e" \| "E" + ::= "e" \| "E" No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. @@ -5735,7 +5735,7 @@ Type:: | Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. | `preconditions` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-Preconditions[`Preconditions`] +| `Preconditions` | Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be returned. | `propagationPolicy` @@ -5824,15 +5824,15 @@ Required:: | `group` | `string` -| +| | `kind` | `string` -| +| | `version` | `string` -| +| |=== @@ -5889,7 +5889,7 @@ Type:: | Property | Type | Description | `matchExpressions` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-LabelSelectorRequirement_v2[`array (LabelSelectorRequirement_v2)`] +| `array (LabelSelectorRequirement_v2)` | matchExpressions is a list of label selector requirements. The requirements are ANDed. | `matchLabels` @@ -6052,7 +6052,7 @@ Applied only if Name is not specified. More info: https://git.k8s.io/community/c | Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels | `managedFields` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-ManagedFieldsEntry[`array (ManagedFieldsEntry)`] +| `array (ManagedFieldsEntry)` | ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. 
A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object. | `name` @@ -6066,7 +6066,7 @@ Applied only if Name is not specified. More info: https://git.k8s.io/community/c Must be a DNS_LABEL. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces | `ownerReferences` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-OwnerReference[`array (OwnerReference)`] +| `array (OwnerReference)` | List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. | `resourceVersion` @@ -6149,7 +6149,7 @@ Applied only if Name is not specified. More info: https://git.k8s.io/community/c | Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels | `managedFields` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-ManagedFieldsEntry[`array (ManagedFieldsEntry)`] +| `array (ManagedFieldsEntry)` | ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object. | `name` @@ -6163,7 +6163,7 @@ Applied only if Name is not specified. More info: https://git.k8s.io/community/c Must be a DNS_LABEL. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces | `ownerReferences` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-OwnerReference[`array (OwnerReference)`] +| `array (OwnerReference)` | List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. | `resourceVersion` @@ -6214,7 +6214,7 @@ Type:: | Suggested HTTP return code for this status, 0 if not set. | `details` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-StatusDetails[`StatusDetails`] +| `StatusDetails` | Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. | `kind` @@ -6269,7 +6269,7 @@ Type:: | Suggested HTTP return code for this status, 0 if not set. | `details` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-StatusDetails_v2[`StatusDetails_v2`] +| `StatusDetails_v2` | Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. 
| `kind` @@ -6324,7 +6324,7 @@ Type:: | Suggested HTTP return code for this status, 0 if not set. | `details` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-StatusDetails_v2[`StatusDetails_v2`] +| `StatusDetails_v2` | Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. | `kind` @@ -6379,7 +6379,7 @@ Type:: | Suggested HTTP return code for this status, 0 if not set. | `details` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-StatusDetails_v2[`StatusDetails_v2`] +| `StatusDetails_v2` | Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. | `kind` @@ -6434,7 +6434,7 @@ Type:: | Suggested HTTP return code for this status, 0 if not set. | `details` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-StatusDetails_v2[`StatusDetails_v2`] +| `StatusDetails_v2` | Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. | `kind` @@ -6489,7 +6489,7 @@ Type:: | Suggested HTTP return code for this status, 0 if not set. | `details` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-StatusDetails_v2[`StatusDetails_v2`] +| `StatusDetails_v2` | Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. | `kind` @@ -6544,7 +6544,7 @@ Type:: | Suggested HTTP return code for this status, 0 if not set. | `details` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-StatusDetails_v2[`StatusDetails_v2`] +| `StatusDetails_v2` | Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. | `kind` @@ -6599,7 +6599,7 @@ Type:: | Suggested HTTP return code for this status, 0 if not set. | `details` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-StatusDetails_v2[`StatusDetails_v2`] +| `StatusDetails_v2` | Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. | `kind` @@ -6654,7 +6654,7 @@ Type:: | Suggested HTTP return code for this status, 0 if not set. | `details` -| xref:../objects/index.adoc#io-k8s-apimachinery-pkg-apis-meta-v1-StatusDetails_v2[`StatusDetails_v2`] +| `StatusDetails_v2` | Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. 
| `kind` @@ -6728,7 +6728,7 @@ Required:: | `type` | `string` -| +| |=== diff --git a/rest_api/operator_apis/ingresscontroller-operator-openshift-io-v1.adoc b/rest_api/operator_apis/ingresscontroller-operator-openshift-io-v1.adoc index bbe8210a841f..e1cd522bfbc8 100644 --- a/rest_api/operator_apis/ingresscontroller-operator-openshift-io-v1.adoc +++ b/rest_api/operator_apis/ingresscontroller-operator-openshift-io-v1.adoc @@ -993,7 +993,7 @@ Type:: | `string` | protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: -"proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas" +"proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when @@ -2948,11 +2948,11 @@ This should be when the underlying condition changed. If that is not known, the | `message` | `string` -| +| | `reason` | `string` -| +| | `status` | `string` @@ -3548,7 +3548,7 @@ Type:: | `string` | protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: -"proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas" +"proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when @@ -4012,7 +4012,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../operator_apis/ingresscontroller-operator-openshift-io-v1.adoc#ingresscontroller-operator-openshift-io-v1[`IngressController`] schema -| +| |=== .HTTP responses @@ -4145,7 +4145,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../operator_apis/ingresscontroller-operator-openshift-io-v1.adoc#ingresscontroller-operator-openshift-io-v1[`IngressController`] schema -| +| |=== .HTTP responses @@ -4247,7 +4247,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../autoscale_apis/scale-autoscaling-v1.adoc#scale-autoscaling-v1[`Scale`] schema -| +| |=== .HTTP responses @@ -4349,7 +4349,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../operator_apis/ingresscontroller-operator-openshift-io-v1.adoc#ingresscontroller-operator-openshift-io-v1[`IngressController`] schema -| +| |=== .HTTP responses diff --git a/rest_api/security_apis/securitycontextconstraints-security-openshift-io-v1.adoc b/rest_api/security_apis/securitycontextconstraints-security-openshift-io-v1.adoc index 1456b1c5f98c..de76d838f5e2 100644 --- a/rest_api/security_apis/securitycontextconstraints-security-openshift-io-v1.adoc +++ b/rest_api/security_apis/securitycontextconstraints-security-openshift-io-v1.adoc @@ -85,12 +85,12 @@ is allowed in the "Volumes" field. | `allowedUnsafeSysctls` | `` | AllowedUnsafeSysctls is a list of explicitly allowed unsafe sysctls, defaults to none. -Each entry is either a plain sysctl name or ends in "*" in which case it is considered +Each entry is either a plain sysctl name or ends in "\*" in which case it is considered as a prefix of allowed sysctls. Single * means all unsafe sysctls are allowed. Kubelet has to whitelist all allowed unsafe sysctls explicitly to avoid rejection. Examples: -e.g. 
"foo/*" allows "foo/bar", "foo/baz", etc. +e.g. "foo/\*" allows "foo/bar", "foo/baz", etc. e.g. "foo.*" allows "foo.bar", "foo.baz", etc. | `apiVersion` @@ -111,11 +111,11 @@ process can gain more privileges than its parent process. | `forbiddenSysctls` | `` | ForbiddenSysctls is a list of explicitly forbidden sysctls, defaults to none. -Each entry is either a plain sysctl name or ends in "*" in which case it is considered +Each entry is either a plain sysctl name or ends in "\*" in which case it is considered as a prefix of forbidden sysctls. Single * means all sysctls are forbidden. Examples: -e.g. "foo/*" forbids "foo/bar", "foo/baz", etc. +e.g. "foo/\*" forbids "foo/bar", "foo/baz", etc. e.g. "foo.*" forbids "foo.bar", "foo.baz", etc. | `fsGroup` @@ -274,7 +274,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../security_apis/securitycontextconstraints-security-openshift-io-v1.adoc#securitycontextconstraints-security-openshift-io-v1[`SecurityContextConstraints`] schema -| +| |=== .HTTP responses @@ -429,7 +429,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../security_apis/securitycontextconstraints-security-openshift-io-v1.adoc#securitycontextconstraints-security-openshift-io-v1[`SecurityContextConstraints`] schema -| +| |=== .HTTP responses From 12a25233fb7cc6b826c9b4c6b8fddc719c5f0426 Mon Sep 17 00:00:00 2001 From: Eliska Romanova Date: Mon, 17 Feb 2025 09:25:33 +0100 Subject: [PATCH 263/669] OBSDOCS-1462: Update monitoring config map API reference content for OCP 4.18 release --- ...e-for-the-cluster-monitoring-operator.adoc | 31 ++++++++++++++++++- 1 file changed, 30 insertions(+), 1 deletion(-) diff --git a/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc b/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc index 54baec8e1e2c..1eec074af1b5 100644 --- a/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc +++ b/observability/monitoring/config-map-reference-for-the-cluster-monitoring-operator.adoc @@ -152,6 +152,8 @@ The `ClusterMonitoringConfiguration` resource defines settings that customize th |enableUserWorkload|*bool|`UserWorkloadEnabled` is a Boolean flag that enables monitoring for user-defined projects. +|userWorkload|*link:#userworkloadconfig[UserWorkloadConfig]|`UserWorkload` defines settings for the monitoring of user-defined projects. + |kubeStateMetrics|*link:#kubestatemetricsconfig[KubeStateMetricsConfig]|`KubeStateMetricsConfig` defines settings for the `kube-state-metrics` agent. |metricsServer|*link:#metricsserverconfig[MetricsServerConfig]|`MetricsServer` defines settings for the Metrics Server component. @@ -545,6 +547,10 @@ Appears in: link:#userworkloadconfiguration[UserWorkloadConfiguration] [options="header"] |=== | Property | Type | Description +|scrapeInterval|string|Configures the default interval between consecutive scrapes in case the `ServiceMonitor` or `PodMonitor` resource does not specify any value. The interval must be set between 5 seconds and 5 minutes. The value can be expressed in: seconds (for example `30s`), minutes (for example `1m`) or a mix of minutes and seconds (for example `1m30s`). The default value is `30s`. + +|evaluationInterval|string|Configures the default interval between rule evaluations in case the `PrometheusRule` resource does not specify any value. The interval must be set between 5 seconds and 5 minutes. 
The value can be expressed in: seconds (for example `30s`), minutes (for example `1m`) or a mix of minutes and seconds (for example `1m30s`). It only applies to `PrometheusRule` resources with the `openshift.io/prometheus-rule-evaluation-scope=\"leaf-prometheus\"` label. The default value is `30s`. + |additionalAlertmanagerConfigs|[]link:#additionalalertmanagerconfig[AdditionalAlertmanagerConfig]|Configures additional Alertmanager instances that receive alerts from the Prometheus component. By default, no additional Alertmanager instances are configured. |enforcedLabelLimit|*uint64|Specifies a per-scrape limit on the number of labels accepted for a sample. If the number of labels exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is `0`, which means that no limit is set. @@ -610,7 +616,7 @@ link:#prometheusrestrictedconfig[PrometheusRestrictedConfig] |oauth2|*monv1.OAuth2|Defines OAuth2 authentication settings for the remote write endpoint. -|proxyUrl|string|Defines an optional proxy URL. It is superseded by the cluster-wide proxy, if enabled. +|proxyUrl|string|Defines an optional proxy URL. If the cluster-wide proxy is enabled, it replaces the proxyUrl setting. The cluster-wide proxy supports both HTTP and HTTPS proxies, with HTTPS taking precedence. |queueConfig|*monv1.QueueConfig|Allows tuning configuration for remote write queue parameters. @@ -720,6 +726,8 @@ Appears in: link:#userworkloadconfiguration[UserWorkloadConfiguration] | Property | Type | Description |additionalAlertmanagerConfigs|[]link:#additionalalertmanagerconfig[AdditionalAlertmanagerConfig]|Configures how the Thanos Ruler component communicates with additional Alertmanager instances. The default value is `nil`. +|evaluationInterval|string|Configures the default interval between Prometheus rule evaluations in case the `PrometheusRule` resource does not specify any value. The interval must be set between 5 seconds and 5 minutes. The value can be expressed in: seconds (for example `30s`), minutes (for example `1m`) or a mix of minutes and seconds (for example `1m30s`). It applies to `PrometheusRule` resources without the `openshift.io/prometheus-rule-evaluation-scope=\"leaf-prometheus\"` label. The default value is `15s`. + |logLevel|string|Defines the log level setting for Thanos Ruler. The possible values are `error`, `warn`, `info`, and `debug`. The default value is `info`. |nodeSelector|map[string]string|Defines the nodes on which the Pods are scheduled. @@ -736,6 +744,21 @@ Appears in: link:#userworkloadconfiguration[UserWorkloadConfiguration] |=== +== UserWorkloadConfig + +=== Description + +The `UserWorkloadConfig` resource defines settings for the monitoring of user-defined projects. + +Appears in: link:#clustermonitoringconfiguration[ClusterMonitoringConfiguration] + +[options="header"] +|=== +| Property | Type | Description +|rulesWithoutLabelEnforcementAllowed|*bool|A Boolean flag that enables or disables the ability to deploy user-defined `PrometheusRules` objects for which the `namespace` label is not enforced to the namespace of the object. Such objects should be created in a namespace configured under the `namespacesWithoutLabelEnforcement` property of the `UserWorkloadConfiguration` resource. The default value is `true`. 
+ +|=== + == UserWorkloadConfiguration === Description @@ -753,4 +776,10 @@ The `UserWorkloadConfiguration` resource defines the settings responsible for us |thanosRuler|*link:#thanosrulerconfig[ThanosRulerConfig]|Defines the settings for the Thanos Ruler component in user workload monitoring. +|namespacesWithoutLabelEnforcement|[]string|Defines the list of namespaces for which Prometheus and Thanos Ruler in user-defined monitoring do not enforce the `namespace` label value in `PrometheusRule` objects. + +The `namespacesWithoutLabelEnforcement` property allows users to define recording and alerting rules that can query across multiple projects (not limited to user-defined projects) instead of deploying identical `PrometheusRule` objects in each user project. + +To make the resulting alerts and metrics visible to project users, the query expressions should return a `namespace` label with a non-empty value. + |=== From 0b1feada6a269daaa5243ec95b6263e51b2b3ef3 Mon Sep 17 00:00:00 2001 From: Gabriel McGoldrick Date: Tue, 18 Feb 2025 16:30:01 +0000 Subject: [PATCH 264/669] OBSDOCS-1161 fix typo in command --- ...nitoringstack-object-for-cluster-observability-operator.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/monitoring-creating-a-monitoringstack-object-for-cluster-observability-operator.adoc b/modules/monitoring-creating-a-monitoringstack-object-for-cluster-observability-operator.adoc index fc1d37ab08b6..61100fca2561 100644 --- a/modules/monitoring-creating-a-monitoringstack-object-for-cluster-observability-operator.adoc +++ b/modules/monitoring-creating-a-monitoringstack-object-for-cluster-observability-operator.adoc @@ -64,7 +64,7 @@ example-coo-monitoring-stack 81m + [source,terminal] ---- -$ oc -n oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/targets' | jq '.data.activeTargets[].discoveredLabels | select(.__meta_kubernetes_endpoints_label_app=="prometheus-coo-example-app")' +$ oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/targets' | jq '.data.activeTargets[].discoveredLabels | select(.__meta_kubernetes_endpoints_label_app=="prometheus-coo-example-app")' ---- + .Example output From f64181e230e8a0e589c631579209f7427baf2ba1 Mon Sep 17 00:00:00 2001 From: Michael Ryan Peter Date: Fri, 24 Jan 2025 16:48:57 -0500 Subject: [PATCH 265/669] OSDOCS#12884[4.18][OLMv1]Create RBAC to manage and install ce --- extensions/ce/managing-ce.adoc | 11 +- .../olmv1-cluster-extension-permissions.adoc | 20 + ...olmv1-creating-a-cluster-role-binding.adoc | 65 +++ modules/olmv1-creating-a-cluster-role.adoc | 346 +++++++++++ modules/olmv1-creating-a-namespace.adoc | 23 + modules/olmv1-creating-a-service-account.adoc | 95 +-- .../olmv1-downloading-bundle-manifests.adoc | 151 +++++ ...ample-pipelines-operator-cluster-role.adoc | 15 + modules/olmv1-installing-an-operator.adoc | 88 +-- ...nstall-and-manage-extension-resources.adoc | 37 ++ ...ample-pipelines-installer-clusterrole.yaml | 550 ++++++++++++++++++ .../olmv1-manual-rbac-scoping-admonition.adoc | 11 + 12 files changed, 1231 insertions(+), 181 deletions(-) create mode 100644 modules/olmv1-cluster-extension-permissions.adoc create mode 100644 modules/olmv1-creating-a-cluster-role-binding.adoc create mode 100644 modules/olmv1-creating-a-cluster-role.adoc create mode 100644 modules/olmv1-creating-a-namespace.adoc create mode 100644 modules/olmv1-downloading-bundle-manifests.adoc create mode 
100644 modules/olmv1-example-pipelines-operator-cluster-role.adoc create mode 100644 modules/olmv1-required-rbac-to-install-and-manage-extension-resources.adoc create mode 100644 snippets/example-pipelines-installer-clusterrole.yaml create mode 100644 snippets/olmv1-manual-rbac-scoping-admonition.adoc diff --git a/extensions/ce/managing-ce.adoc b/extensions/ce/managing-ce.adoc index 084218eb9820..47c744bd6a15 100644 --- a/extensions/ce/managing-ce.adoc +++ b/extensions/ce/managing-ce.adoc @@ -8,7 +8,7 @@ toc::[] After a catalog has been added to your cluster, you have access to the versions, patches, and over-the-air updates of the extensions and Operators that are published to the catalog. -You can manage extensions declaratively from the CLI using custom resources (CRs). +You can use custom resources (CRs) to manage extensions declaratively from the CLI. include::modules/olmv1-supported-extensions.adoc[leveloffset=+1] @@ -18,7 +18,14 @@ include::modules/olmv1-supported-extensions.adoc[leveloffset=+1] include::modules/olmv1-finding-operators-to-install.adoc[leveloffset=+1] include::modules/olmv1-catalog-queries.adoc[leveloffset=+2] -include::modules/olmv1-creating-a-service-account.adoc[leveloffset=+1] +include::modules/olmv1-cluster-extension-permissions.adoc[leveloffset=+1] +include::modules/olmv1-creating-a-namespace.adoc[leveloffset=+2] +include::modules/olmv1-creating-a-service-account.adoc[leveloffset=+2] +include::modules/olmv1-downloading-bundle-manifests.adoc[leveloffset=+2] +include::modules/olmv1-required-rbac-to-install-and-manage-extension-resources.adoc[leveloffset=+2] +include::modules/olmv1-creating-a-cluster-role.adoc[leveloffset=+2] +include::modules/olmv1-example-pipelines-operator-cluster-role.adoc[leveloffset=+2] +include::modules/olmv1-creating-a-cluster-role-binding.adoc[leveloffset=+2] include::modules/olmv1-installing-an-operator.adoc[leveloffset=+1] [role="_additional-resources"] diff --git a/modules/olmv1-cluster-extension-permissions.adoc b/modules/olmv1-cluster-extension-permissions.adoc new file mode 100644 index 000000000000..60e655779e02 --- /dev/null +++ b/modules/olmv1-cluster-extension-permissions.adoc @@ -0,0 +1,20 @@ +// Module included in the following assemblies: +// +// * extensions/ce/managing-ce.adoc + +:_mod-docs-content-type: CONCEPT + +[id="olmv1-cluster-extension-permissions_{context}"] += Cluster extension permissions + +In {olmv0-first}, a single service account with cluster administrator privileges manages all cluster extensions. + +{olmv1} is designed to be more secure than {olmv0} by default. {olmv1} manages a cluster extension by using the service account specified in an extension's custom resource (CR). Cluster administrators can create a service account for each cluster extension. As a result, administrators can follow the principle of least privilege and assign only the role-based access controls (RBAC) to install and manage that extension. + +You must add each permission to either a cluster role or role. Then you must bind the cluster role or role to the service account with a cluster role binding or role binding. + +You can scope the RBAC to either the cluster or to a namespace. Use cluster roles and cluster role bindings to scope permissions to the cluster. Use roles and role bindings to scope permissions to a namespace. Whether you scope the permissions to the cluster or to a namespace depends on the design of the extension you want to install and manage. 
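+
+For example, a minimal sketch of namespace-scoped permissions might look like the following manifest. The `example-extension` namespace and `example-extension-installer` service account are hypothetical names used only for illustration; the actual resources and verbs must come from the manifests of the extension that you want to install:
+
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: example-extension-installer-role # Hypothetical name for illustration
+  namespace: example-extension
+rules:
+# Grant only the namespaced resources and verbs that the extension requires
+- apiGroups: [""]
+  resources: ["configmaps", "services"]
+  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: example-extension-installer-role-binding
+  namespace: example-extension
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: example-extension-installer-role
+subjects:
+# Bind the role to the installer service account in the same namespace
+- kind: ServiceAccount
+  name: example-extension-installer
+  namespace: example-extension
+----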
+
+include::snippets/olmv1-manual-rbac-scoping-admonition.adoc[]
+
+If a new version of an installed extension requires additional permissions, {olmv1} halts the update process until a cluster administrator grants those permissions.
diff --git a/modules/olmv1-creating-a-cluster-role-binding.adoc b/modules/olmv1-creating-a-cluster-role-binding.adoc
new file mode 100644
index 000000000000..be359b6b762b
--- /dev/null
+++ b/modules/olmv1-creating-a-cluster-role-binding.adoc
@@ -0,0 +1,65 @@
+// Module included in the following assemblies:
+//
+// * extensions/ce/managing-ce.adoc
+
+:_mod-docs-content-type: PROCEDURE
+
+[id="olmv1-creating-a-cluster-role-binding_{context}"]
+= Creating a cluster role binding for an extension
+
+After you have created a service account and cluster role, you must bind the cluster role to the service account with a cluster role binding manifest.
+
+.Prerequisites
+
+* Access to an {product-title} cluster using an account with `cluster-admin` permissions.
+* You have created and applied the following resources for the extension you want to install:
+** Namespace
+** Service account
+** Cluster role
+
+.Procedure
+
+. Create a cluster role binding to bind the cluster role to the service account, similar to the following example:
++
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: <extension_name>-installer-binding
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: <extension_name>-installer-clusterrole
+subjects:
+- kind: ServiceAccount
+  name: <extension_name>-installer
+  namespace: <namespace>
+----
++
+.Example `pipelines-cluster-role-binding.yaml` file
+[%collapsible]
+====
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: pipelines-installer-binding
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: pipelines-installer-clusterrole
+subjects:
+- kind: ServiceAccount
+  name: pipelines-installer
+  namespace: pipelines
+----
+====
+
+. Apply the cluster role binding by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f pipelines-cluster-role-binding.yaml
+----
diff --git a/modules/olmv1-creating-a-cluster-role.adoc b/modules/olmv1-creating-a-cluster-role.adoc
new file mode 100644
index 000000000000..de0c403d1e82
--- /dev/null
+++ b/modules/olmv1-creating-a-cluster-role.adoc
@@ -0,0 +1,346 @@
+// Module included in the following assemblies:
+//
+// * extensions/ce/managing-ce.adoc
+
+:_mod-docs-content-type: PROCEDURE
+
+[id="olmv1-creating-a-cluster-role_{context}"]
+= Creating a cluster role for an extension
+
+You must review the `install.spec.clusterPermissions` stanza of the cluster service version (CSV) and the manifests of an extension carefully to define the required role-based access controls (RBAC) of the extension that you want to install. You must create a cluster role by copying the required RBAC from the CSV to the new manifest.
+
+[TIP]
+====
+If you want to test the process for installing and updating an extension in {olmv1}, you can use the following cluster role to grant cluster administrator permissions. This manifest is for testing purposes only. It should not be used in production clusters.
+
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: <extension_name>-installer-clusterrole
+rules:
+- apiGroups: ["*"]
+  resources: ["*"]
+  verbs: ["*"]
+----
+====
+
+The following procedure uses the `openshift-pipelines-operator-rh.clusterserviceversion.yaml` file of the {pipelines-title} Operator as an example. The examples include excerpts of the RBAC required to install and manage the {pipelines-shortname} Operator. For a complete manifest, see "Example cluster role for the {pipelines-title} Operator".
+
+include::snippets/olmv1-manual-rbac-scoping-admonition.adoc[]
+
+.Prerequisites
+
+* Access to an {product-title} cluster using an account with `cluster-admin` permissions.
+* You have downloaded the manifests in the image reference of the extension that you want to install.
+
+.Procedure
+
+. Create a new cluster role manifest, similar to the following example:
++
+.Example `<extension_name>-installer-clusterrole.yaml` file
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: <extension_name>-installer-clusterrole
+----
+
+. Edit your cluster role manifest to include permission to update finalizers on the extension, similar to the following example:
++
+.Example `pipelines-installer-clusterrole.yaml` file
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: pipelines-installer-clusterrole
+rules:
+- apiGroups:
+  - olm.operatorframework.io
+  resources:
+  - clusterextensions/finalizers
+  verbs:
+  - update
+  # Scoped to the name of the ClusterExtension
+  resourceNames:
+  - <cluster_extension_name> # <1>
+----
+<1> Specifies the value of the `metadata.name` field from the custom resource (CR) of the extension.
+
+. Search for the `clusterrole` and `clusterrolebindings` values in the `rules.resources` field in the extension's CSV file.
+
+** Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example:
++
+.Example cluster role manifest
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: pipelines-installer-clusterrole
+rules:
+# ...
+# ClusterRoles and ClusterRoleBindings for the controllers of the extension
+- apiGroups:
+  - rbac.authorization.k8s.io
+  resources:
+  - clusterroles
+  verbs:
+  - create # <1>
+  - list
+  - watch
+- apiGroups:
+  - rbac.authorization.k8s.io
+  resources:
+  - clusterroles
+  verbs:
+  - get
+  - update
+  - patch
+  - delete
+  resourceNames: # <2>
+  - "*"
+- apiGroups:
+  - rbac.authorization.k8s.io
+  resources:
+  - clusterrolebindings
+  verbs:
+  - create
+  - list
+  - watch
+- apiGroups:
+  - rbac.authorization.k8s.io
+  resources:
+  - clusterrolebindings
+  verbs:
+  - get
+  - update
+  - patch
+  - delete
+  resourceNames:
+  - "*"
+# ...
+----
+<1> You cannot scope `create`, `list`, and `watch` permissions to specific resource names (the `resourceNames` field). You must scope these permissions to their resources (the `resources` field).
+<2> Some resource names are generated by using the following format: `<name>.<hash>`. After you install the extension, look up the resource names for the cluster roles and cluster role bindings for the controller of the extension. Replace the wildcard characters in this example with the generated names and follow the principle of least privilege.
+
+. Search for the `customresourcedefinitions` value in the `rules.resources` field in the extension's CSV file.
+ +** Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example: ++ +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: pipelines-installer-clusterrole +rules: +# ... +# Custom resource definitions of the extension +- apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - create + - list + - watch +- apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - get + - update + - patch + - delete + resourceNames: + - manualapprovalgates.operator.tekton.dev + - openshiftpipelinesascodes.operator.tekton.dev + - tektonaddons.operator.tekton.dev + - tektonchains.operator.tekton.dev + - tektonconfigs.operator.tekton.dev + - tektonhubs.operator.tekton.dev + - tektoninstallersets.operator.tekton.dev + - tektonpipelines.operator.tekton.dev + - tektonresults.operator.tekton.dev + - tektontriggers.operator.tekton.dev +# ... +---- + +. Search the CSV file for stanzas with the `permissions` and `clusterPermissions` values in the `rules.resources` spec. + +** Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example: ++ +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: pipelines-installer-clusterrole +rules: +# ... +# Excerpt from install.spec.clusterPermissions +- apiGroups: + - '' + resources: + - nodes + - pods + - services + - endpoints + - persistentvolumeclaims + - events + - configmaps + - secrets + - pods/log + - limitranges + verbs: + - create + - list + - watch + - delete + - deletecollection + - patch + - get + - update +- apiGroups: + - extensions + - apps + resources: + - ingresses + - ingresses/status + verbs: + - create + - list + - watch + - delete + - patch + - get + - update + # ... +---- + +. Search the CSV file for resources under the `install.spec.deployments` stanza. + +** Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example: ++ +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: pipelines-installer-clusterrole +rules: +# ... +# Excerpt from install.spec.deployments +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - get + - update + - patch + - delete + # scoped to the extension controller deployment name + resourceNames: + - openshift-pipelines-operator + - tekton-operator-webhook +# ... +---- + +. Search for the `services` and `configmaps` values in the `rules.resources` field in the extension's CSV file. + +** Copy the API groups, resources, verbs, and resource names to your manifest, similar to the following example: ++ +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: pipelines-installer-clusterrole +rules: +# ... 
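+# The rules that follow are scoped to the concrete service and config map names
+# from the Pipelines CSV; for a different extension, substitute the names found
+# in its own bundle manifests.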
+# Services +- apiGroups: + - "" + resources: + - services + verbs: + - create +- apiGroups: + - "" + resources: + - services + verbs: + - get + - list + - watch + - update + - patch + - delete + # scoped to the service name + resourceNames: + - openshift-pipelines-operator-monitor + - tekton-operator + - tekton-operator-webhook +# configmaps +- apiGroups: + - "" + resources: + - configmaps + verbs: + - create +- apiGroups: + - "" + resources: + - configmaps + verbs: + - get + - list + - watch + - update + - patch + - delete + # scoped to the configmap name + resourceNames: + - config-logging + - tekton-config-defaults + - tekton-config-observability + - tekton-operator-controller-config-leader-election + - tekton-operator-info + - tekton-operator-webhook-config-leader-election +- apiGroups: + - operator.tekton.dev + resources: + - tekton-config-read-role + - tekton-result-read-role + verbs: + - get + - watch + - list +---- + +. Add the cluster role manifest to the cluster by running the following command: ++ +[source,terminal] +---- +$ oc apply -f -installer-clusterrole.yaml +---- ++ +.Example command +[source,terminal] +---- +$ oc apply -f pipelines-installer-clusterrole.yaml +---- diff --git a/modules/olmv1-creating-a-namespace.adoc b/modules/olmv1-creating-a-namespace.adoc new file mode 100644 index 000000000000..cf79788ef5fc --- /dev/null +++ b/modules/olmv1-creating-a-namespace.adoc @@ -0,0 +1,23 @@ +// Module included in the following assemblies: +// +// * extensions/ce/managing-ce.adoc + +:_mod-docs-content-type: PROCEDURE + +[id="olmv1-creating-a-namespace_{context}"] += Creating a namespace + +Before you create a service account to install and manage your cluster extension, you must create a namespace. + +.Prerequisites + +* Access to an {product-title} cluster using an account with `cluster-admin` permissions. + +.Procedure + +* Create a new namespace for the service account of the extension that you want to install by running the following command: ++ +[source,terminal] +---- +$ oc adm new-project +---- diff --git a/modules/olmv1-creating-a-service-account.adoc b/modules/olmv1-creating-a-service-account.adoc index dc48fb4a8b01..d8d3868b1fab 100644 --- a/modules/olmv1-creating-a-service-account.adoc +++ b/modules/olmv1-creating-a-service-account.adoc @@ -5,14 +5,9 @@ :_mod-docs-content-type: PROCEDURE [id="olmv1-creating-a-service-account_{context}"] -= Creating a service account to manage cluster extensions += Creating a service account for an extension -Unlike {olmv0-first}, {olmv1} does not have permissions to install, update, and manage cluster extensions. Cluster administrators must create a service account and assign the role-based access controls (RBAC) required to install, update, and manage cluster extensions. - -[IMPORTANT] -==== -include::snippets/olmv1-known-issue-service-accounts.adoc[] -==== +You must create a service account to install, manage, and update a cluster extension. .Prerequisites @@ -50,89 +45,3 @@ metadata: ---- $ oc apply -f extension-service-account.yaml ---- -. Create a cluster role and assign RBAC, similar to the following example: -+ -[WARNING] -==== -The following cluster role does not follow the principle of least privilege. This cluster role is intended for testing purposes only. Do not use it on production clusters. 
-==== -+ -[source,yaml] ----- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: -installer-clusterrole -rules: -- apiGroups: ["*"] - resources: ["*"] - verbs: ["*"] ----- -+ -.Example `pipelines-cluster-role.yaml` file -[%collapsible] -==== -[source,yaml] ----- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: pipelines-installer-clusterrole -rules: -- apiGroups: ["*"] - resources: ["*"] - verbs: ["*"] ----- -==== - -. Add the cluster role to the cluster by running the following command: -+ -[source,terminal] ----- -$ oc apply -f pipelines-role.yaml ----- - -. Bind the permissions granted by the cluster role to the service account by creating a cluster role binding, similar to the following example: -+ -[source,yaml] ----- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: -installer-binding -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: -installer-clusterrole -subjects: -- kind: ServiceAccount - name: -installer - namespace: ----- -+ -.Example `pipelines-cluster-role-binding.yaml` file -[%collapsible] -==== -[source,yaml] ----- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: pipelines-installer-binding -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: pipelines-installer-clusterrole -subjects: -- kind: ServiceAccount - name: pipelines-installer - namespace: pipelines ----- -==== - -. Apply the cluster role binding by running the following command: -+ -[source,terminal] ----- -$ oc apply -f pipelines-cluster-role-binding.yaml ----- diff --git a/modules/olmv1-downloading-bundle-manifests.adoc b/modules/olmv1-downloading-bundle-manifests.adoc new file mode 100644 index 000000000000..1d35b4c687ab --- /dev/null +++ b/modules/olmv1-downloading-bundle-manifests.adoc @@ -0,0 +1,151 @@ +// Module included in the following assemblies: +// +// * extensions/ce/managing-ce.adoc + +:_mod-docs-content-type: PROCEDURE + +[id="olmv1-downloading-bundle-manifests_{context}"] += Downloading the bundle manifests of an extension + +Use the `opm` CLI tool to download the bundle manifests of the extension that you want to install. Use the CLI tool or text editor of your choice to view the manifests and find the required permissions to install and manage the extension. + +.Prerequisites + +* You have access to an {product-title} cluster using an account with `cluster-admin` permissions. +* You have decided which extension you want to install. +* You have installed the `opm` CLI tool. + +.Procedure + +. 
Inspect the available versions and images of the extension you want to install by running the following command: ++ +[source,terminal] +---- +$ opm render : | \ + jq -cs '.[] | select( .schema == "olm.bundle" ) | \ + select( .package == "") | \ + {"name":.name, "image":.image}' +---- ++ +.Example command +[%collapsible] +==== +[source,terminal,subs="attributes"] +---- +$ opm render registry.redhat.io/redhat/redhat-operator-index:v{product-version} | \ + jq -cs '.[] | select( .schema == "olm.bundle" ) | \ + select( .package == "openshift-pipelines-operator-rh") | \ + {"name":.name, "image":.image}' +---- +==== ++ +.Example output +[%collapsible] +==== +[source,text] +---- +{"name":"openshift-pipelines-operator-rh.v1.14.3","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:3f64b29f6903981470d0917b2557f49d84067bccdba0544bfe874ec4412f45b0"} +{"name":"openshift-pipelines-operator-rh.v1.14.4","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec"} +{"name":"openshift-pipelines-operator-rh.v1.14.5","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:f7b19ce26be742c4aaa458d37bc5ad373b5b29b20aaa7d308349687d3cbd8838"} +{"name":"openshift-pipelines-operator-rh.v1.15.0","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:22be152950501a933fe6e1df0e663c8056ca910a89dab3ea801c3bb2dc2bf1e6"} +{"name":"openshift-pipelines-operator-rh.v1.15.1","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:64afb32e3640bb5968904b3d1a317e9dfb307970f6fda0243e2018417207fd75"} +{"name":"openshift-pipelines-operator-rh.v1.15.2","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:8a593c1144709c9aeffbeb68d0b4b08368f528e7bb6f595884b2474bcfbcafcd"} +{"name":"openshift-pipelines-operator-rh.v1.16.0","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:a46b7990c0ad07dae78f43334c9bd5e6cba7b50ca60d3f880099b71e77bed214"} +{"name":"openshift-pipelines-operator-rh.v1.16.1","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:29f27245e93b3f605647993884751c490c4a44070d3857a878d2aee87d43f85b"} +{"name":"openshift-pipelines-operator-rh.v1.16.2","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:2037004666526c90329f4791f14cb6cc06e8775cb84ba107a24cc4c2cf944649"} +{"name":"openshift-pipelines-operator-rh.v1.17.0","image":"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:d75065e999826d38408049aa1fde674cd1e45e384bfdc96523f6bad58a0e0dbc"} +---- +==== + +. Make a directory to extract the image of the bundle that you want to install by running the following command: ++ +[source,terminal] +---- +$ mkdir +---- + +. Change into the directory by running the following command: ++ +[source,terminal] +---- +$ cd +---- + +. Find the image reference of the version that you want to install and run the following command: ++ +[source,terminal] +---- +$ oc image extract @sha256: +---- ++ +.Example command +[source,terminal] +---- +$ oc image extract registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:f7b19ce26be742c4aaa458d37bc5ad373b5b29b20aaa7d308349687d3cbd8838 +---- + +. Change into the `manifests` directory by running the following command: ++ +[source,terminal] +---- +$ cd manifests +---- + +. View the contents of the manifests directory by entering the following command. 
The output lists the manifests of the resources required to install, manage, and operate your extension.
++
+[source,terminal]
+----
+$ tree
+----
++
+.Example output
+[%collapsible]
+====
+[source,text]
+----
+.
+├── manifests
+│   ├── config-logging_v1_configmap.yaml
+│   ├── openshift-pipelines-operator-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
+│   ├── openshift-pipelines-operator-prometheus-k8s-read-binding_rbac.authorization.k8s.io_v1_rolebinding.yaml
+│   ├── openshift-pipelines-operator-read_rbac.authorization.k8s.io_v1_role.yaml
+│   ├── openshift-pipelines-operator-rh.clusterserviceversion.yaml
+│   ├── operator.tekton.dev_manualapprovalgates.yaml
+│   ├── operator.tekton.dev_openshiftpipelinesascodes.yaml
+│   ├── operator.tekton.dev_tektonaddons.yaml
+│   ├── operator.tekton.dev_tektonchains.yaml
+│   ├── operator.tekton.dev_tektonconfigs.yaml
+│   ├── operator.tekton.dev_tektonhubs.yaml
+│   ├── operator.tekton.dev_tektoninstallersets.yaml
+│   ├── operator.tekton.dev_tektonpipelines.yaml
+│   ├── operator.tekton.dev_tektonresults.yaml
+│   ├── operator.tekton.dev_tektontriggers.yaml
+│   ├── tekton-config-defaults_v1_configmap.yaml
+│   ├── tekton-config-observability_v1_configmap.yaml
+│   ├── tekton-config-read-rolebinding_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
+│   ├── tekton-config-read-role_rbac.authorization.k8s.io_v1_clusterrole.yaml
+│   ├── tekton-operator-controller-config-leader-election_v1_configmap.yaml
+│   ├── tekton-operator-info_rbac.authorization.k8s.io_v1_rolebinding.yaml
+│   ├── tekton-operator-info_rbac.authorization.k8s.io_v1_role.yaml
+│   ├── tekton-operator-info_v1_configmap.yaml
+│   ├── tekton-operator_v1_service.yaml
+│   ├── tekton-operator-webhook-certs_v1_secret.yaml
+│   ├── tekton-operator-webhook-config-leader-election_v1_configmap.yaml
+│   ├── tekton-operator-webhook_v1_service.yaml
+│   ├── tekton-result-read-rolebinding_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
+│   └── tekton-result-read-role_rbac.authorization.k8s.io_v1_clusterrole.yaml
+├── metadata
+│   ├── annotations.yaml
+│   └── properties.yaml
+└── root
+    └── buildinfo
+        ├── content_manifests
+        │   └── openshift-pipelines-operator-bundle-container-v1.16.2-3.json
+        └── Dockerfile-openshift-pipelines-pipelines-operator-bundle-container-v1.16.2-3
+----
+====
+
+.Next steps
+
+* View the contents of the `install.spec.clusterPermissions` stanza of the cluster service version (CSV) file in the `manifests` directory using your preferred CLI tool or text editor. The following examples reference the `openshift-pipelines-operator-rh.clusterserviceversion.yaml` file of the {pipelines-title} Operator.
+* Keep this file open as a reference while assigning permissions to the cluster role file in the following procedure.
diff --git a/modules/olmv1-example-pipelines-operator-cluster-role.adoc b/modules/olmv1-example-pipelines-operator-cluster-role.adoc
new file mode 100644
index 000000000000..24b83911da2d
--- /dev/null
+++ b/modules/olmv1-example-pipelines-operator-cluster-role.adoc
@@ -0,0 +1,15 @@
+// Module included in the following assemblies:
+//
+// * extensions/ce/managing-ce.adoc
+
+:_mod-docs-content-type: REFERENCE
+
+[id="olmv1-example-cluster-role-pipelines_{context}"]
+= Example cluster role for the {pipelines-title} Operator
+
+See the following example for a complete cluster role manifest for the {pipelines-shortname} Operator.
+ +[source,yaml] +---- +include::snippets/example-pipelines-installer-clusterrole.yaml[] +---- diff --git a/modules/olmv1-installing-an-operator.adoc b/modules/olmv1-installing-an-operator.adoc index 07900f227d20..39d93ac7dbf1 100644 --- a/modules/olmv1-installing-an-operator.adoc +++ b/modules/olmv1-installing-an-operator.adoc @@ -7,98 +7,14 @@ [id="olmv1-installing-an-operator_{context}"] = Installing a cluster extension from a catalog -You can install an extension from a catalog by creating a custom resource (CR) and applying it to the cluster. {olmv1-first} supports installing cluster extensions, including {olmv0} Operators via the `registry+v1` bundle format, that are scoped to the cluster. For more information, see _Supported extensions_. +You can install an extension from a catalog by creating a custom resource (CR) and applying it to the cluster. {olmv1-first} supports installing cluster extensions, including {olmv0} Operators in the `registry+v1` bundle format, that are scoped to the cluster. For more information, see _Supported extensions_. .Prerequisites -* You have installed the `jq` CLI tool. -* You have installed the `opm` CLI tool. -* You have created a service account and assigned enough role-based access controls (RBAC) to install, update, and manage the extension you want to install. For more information, see "Creating a service account to manage cluster extensions". +* You have created a service account and assigned enough role-based access controls (RBAC) to install, update, and manage the extension that you want to install. For more information, see "Cluster extension permissions". .Procedure -. Inspect a package for channel and version information from a local copy of your catalog file by completing the following steps: - -.. Get a list of channels from a selected package by running the following command: -+ -[source,terminal] ----- -$ opm render : \ - | jq -s '.[] | select( .schema == "olm.channel" ) \ - | select( .package == "") \ - | .name' ----- -+ -.Example command -[%collapsible] -==== -[source,terminal,subs=attributes+] ----- -$ opm render registry.redhat.io/redhat/redhat-operator-index:v{product-version} \ - | jq -s '.[] | select( .schema == "olm.channel" ) \ - | select( .package == "openshift-pipelines-operator-rh") \ - | .name' ----- -==== -+ -.Example output -[%collapsible] -==== -[source,text] ----- -"latest" -"pipelines-1.14" -"pipelines-1.15" -"pipelines-1.16" ----- -==== - -.. Get a list of the versions published in a channel by running the following command: -+ -[source,terminal] ----- -$ opm render : \ - | jq -s '.[] | select( .package == "" ) \ - | select( .schema == "olm.channel" ) \ - | select( .name == "" ) | .entries \ - | .[] | .name' ----- -+ -.Example command -[%collapsible] -==== -[source,terminal,subs=attributes+] ----- -$ opm render registry.redhat.io/redhat/redhat-operator-index:v{product-version} \ - | jq -s '.[] | select( .package == "openshift-pipelines-operator-rh" ) \ - | select( .schema == "olm.channel" ) | select( .name == "latest" ) \ - | .entries | .[] | .name' ----- -==== -+ -.Example output -[%collapsible] -==== -[source,text] ----- -"openshift-pipelines-operator-rh.v1.14.3" -"openshift-pipelines-operator-rh.v1.14.4" -"openshift-pipelines-operator-rh.v1.14.5" -"openshift-pipelines-operator-rh.v1.15.0" -"openshift-pipelines-operator-rh.v1.15.1" -"openshift-pipelines-operator-rh.v1.15.2" -"openshift-pipelines-operator-rh.v1.16.0" -"openshift-pipelines-operator-rh.v1.16.1" ----- -==== - -. 
If you want to install your extension into a new namespace, run the following command:
-+
-[source,terminal]
-----
-$ oc adm new-project
-----
-
 . Create a CR, similar to the following example:
 +
 .Example `pipelines-operator.yaml` CR
diff --git a/modules/olmv1-required-rbac-to-install-and-manage-extension-resources.adoc b/modules/olmv1-required-rbac-to-install-and-manage-extension-resources.adoc
new file mode 100644
index 000000000000..b52c98699339
--- /dev/null
+++ b/modules/olmv1-required-rbac-to-install-and-manage-extension-resources.adoc
@@ -0,0 +1,37 @@
+// Module included in the following assemblies:
+//
+// * extensions/ce/managing-ce.adoc
+
+:_mod-docs-content-type: REFERENCE
+
+[id="olmv1-required-rbac-to-install-and-manage-extension-resources_{context}"]
+= Required permissions to install and manage a cluster extension
+
+You must inspect the manifests included in the bundle image of a cluster extension to assign the necessary permissions. The service account requires enough role-based access controls (RBAC) to create and manage the following resources.
+
+[IMPORTANT]
+====
+Follow the principle of least privilege and scope permissions to specific resource names with the least RBAC required to run.
+====
+
+Admission plugins:: Because {product-title} clusters use the `OwnerReferencesPermissionEnforcement` admission plugin, cluster extensions must have permissions to update the `blockOwnerDeletion` and `ownerReferences` finalizers.
+
+Cluster roles and cluster role bindings for the controllers of the extension:: You must define RBAC so that the installation service account can create and manage cluster roles and cluster role bindings for the extension controllers.
+
+Cluster service version (CSV):: You must define RBAC for the resources defined in the CSV of the cluster extension.
+
+Cluster-scoped bundle resources:: You must define RBAC to create and manage any cluster-scoped resources included in the bundle. If a cluster-scoped resource matches another resource type, such as a `ClusterRole`, you can add the resource to the pre-existing rule under the `resources` or `resourceNames` field.
+
+Custom resource definitions (CRDs):: You must define RBAC so that the installation service account can create and manage the CRDs for the extension. Also, you must grant the service account for the controller of the extension the RBAC to manage its CRDs.
+
+Deployments:: You must define RBAC for the installation service account to create and manage the deployments needed by the extension controller.
+
+Extension permissions:: You must include RBAC for the permissions and cluster permissions defined in the CSV. The installation service account needs the ability to grant these permissions to the extension controller, which needs these permissions to run.
+
+Namespace-scoped bundle resources:: You must define RBAC for any namespace-scoped bundle resources. The installation service account requires permission to create and manage resources, such as config maps or services.
+
+Roles and role bindings:: You must define RBAC for any roles or role bindings defined in the CSV. The installation service account needs permission to create and manage those roles and role bindings.
+
+// I am deleting the secrets section because I think it is covered under the "extension permissions" term. Please let me know if I should put it back and if you have a suggestion for the definition.
+ +Service accounts:: You must define RBAC so that the installation service account can create and manage the service accounts for the extension controllers. diff --git a/snippets/example-pipelines-installer-clusterrole.yaml b/snippets/example-pipelines-installer-clusterrole.yaml new file mode 100644 index 000000000000..0b7b2dda35c3 --- /dev/null +++ b/snippets/example-pipelines-installer-clusterrole.yaml @@ -0,0 +1,550 @@ +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: pipelines-installer-clusterrole +rules: +- apiGroups: + - olm.operatorframework.io + resources: + - clusterextensions/finalizers + verbs: + - update + # Scoped to the name of the ClusterExtension + resourceNames: + - pipes # the value from from the extension's custom resource (CR) +# ClusterRoles and ClusterRoleBindings for the controllers of the extension +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterroles + verbs: + - create + - list + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterroles + verbs: + - get + - update + - patch + - delete + resourceNames: + - "*" +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings + verbs: + - create + - list + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings + verbs: + - get + - update + - patch + - delete + resourceNames: + - "*" +# Extension's custom resource definitions +- apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - create + - list + - watch +- apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - get + - update + - patch + - delete + resourceNames: + - manualapprovalgates.operator.tekton.dev + - openshiftpipelinesascodes.operator.tekton.dev + - tektonaddons.operator.tekton.dev + - tektonchains.operator.tekton.dev + - tektonconfigs.operator.tekton.dev + - tektonhubs.operator.tekton.dev + - tektoninstallersets.operator.tekton.dev + - tektonpipelines.operator.tekton.dev + - tektonresults.operator.tekton.dev + - tektontriggers.operator.tekton.dev +- apiGroups: + - '' + resources: + - nodes + - pods + - services + - endpoints + - persistentvolumeclaims + - events + - configmaps + - secrets + - pods/log + - limitranges + verbs: + - create + - list + - watch + - delete + - deletecollection + - patch + - get + - update +- apiGroups: + - extensions + - apps + resources: + - ingresses + - ingresses/status + verbs: + - create + - list + - watch + - delete + - patch + - get + - update +- apiGroups: + - '' + resources: + - namespaces + verbs: + - get + - list + - create + - update + - delete + - patch + - watch +- apiGroups: + - apps + resources: + - deployments + - daemonsets + - replicasets + - statefulsets + - deployments/finalizers + verbs: + - delete + - deletecollection + - create + - patch + - get + - list + - update + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + verbs: + - get + - create + - delete +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterroles + - roles + verbs: + - delete + - deletecollection + - create + - patch + - get + - list + - update + - watch + - bind + - escalate +- apiGroups: + - '' + resources: + - serviceaccounts + verbs: + - get + - list + - create + - update + - delete + - patch + - watch + - impersonate +- apiGroups: + - rbac.authorization.k8s.io + resources: + - clusterrolebindings + - rolebindings + verbs: + - get + - update + - delete + - patch + - create + - list + - 
watch +- apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + - customresourcedefinitions/status + verbs: + - get + - create + - update + - delete + - list + - patch + - watch +- apiGroups: + - admissionregistration.k8s.io + resources: + - mutatingwebhookconfigurations + - validatingwebhookconfigurations + verbs: + - get + - list + - create + - update + - delete + - patch + - watch +- apiGroups: + - build.knative.dev + resources: + - builds + - buildtemplates + - clusterbuildtemplates + verbs: + - get + - list + - create + - update + - delete + - patch + - watch +- apiGroups: + - extensions + resources: + - deployments + verbs: + - get + - list + - create + - update + - delete + - patch + - watch +- apiGroups: + - extensions + resources: + - deployments/finalizers + verbs: + - get + - list + - create + - update + - delete + - patch + - watch +- apiGroups: + - operator.tekton.dev + resources: + - '*' + - tektonaddons + verbs: + - delete + - deletecollection + - create + - patch + - get + - list + - update + - watch +- apiGroups: + - tekton.dev + - triggers.tekton.dev + - operator.tekton.dev + - pipelinesascode.tekton.dev + resources: + - '*' + verbs: + - add + - delete + - deletecollection + - create + - patch + - get + - list + - update + - watch +- apiGroups: + - dashboard.tekton.dev + resources: + - '*' + - tektonaddons + verbs: + - delete + - deletecollection + - create + - patch + - get + - list + - update + - watch +- apiGroups: + - security.openshift.io + resources: + - securitycontextconstraints + verbs: + - use + - get + - list + - create + - update + - delete +- apiGroups: + - events.k8s.io + resources: + - events + verbs: + - create +- apiGroups: + - route.openshift.io + resources: + - routes + verbs: + - delete + - deletecollection + - create + - patch + - get + - list + - update + - watch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - get + - list + - create + - update + - delete + - patch + - watch +- apiGroups: + - console.openshift.io + resources: + - consoleyamlsamples + - consoleclidownloads + - consolequickstarts + - consolelinks + verbs: + - delete + - deletecollection + - create + - patch + - get + - list + - update + - watch +- apiGroups: + - autoscaling + resources: + - horizontalpodautoscalers + verbs: + - delete + - create + - patch + - get + - list + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - delete + - deletecollection + - create + - patch + - get + - list + - update + - watch +- apiGroups: + - monitoring.coreos.com + resources: + - servicemonitors + verbs: + - delete + - deletecollection + - create + - patch + - get + - list + - update + - watch +- apiGroups: + - batch + resources: + - jobs + - cronjobs + verbs: + - delete + - deletecollection + - create + - patch + - get + - list + - update + - watch +- apiGroups: + - '' + resources: + - namespaces/finalizers + verbs: + - update +- apiGroups: + - resolution.tekton.dev + resources: + - resolutionrequests + - resolutionrequests/status + verbs: + - get + - list + - watch + - create + - delete + - update + - patch +- apiGroups: + - console.openshift.io + resources: + - consoleplugins + verbs: + - get + - list + - watch + - create + - delete + - update + - patch +# Deployments specified in install.spec.deployments +- apiGroups: + - apps + resources: + - deployments + verbs: + - create + - list + - watch +- apiGroups: + - apps + resources: + - deployments + verbs: + - get + - update + - patch + - delete + # 
scoped to the extension controller deployment name
+  resourceNames:
+  - openshift-pipelines-operator
+  - tekton-operator-webhook
+# Service accounts in the CSV
+- apiGroups:
+  - ""
+  resources:
+  - serviceaccounts
+  verbs:
+  - create
+  - list
+  - watch
+- apiGroups:
+  - ""
+  resources:
+  - serviceaccounts
+  verbs:
+  - get
+  - update
+  - patch
+  - delete
+  # scoped to the extension controller's deployment service account
+  resourceNames:
+  - openshift-pipelines-operator
+# Services
+- apiGroups:
+  - ""
+  resources:
+  - services
+  verbs:
+  - create
+- apiGroups:
+  - ""
+  resources:
+  - services
+  verbs:
+  - get
+  - list
+  - watch
+  - update
+  - patch
+  - delete
+  # scoped to the service name
+  resourceNames:
+  - openshift-pipelines-operator-monitor
+  - tekton-operator
+  - tekton-operator-webhook
+# configmaps
+- apiGroups:
+  - ""
+  resources:
+  - configmaps
+  verbs:
+  - create
+- apiGroups:
+  - ""
+  resources:
+  - configmaps
+  verbs:
+  - get
+  - list
+  - watch
+  - update
+  - patch
+  - delete
+  # scoped to the configmap name
+  resourceNames:
+  - config-logging
+  - tekton-config-defaults
+  - tekton-config-observability
+  - tekton-operator-controller-config-leader-election
+  - tekton-operator-info
+  - tekton-operator-webhook-config-leader-election
+- apiGroups:
+  - operator.tekton.dev
+  resources:
+  - tekton-config-read-role
+  - tekton-result-read-role
+  verbs:
+  - get
+  - watch
+  - list
+---
diff --git a/snippets/olmv1-manual-rbac-scoping-admonition.adoc b/snippets/olmv1-manual-rbac-scoping-admonition.adoc
new file mode 100644
index 000000000000..09e40d7cb6f9
--- /dev/null
+++ b/snippets/olmv1-manual-rbac-scoping-admonition.adoc
@@ -0,0 +1,11 @@
+// Text snippet included in the following modules:
+//
+// * modules/olmv1-cluster-extension-permissions.adoc
+// * modules/olmv1-creating-a-cluster-role.adoc
+
+:_mod-docs-content-type: SNIPPET
+
+[IMPORTANT]
+====
+To simplify the following procedure and improve readability, the following example manifest uses permissions that are scoped to the cluster. You can further restrict some of the permissions by scoping them to the namespace of the extension instead of the cluster.
+====

From 7cc5a2bfdf18897e09c05f85d9075d0126e100f4 Mon Sep 17 00:00:00 2001
From: Andrea Hoffer
Date: Wed, 12 Feb 2025 15:30:12 -0500
Subject: [PATCH 266/669] OSDOCS#11611: Noting not to include the port number

---
 modules/customize-certificates-api-add-named.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/customize-certificates-api-add-named.adoc b/modules/customize-certificates-api-add-named.adoc
index 3984d11e721d..53e5e4f6dc44 100644
--- a/modules/customize-certificates-api-add-named.adoc
+++ b/modules/customize-certificates-api-add-named.adoc
@@ -69,7 +69,7 @@ $ oc patch apiserver cluster \
      [{"names": ["<FQDN>"], //<1>
      "servingCertificate": {"name": "<secret>"}}]}}}' <2>
 ----
-<1> Replace `<FQDN>` with the FQDN that the API server should provide the certificate for.
+<1> Replace `<FQDN>` with the FQDN that the API server should provide the certificate for. Do not include the port number.
 
 <2> Replace `<secret>` with the name used for the secret in the previous step.
 
 .
Examine the `apiserver/cluster` object and confirm the secret is now From 0707df315fbf251c4435449b88e2818e702ded72 Mon Sep 17 00:00:00 2001 From: mletalie Date: Thu, 13 Feb 2025 15:26:23 -0500 Subject: [PATCH 267/669] SDN OVN Fix --- modules/migrate-sdn-ovn-osd.adoc | 2 +- modules/migrate-sdn-ovn.adoc | 11 ++++++----- networking/about-managed-networking.adoc | 17 ++++++++++++++++- .../migrate-from-openshift-sdn-osd.adoc | 4 ++-- rosa_release_notes/rosa-release-notes.adoc | 9 ++++++--- 5 files changed, 31 insertions(+), 12 deletions(-) diff --git a/modules/migrate-sdn-ovn-osd.adoc b/modules/migrate-sdn-ovn-osd.adoc index 5a4320c6559c..7b9b8b9223d3 100644 --- a/modules/migrate-sdn-ovn-osd.adoc +++ b/modules/migrate-sdn-ovn-osd.adoc @@ -3,7 +3,7 @@ :_mod-docs-content-type: PROCEDURE [id="migrate-sdn-ovn-ocm-cli_{context}"] -= Initiate migration using the OpenShift Cluster Manager API command-line interface (ocm) CLI += Initiating migration using the OpenShift Cluster Manager API command-line interface (ocm) CLI [WARNING] ==== diff --git a/modules/migrate-sdn-ovn.adoc b/modules/migrate-sdn-ovn.adoc index 904bfd953b31..e32d9493d633 100644 --- a/modules/migrate-sdn-ovn.adoc +++ b/modules/migrate-sdn-ovn.adoc @@ -3,7 +3,7 @@ :_mod-docs-content-type: PROCEDURE [id="migrate-sdn-ovn-cli_{context}"] -= Initiate migration using the ROSA CLI += Initiating migration using the ROSA CLI [WARNING] ==== @@ -25,11 +25,12 @@ $ rosa edit cluster -c <1> You cannot include the optional flag `--ovn-internal-subnets` in the command unless you define a value for the flag `--network-type`. ==== -:_mod-docs-content-type: PROCEDURE -[id="verify-sdn-ovn_{context}"] -= Verify migration status using the ROSA CLI +.Verification + +* To check the status of the migration, run the following command: + ++ -To check the status of the migration, run the following command: [source,terminal] ---- $ rosa describe cluster -c <1> diff --git a/networking/about-managed-networking.adoc b/networking/about-managed-networking.adoc index 059075201f05..44b2fc3bad9d 100644 --- a/networking/about-managed-networking.adoc +++ b/networking/about-managed-networking.adoc @@ -23,8 +23,17 @@ ifdef::openshift-rosa[] ==== Before upgrading {rosa-classic} clusters that are configured with the OpenShift SDN network plugin to version 4.17, you must migrate to the OVN-Kubernetes network plugin. For more information, see _Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin_ in the _Additional resources_ section. ==== - endif::openshift-rosa[] + +ifdef::openshift-dedicated[] + +[IMPORTANT] +==== +Before upgrading {product-title} clusters that are configured with the OpenShift SDN network plugin to version 4.17, you must migrate to the OVN-Kubernetes network plugin. For more information, see _Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin_ in the _Additional resources_ section. 
+==== +endif::openshift-dedicated[] + + [discrete] [role="_additional-resources"] [id="additional-resources_{context}"] @@ -34,3 +43,9 @@ endif::openshift-rosa[] ifdef::openshift-rosa[] * xref:../networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc#migrate-from-openshift-sdn[Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin] endif::openshift-rosa[] + +ifdef::openshift-dedicated[] + +* xref:../networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn-osd.adoc#migrate-from-openshift-sdn-osd[Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin] +endif::openshift-dedicated[] + diff --git a/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn-osd.adoc b/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn-osd.adoc index 190a8b23b273..3624498acffd 100644 --- a/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn-osd.adoc +++ b/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn-osd.adoc @@ -3,7 +3,7 @@ = Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin include::_attributes/common-attributes.adoc[] include::_attributes/attributes-openshift-dedicated.adoc[] -:context: migrate-from-openshift-sdn +:context: migrate-from-openshift-sdn-osd toc::[] @@ -21,4 +21,4 @@ Some considerations before starting migration initiation are: include::modules/migrate-sdn-ovn-osd.adoc[leveloffset=+1] .Additional resources -link:https://docs.openshift.com/container-platform/4.16/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.html#patching-ovnk-address-ranges_migrate-from-openshift-sdn[Patching OVN-Kubernetes address ranges] \ No newline at end of file +* link:https://docs.openshift.com/container-platform/4.16/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.html#patching-ovnk-address-ranges_migrate-from-openshift-sdn[Patching OVN-Kubernetes address ranges] \ No newline at end of file diff --git a/rosa_release_notes/rosa-release-notes.adoc b/rosa_release_notes/rosa-release-notes.adoc index db021b8d5259..147900243cc2 100644 --- a/rosa_release_notes/rosa-release-notes.adoc +++ b/rosa_release_notes/rosa-release-notes.adoc @@ -26,9 +26,12 @@ ifdef::openshift-rosa[] + // * **{product-title} SDN network plugin blocks future major upgrades** * **Initiate live migration from OpenShift SDN to OVN-Kubernetes.** -As part of the {product-title} move to OVN-Kubernetes as the only supported network plugin starting with {product-title} 4.17, users can now initiate live migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin. -If your cluster uses the OpenShift SDN network plugin, you cannot upgrade to future major versions of {product-title} without migrating to OVN-Kubernetes. For more information about migrating to OVN-Kubernetes, see xref:../networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc#migrate-from-openshift-sdn[Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin]. - +As part of the {product-title} move to OVN-Kubernetes as the only supported network plugin starting with {product-title} version 4.17, users can now initiate live migration from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin. ++ +If your cluster uses the OpenShift SDN network plugin, you cannot upgrade to future major versions of {product-title} without migrating to OVN-Kubernetes. 
++ +For more information about migrating to OVN-Kubernetes, see xref:../networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc#migrate-from-openshift-sdn[Migrating from OpenShift SDN network plugin to OVN-Kubernetes network plugin]. ++ [IMPORTANT] ==== Egress lockdown is a Technology Preview feature. From b4fd394a82ee7ae76163b4253fb84374a0e68f7e Mon Sep 17 00:00:00 2001 From: sbeskin Date: Tue, 18 Feb 2025 18:12:37 +0200 Subject: [PATCH 268/669] CNV-49587 --- ...viewing-network-state-of-node-console.adoc | 23 +++++++++++++++++-- 1 file changed, 21 insertions(+), 2 deletions(-) diff --git a/modules/virt-viewing-network-state-of-node-console.adoc b/modules/virt-viewing-network-state-of-node-console.adoc index af7bfbc14899..155e004df95a 100644 --- a/modules/virt-viewing-network-state-of-node-console.adoc +++ b/modules/virt-viewing-network-state-of-node-console.adoc @@ -4,7 +4,7 @@ :_mod-docs-content-type: PROCEDURE [id="virt-viewing-network-state-of-node-console_{context}"] -= Viewing the network state of a node from the web console += Viewing the network state of a node (NNS) from the web console As an administrator, you can use the {product-title} web console to observe `NodeNetworkState` resources and network interfaces, and access network details. @@ -15,5 +15,24 @@ In the *NodeNetworkState* page, you can view the list of `NodeNetworkState` reso . To access the detailed information about `NodeNetworkState` resource, click the `NodeNetworkState` resource name listed in the *Name* column . -. to expand and view the *Network Details* section for the `NodeNetworkState` resource, click the *>* icon . Alternatively, you can click on each interface type under the *Network interface* column to view the network details. +. To expand and view the *Network Details* section for the `NodeNetworkState` resource, click the greater than (*>*) symbol . Alternatively, you can click on each interface type under the *Network interface* column to view the network details. + +[id="virt-viewing-graphical-representation-of-nns-topology_{context}"] +== Viewing a graphical representation of the NNS topology + +:FeatureName: NNS topology view +include::snippets/technology-preview.adoc[] + +To make the configuration of the node network in the cluster easier to understand, you can view it in the form of a diagram. The NNS topology diagram displays all node components (network interface controllers, bridges, bonds, and VLANs), their properties and configurations, and connections between the nodes. + +To open the topology view of the cluster, do the following: + +. In the *Administrator* view of the web console, navigate to *Networking* -> *NodeNetworkState*. +. In the upper-right corner of the page, click the *Topology* icon. ++ +The NNS topology diagram opens. Each group of components represents a single node. ++ +* To display the configuration and propertires of a node, click inside the border of the node. +* To display the features or the YAML file of a specific component (for example, an interface or a bridge), click the icon of the component. +* The icons of active components have green borders; the icons of disconnected components have red borders. 
From 696482518bbbd43f84e8f294fa29e42e1f67554c Mon Sep 17 00:00:00 2001 From: Lisa Pettyjohn Date: Tue, 18 Feb 2025 16:32:03 -0500 Subject: [PATCH 269/669] OSDOCS-12890-fix#Add anchor for xrefing --- .../persistent-storage-csi-gcp-pd.adoc | 1 + 1 file changed, 1 insertion(+) diff --git a/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc b/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc index 0de2013920bd..58b3ab5831c8 100644 --- a/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc +++ b/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc @@ -29,6 +29,7 @@ ifndef::openshift-dedicated[] ==== endif::openshift-dedicated[] +[id="c3-instance-type-for-bare-metal-and-n4-machine-series"] == C3 instance type for bare metal and N4 machine series include::modules/persistent-storage-csi-gcp-hyperdisk-limitations.adoc[leveloffset=+2] From b72b19445eb0ee865df0752dafff74c5577e319d Mon Sep 17 00:00:00 2001 From: Lisa Pettyjohn Date: Thu, 12 Dec 2024 13:00:08 -0500 Subject: [PATCH 270/669] OSDOCS-????#Group Volume Snapshots (GA) --- _topic_maps/_topic_map.yml | 2 + ...sistent-storage-csi-drivers-supported.adoc | 53 +++---- ...orage-csi-group-snapshot-create-admin.adoc | 43 ++++++ ...nt-storage-csi-group-snapshots-create.adoc | 132 ++++++++++++++++++ ...orage-csi-group-snapshots-limitations.adoc | 13 ++ ...-storage-csi-group-snapshots-overview.adoc | 26 ++++ ...t-storage-csi-group-snapshots-restore.adoc | 68 +++++++++ ...ersistent-storage-csi-group-snapshots.adoc | 29 ++++ 8 files changed, 341 insertions(+), 25 deletions(-) create mode 100644 modules/persistent-storage-csi-group-snapshot-create-admin.adoc create mode 100644 modules/persistent-storage-csi-group-snapshots-create.adoc create mode 100644 modules/persistent-storage-csi-group-snapshots-limitations.adoc create mode 100644 modules/persistent-storage-csi-group-snapshots-overview.adoc create mode 100644 modules/persistent-storage-csi-group-snapshots-restore.adoc create mode 100644 storage/container_storage_interface/persistent-storage-csi-group-snapshots.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index ad0de2176d4a..6c52e1828d2b 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -1765,6 +1765,8 @@ Topics: File: ephemeral-storage-csi-inline - Name: CSI volume snapshots File: persistent-storage-csi-snapshots + - Name: CSI volume group snapshots + File: persistent-storage-csi-group-snapshots - Name: CSI volume cloning File: persistent-storage-csi-cloning - Name: Managing the default storage class diff --git a/modules/persistent-storage-csi-drivers-supported.adoc b/modules/persistent-storage-csi-drivers-supported.adoc index f055d8383518..de1f0dac2141 100644 --- a/modules/persistent-storage-csi-drivers-supported.adoc +++ b/modules/persistent-storage-csi-drivers-supported.adoc @@ -35,44 +35,42 @@ In addition to the drivers listed in the following table, ROSA functions with CS endif::openshift-rosa,openshift-aro[] .Supported CSI drivers and features in {product-title} -[cols=",^v,^v,^v,^v,^v width="100%",options="header"] +[cols=",^v,^v,^v,^v,^v,^v width="100%",options="header"] |=== -|CSI driver |CSI volume snapshots |CSI cloning |CSI resize |Inline ephemeral volumes -|AWS EBS | ✅ | | ✅| -|AWS EFS | | | | +|CSI driver |CSI volume snapshots |CSI volume group snapshots ^[1]^ |CSI cloning |CSI resize |Inline ephemeral volumes +|AWS EBS | ✅ | | | ✅| +|AWS EFS | | | | | ifndef::openshift-rosa[] -|Google Compute Platform (GCP) 
persistent disk (PD)| ✅| ✅^[5]^ | ✅|
-|GCP Filestore | ✅ | | ✅|
+|Google Cloud Platform (GCP) persistent disk (PD)| ✅| |✅^[2]^ | ✅|
+|GCP Filestore | ✅ | | | ✅|
endif::openshift-rosa[]
ifndef::openshift-dedicated,openshift-rosa[]
-|{ibm-power-server-name} Block | | | ✅ |
-|{ibm-cloud-name} Block | ✅^[3]^ | | ✅^[3]^|
+|{ibm-power-server-name} Block | | | | ✅ |
+|{ibm-cloud-name} Block | ✅^[3]^ | | | ✅^[3]^|
endif::openshift-dedicated,openshift-rosa[]
-|LVM Storage | ✅ | ✅ | ✅ |
+|LVM Storage | ✅ | | ✅ | ✅ |
ifndef::openshift-dedicated,openshift-rosa[]
-|Microsoft Azure Disk | ✅ | ✅ | ✅|
-|Microsoft Azure Stack Hub | ✅ | ✅ | ✅|
-|Microsoft Azure File | ✅^[4]^ | ✅^[4]^ | ✅| ✅
-|OpenStack Cinder | ✅ | ✅ | ✅|
-|OpenShift Data Foundation | ✅ | ✅ | ✅|
-|OpenStack Manila | ✅ | | ✅ |
-|CIFS/SMB | | ✅ | |
-|VMware vSphere | ✅^[1]^ | | ✅^[2]^|
+|Microsoft Azure Disk | ✅ | | ✅ | ✅|
+|Microsoft Azure Stack Hub | ✅ | | ✅ | ✅|
+|Microsoft Azure File | ✅^[4]^ | | ✅^[4]^ | ✅| ✅
+|OpenStack Cinder | ✅ | | ✅ | ✅|
+|OpenShift Data Foundation | ✅ | ✅ | ✅ | ✅|
+|OpenStack Manila | ✅ | | | ✅ |
+|Shared Resource | | | | | ✅
+|CIFS/SMB | | | ✅ | |
+|VMware vSphere | ✅^[5]^ | | | ✅^[6]^|
endif::openshift-dedicated,openshift-rosa[]
|===
ifndef::openshift-dedicated,openshift-rosa[]
--
1.
-* Requires vSphere version 7.0 Update 3 or later for both vCenter Server and ESXi.
-
-* Does not support fileshare volumes.
+:FeatureName: CSI volume group snapshots
+include::snippets/technology-preview.adoc[leveloffset=+1]

2.
-* Offline volume expansion: minimum required vSphere version is 6.7 Update 3 P06
-
-* Online volume expansion: minimum required vSphere version is 7.0 Update 2.
+* Cloning is not supported on hyperdisk-balanced disks with storage pools.

3.

@@ -85,11 +83,16 @@ ifndef::openshift-dedicated,openshift-rosa[]

4.

* Azure File cloning and snapshot are Technology Preview features:

:FeatureName: Azure File CSI cloning and snapshot
-include::snippets/technology-preview.adoc[leveloffset=+2]
+include::snippets/technology-preview.adoc[leveloffset=+1]

5.
-* Cloning is not supported on hyperdisk-balanced disks with storage pools.
+* Requires vSphere version 7.0 Update 3 or later for both vCenter Server and ESXi.
+
+* Does not support fileshare volumes.
+
+6.
+* Online expansion is supported from vSphere version 7.0 Update 2 and later.
--
endif::openshift-dedicated,openshift-rosa[]
\ No newline at end of file
diff --git a/modules/persistent-storage-csi-group-snapshot-create-admin.adoc b/modules/persistent-storage-csi-group-snapshot-create-admin.adoc
new file mode 100644
index 000000000000..2ace7dae60ee
--- /dev/null
+++ b/modules/persistent-storage-csi-group-snapshot-create-admin.adoc
@@ -0,0 +1,43 @@
+// Module included in the following assemblies:
+//
+// * storage/container_storage_interface/persistent-storage-csi-group-snapshots.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="persistent-storage-csi-group-snapshots-create-admin_{context}"]
+= Creating a volume group snapshot class
+
+Before you can create volume group snapshots, the cluster administrator must create a `VolumeGroupSnapshotClass` object.
+
+This object describes how volume group snapshots should be created, including the driver information and the deletion policy.
+
+.Prerequisites
+* Logged in to a running {product-title} cluster with administrator privileges.
+
+* Enabled this feature using feature gates. For information about how to use feature gates, see _Enabling feature sets by using feature gates_.
+
+.Procedure
+
+To create a `VolumeGroupSnapshotClass`:
+
+. Create a `VolumeGroupSnapshotClass` YAML file using the following example file:
++
+.Example volume group snapshot class YAML file
+[source, yaml]
+----
+apiVersion: groupsnapshot.storage.k8s.io/v1beta1
+kind: VolumeGroupSnapshotClass <1>
+metadata:
+  name: csi-hostpath-groupsnapclass <2>
+deletionPolicy: Delete
+driver: hostpath.csi.k8s.io
+# ...
+----
+<1> Specifies the `VolumeGroupSnapshotClass` object.
+<2> Name of the `VolumeGroupSnapshotClass`.
+
+. Create the `VolumeGroupSnapshotClass` object by running the following command:
++
+[source,terminal]
+----
+$ oc create -f <file_name>.yaml
+----
diff --git a/modules/persistent-storage-csi-group-snapshots-create.adoc b/modules/persistent-storage-csi-group-snapshots-create.adoc
new file mode 100644
index 000000000000..0ceb001bb380
--- /dev/null
+++ b/modules/persistent-storage-csi-group-snapshots-create.adoc
@@ -0,0 +1,132 @@
+// Module included in the following assemblies:
+//
+// * storage/container_storage_interface/persistent-storage-csi-group-snapshots.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="persistent-storage-csi-group-snapshots-create_{context}"]
+= Creating a volume group snapshot
+
+When you create a `VolumeGroupSnapshot` object, {product-title} creates a volume group snapshot.
+
+.Prerequisites
+* Logged in to a running {product-title} cluster.
+* Enabled this feature using feature gates. For information about how to use feature gates, see _Enabling feature sets by using feature gates_.
+* The persistent volume claims (PVCs) that you want to group for the snapshot have been created using a CSI driver that supports `VolumeGroupSnapshot` objects.
+* A storage class to provision the storage back end.
+* Administrator has created the `VolumeGroupSnapshotClass` object.
+
+.Procedure
+
+To create a volume group snapshot:
+
+. Locate, or create, the PVCs that you want to include in the volume group snapshot:
++
+[source, terminal]
+----
+$ oc get pvc
+----
++
+.Example command output
+[source, terminal]
+----
+NAME    STATUS   VOLUME                                     CAPACITY   ACCESSMODES   AGE
+pvc-0   Bound    pvc-a42d7ea2-e3df-11ed-b5ea-0242ac120002   1Gi        RWO           48s
+pvc-1   Bound    pvc-a42d81b8-e3df-11ed-b5ea-0242ac120002   1Gi        RWO           48s
+----
++
+This example uses two PVCs.
+
+. Label the PVCs to belong to a snapshot group:
+.. Label PVC `pvc-0` by running the following command:
++
+[source, terminal]
+----
+$ oc label pvc pvc-0 group=myGroup
+----
++
+.Example output
+[source, terminal]
+----
+persistentvolumeclaim/pvc-0 labeled
+----
+
+.. Label PVC `pvc-1` by running the following command:
++
+[source, terminal]
+----
+$ oc label pvc pvc-1 group=myGroup
+----
++
+.Example output
+[source, terminal]
+----
+persistentvolumeclaim/pvc-1 labeled
+----
++
+In this example, you are labeling PVC "pvc-0" and "pvc-1" to belong to group "myGroup". You can confirm the grouping with the optional check that follows.
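++
+The following verification sketch is an illustrative addition, not part of the original procedure. It lists PVCs by label selector and assumes only the `group=myGroup` label applied by the preceding commands:
++
+[source, terminal]
+----
+$ oc get pvc -l group=myGroup
+----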
+
+. Create a `VolumeGroupSnapshot` object to specify your volume group snapshot:
+.. Create a `VolumeGroupSnapshot` object YAML file with the following example file:
++
+.Example VolumeGroupSnapshot YAML file
+[source, yaml]
+----
+apiVersion: groupsnapshot.storage.k8s.io/v1beta1
+kind: VolumeGroupSnapshot <1>
+metadata:
+  name: <volume_group_snapshot_name> <2>
+  namespace: <namespace> <3>
+spec:
+  volumeGroupSnapshotClassName: <volume_group_snapshot_class_name> <4>
+  source:
+    selector:
+      matchLabels:
+        group: myGroup <5>
+----
+<1> The `VolumeGroupSnapshot` object requests creation of a volume group snapshot for multiple PVCs.
+<2> Name of the volume group snapshot.
+<3> Namespace for the volume group snapshot.
+<4> The `VolumeGroupSnapshotClass` name. This object is created by the administrator and describes how volume group snapshots should be created.
+<5> The label used to group the desired PVCs for the snapshot. In this example, it is `group: myGroup`.
+
+.. Create the `VolumeGroupSnapshot` object by running the following command:
++
+[source,terminal]
+----
+$ oc create -f <file_name>.yaml
+----
+
+.Results
+Individual volume snapshots are created according to how many PVCs were specified as part of the volume group snapshot.
+
+These individual volume snapshots are given generated names with a `snapshot-` prefix, as shown in the following example:
+
+.Example individual volume snapshot
+[source, yaml]
+----
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshot
+metadata:
+  name: snapshot-4dc1c53a29538b36e85003503a4bcac5dbde4cff59e81f1e3bb80b6c18c3fd03
+  namespace: default
+  ownerReferences:
+  - apiVersion: groupsnapshot.storage.k8s.io/v1beta1
+    kind: VolumeGroupSnapshot
+    name: my-groupsnapshot
+    uid: ba2d60c5-5082-4279-80c2-daa85f0af354
+  resourceVersion: "124503"
+  uid: c0137282-f161-4e86-92c1-c41d36c6d04c
+spec:
+  source:
+    persistentVolumeClaimName: pvc-1
+status:
+  volumeGroupSnapshotName: volume-group-snapshot-name
+----
+
+In the preceding example, two individual volume snapshots are created as part of the volume group snapshot:
+
+[source, terminal]
+----
+snapshot-4dc1c53a29538b36e85003503a4bcac5dbde4cff59e81f1e3bb80b6c18c3fd03
+snapshot-fbfe59eff570171765df664280910c3bf1a4d56e233a5364cd8cb0152a35965b
+----
\ No newline at end of file
diff --git a/modules/persistent-storage-csi-group-snapshots-limitations.adoc b/modules/persistent-storage-csi-group-snapshots-limitations.adoc
new file mode 100644
index 000000000000..b9df79c62869
--- /dev/null
+++ b/modules/persistent-storage-csi-group-snapshots-limitations.adoc
@@ -0,0 +1,13 @@
+// Module included in the following assemblies:
+//
+// * storage/container_storage_interface/persistent-storage-csi-group-snapshots.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="persistent-storage-csi-group-snapshots-limitations_{context}"]
+= CSI volume group snapshots limitations
+
+Volume group snapshots have the following limitations:
+
+* Does not support reverting an existing persistent volume claim (PVC) to an earlier state represented by a snapshot. It only supports provisioning a new volume from a snapshot.
+
+* No guarantees of application consistency, for example, crash consistency, are provided beyond those provided by the storage system. For more information about application consistency, see link:https://github.com/kubernetes/community/blob/master/wg-data-protection/data-protection-workflows-white-paper.md#quiesce-and-unquiesce-hooks[Quiesce and Unquiesce Hooks].
\ No newline at end of file
diff --git a/modules/persistent-storage-csi-group-snapshots-overview.adoc b/modules/persistent-storage-csi-group-snapshots-overview.adoc
new file mode 100644
index 000000000000..da8eec60735e
--- /dev/null
+++ b/modules/persistent-storage-csi-group-snapshots-overview.adoc
@@ -0,0 +1,26 @@
+// Module included in the following assemblies:
+//
+// * storage/container_storage_interface/persistent-storage-csi-group-snapshots.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="persistent-storage-csi-group-snapshots-overview_{context}"]
+= Overview of CSI volume group snapshots
+
+A _snapshot_ represents the state of the storage volume in a cluster at a particular point in time. Volume snapshots can be used to provision a new volume.
+
+A _volume group snapshot_ uses a label selector to group multiple persistent volume claims for snapshotting. A volume group snapshot represents copies from multiple volumes that are taken at the same point in time. This can be useful for applications that contain multiple volumes.
+
+Container Storage Interface (CSI) volume group snapshots must be supported by the CSI driver. {rh-storage} supports volume group snapshots.
+
+Volume group snapshots provide three new API objects for managing snapshots:
+
+`VolumeGroupSnapshot`::
+Requests creation of a volume group snapshot for multiple persistent volume claims. It contains information about the volume group snapshot operation, such as the timestamp when the volume group snapshot was taken, and whether it is ready to use.
+
+`VolumeGroupSnapshotContent`::
+Created by the snapshot controller for a dynamically created `VolumeGroupSnapshot`. It contains information about the volume group snapshot, including the volume group snapshot ID. This object represents a provisioned resource on the cluster (a group snapshot). The `VolumeGroupSnapshotContent` object binds to the volume group snapshot for which it was created with a one-to-one mapping.
+
+`VolumeGroupSnapshotClass`::
+Created by cluster administrators to describe how volume group snapshots should be created, including the driver information and the deletion policy.
+
+These three API kinds are defined as `CustomResourceDefinitions` (CRDs). These CRDs must be installed in a {product-title} cluster for a CSI driver to support volume group snapshots.
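+
+For example, you can verify that these CRDs are present by listing the CRDs in the `groupsnapshot.storage.k8s.io` API group. This check is an illustrative addition; the group name is taken from the API versions used in the examples in this document:
+
+[source,terminal]
+----
+$ oc get crd | grep groupsnapshot.storage.k8s.io
+----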
\ No newline at end of file
diff --git a/modules/persistent-storage-csi-group-snapshots-restore.adoc b/modules/persistent-storage-csi-group-snapshots-restore.adoc
new file mode 100644
index 000000000000..0437ce6e1be0
--- /dev/null
+++ b/modules/persistent-storage-csi-group-snapshots-restore.adoc
@@ -0,0 +1,68 @@
+// Module included in the following assemblies:
+//
+// * storage/container_storage_interface/persistent-storage-csi-group-snapshots.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="persistent-storage-csi-group-snapshots-restore_{context}"]
+= Restoring a volume group snapshot
+
+You can use the contents of a `VolumeGroupSnapshot` custom resource (CR) to restore the existing volumes to a previous state.
+
+To restore existing volumes, you can request a new persistent volume claim (PVC) to be created from a `VolumeSnapshot` object that is part of a `VolumeGroupSnapshot`. This triggers provisioning of a new volume that is populated with data from the specified snapshot. Repeat this process until all volumes are created from all the snapshots that are part of a volume group snapshot.
+
+.Prerequisites
+* Logged in to a running {product-title} cluster.
+* PVC has been created using a Container Storage Interface (CSI) driver that supports volume group snapshots.
+* A storage class to provision the storage back end.
+* A volume group snapshot has been created and is ready to use.
+
+.Procedure
+
+To restore existing volumes to a previous state from a volume group snapshot:
+
+. Specify a `VolumeSnapshot` data source from a volume group snapshot for a PVC as shown in the following example:
++
+.Example restore PVC YAML file
+[source, yaml]
+----
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: <restore_pvc_name> <1>
+  namespace: <namespace> <2>
+spec:
+  storageClassName: csi-hostpath-sc
+  dataSource:
+    name: snapshot-fbfe59eff570171765df664280910c3bf1a4d56e233a5364cd8cb0152a35965b <3>
+    kind: VolumeSnapshot <4>
+    apiGroup: snapshot.storage.k8s.io <5>
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
+----
+<1> Name of the restore PVC.
+<2> Name of the namespace.
+<3> Name of an individual volume snapshot that is part of the volume group snapshot to use as the source.
+<4> Must be set to the `VolumeSnapshot` value.
+<5> Must be set to the `snapshot.storage.k8s.io` value.
+
+. Create the PVC by running the following command:
++
+[source,terminal]
+----
+$ oc create -f <file_name>.yaml <1>
+----
+<1> Name of the PVC restore file specified in the preceding step.
+
+. Verify that the restored PVC has been created by running the following command:
++
+[source,terminal]
+----
+$ oc get pvc
+----
++
+A new PVC with the name you specified in the first step appears.
+
+. Repeat the procedure as needed until all volumes are created from all the snapshots that are part of a volume group snapshot.
diff --git a/storage/container_storage_interface/persistent-storage-csi-group-snapshots.adoc b/storage/container_storage_interface/persistent-storage-csi-group-snapshots.adoc
new file mode 100644
index 000000000000..329e1645b693
--- /dev/null
+++ b/storage/container_storage_interface/persistent-storage-csi-group-snapshots.adoc
@@ -0,0 +1,29 @@
+:_mod-docs-content-type: ASSEMBLY
+[id="persistent-storage-csi-group-snapshots"]
+= CSI volume group snapshots
+include::_attributes/common-attributes.adoc[]
+:context: persistent-storage-csi-group-snapshots
+
+toc::[]
+
+This document describes how to use volume group snapshots with supported Container Storage Interface (CSI) drivers to help protect against data loss in {product-title}. Familiarity with xref:../../storage/understanding-persistent-storage.adoc#persistent-volumes_understanding-persistent-storage[persistent volumes] is suggested.
+
+:FeatureName: CSI volume group snapshots
+include::snippets/technology-preview.adoc[leveloffset=+1]
+
+To use this Technology Preview feature, you must xref:../../hosted_control_planes/hcp-using-feature-gates.adoc#hcp-enable-feature-sets_hcp-using-feature-gates[enable it using feature gates].
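+
+For example, Technology Preview features are typically made available by enabling the `TechPreviewNoUpgrade` feature set. The following `FeatureGate` object is a minimal sketch only; review the linked section first, because enabling the `TechPreviewNoUpgrade` feature set cannot be undone and prevents minor version updates:
+
+[source,yaml]
+----
+apiVersion: config.openshift.io/v1
+kind: FeatureGate
+metadata:
+  name: cluster <1>
+spec:
+  featureSet: TechPreviewNoUpgrade <2>
+----
+<1> The cluster-scoped `FeatureGate` object is always named `cluster`.
+<2> The feature set that enables Technology Preview features.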
+ +include::modules/persistent-storage-csi-group-snapshots-overview.adoc[leveloffset=+1] + +include::modules/persistent-storage-csi-group-snapshots-limitations.adoc[leveloffset=+1] + +include::modules/persistent-storage-csi-group-snapshot-create-admin.adoc[leveloffset=+1] + +include::modules/persistent-storage-csi-group-snapshots-create.adoc[leveloffset=+1] + +include::modules/persistent-storage-csi-group-snapshots-restore.adoc[leveloffset=+1] + +== Additional resources +* xref:../../storage/container_storage_interface/persistent-storage-csi-snapshots.adoc#persistent-storage-csi-snapshots[CSI volume snapshots] + +* xref:../../hosted_control_planes/hcp-using-feature-gates.adoc#hcp-enable-feature-sets_hcp-using-feature-gates[Enabling features sets by using feature gates] \ No newline at end of file From 12c5d06d335fc815e8d0cb78aa175ccf544fe8d9 Mon Sep 17 00:00:00 2001 From: Alex Dellapenta Date: Tue, 3 Dec 2024 15:45:27 -0700 Subject: [PATCH 271/669] Rm Hybrid Helm & Java Operators from OSDK docs --- _topic_maps/_topic_map.yml | 15 -- _topic_maps/_topic_map_osd.yml | 15 -- _topic_maps/_topic_map_rosa.yml | 15 -- .../osdk-updating-v125-to-v128.adoc | 23 +-- modules/osdk-bundle-operator.adoc | 1 - modules/osdk-common-prereqs.adoc | 19 --- modules/osdk-create-project.adoc | 21 --- modules/osdk-deploy-olm.adoc | 6 - modules/osdk-hh-create-cr.adoc | 157 ----------------- modules/osdk-hh-create-go-api.adoc | 36 ---- modules/osdk-hh-create-helm-api.adoc | 34 ---- modules/osdk-hh-create-project.adoc | 38 ----- modules/osdk-hh-defining-go-api.adoc | 66 ------- modules/osdk-hh-helm-api-logic.adoc | 23 --- modules/osdk-hh-helm-reconciler.adoc | 38 ----- modules/osdk-hh-implement-controller.adoc | 13 -- modules/osdk-hh-main-go.adoc | 79 --------- modules/osdk-hh-project-layout.adoc | 65 ------- modules/osdk-hh-rbac.adoc | 132 -------------- ...osdk-java-controller-labels-memcached.adoc | 19 --- ...-java-controller-memcached-deployment.adoc | 49 ------ .../osdk-java-controller-reconcile-loop.adoc | 74 -------- modules/osdk-java-create-api-controller.adoc | 57 ------- modules/osdk-java-create-cr.adoc | 23 --- modules/osdk-java-define-api.adoc | 70 -------- modules/osdk-java-generate-crd.adoc | 66 ------- modules/osdk-java-implement-controller.adoc | 161 ------------------ modules/osdk-java-project-layout.adoc | 37 ---- modules/osdk-project-file.adoc | 20 --- modules/osdk-quickstart.adoc | 33 ---- modules/osdk-run-deployment.adoc | 99 ----------- modules/osdk-run-locally.adoc | 108 +----------- modules/osdk-run-operator.adoc | 7 - modules/osdk-updating-128-to-131.adoc | 20 +-- modules/osdk-updating-131-to-1361.adoc | 22 +-- operators/index.adoc | 3 +- .../osdk-hybrid-helm-updating-projects.adoc | 24 --- .../operator_sdk/helm/osdk-hybrid-helm.adoc | 72 -------- operators/operator_sdk/java/_attributes | 1 - operators/operator_sdk/java/images | 1 - operators/operator_sdk/java/modules | 1 - .../java/osdk-java-project-layout.adoc | 15 -- .../java/osdk-java-quickstart.adoc | 29 ---- .../operator_sdk/java/osdk-java-tutorial.adoc | 87 ---------- .../java/osdk-java-updating-projects.adoc | 21 --- operators/operator_sdk/java/snippets | 1 - snippets/osdk-deprecation.adoc | 6 - 47 files changed, 9 insertions(+), 1913 deletions(-) delete mode 100644 modules/osdk-hh-create-cr.adoc delete mode 100644 modules/osdk-hh-create-go-api.adoc delete mode 100644 modules/osdk-hh-create-helm-api.adoc delete mode 100644 modules/osdk-hh-create-project.adoc delete mode 100644 modules/osdk-hh-defining-go-api.adoc delete mode 100644 
modules/osdk-hh-helm-api-logic.adoc delete mode 100644 modules/osdk-hh-helm-reconciler.adoc delete mode 100644 modules/osdk-hh-implement-controller.adoc delete mode 100644 modules/osdk-hh-main-go.adoc delete mode 100644 modules/osdk-hh-project-layout.adoc delete mode 100644 modules/osdk-hh-rbac.adoc delete mode 100644 modules/osdk-java-controller-labels-memcached.adoc delete mode 100644 modules/osdk-java-controller-memcached-deployment.adoc delete mode 100644 modules/osdk-java-controller-reconcile-loop.adoc delete mode 100644 modules/osdk-java-create-api-controller.adoc delete mode 100644 modules/osdk-java-create-cr.adoc delete mode 100644 modules/osdk-java-define-api.adoc delete mode 100644 modules/osdk-java-generate-crd.adoc delete mode 100644 modules/osdk-java-implement-controller.adoc delete mode 100644 modules/osdk-java-project-layout.adoc delete mode 100644 operators/operator_sdk/helm/osdk-hybrid-helm-updating-projects.adoc delete mode 100644 operators/operator_sdk/helm/osdk-hybrid-helm.adoc delete mode 120000 operators/operator_sdk/java/_attributes delete mode 120000 operators/operator_sdk/java/images delete mode 120000 operators/operator_sdk/java/modules delete mode 100644 operators/operator_sdk/java/osdk-java-project-layout.adoc delete mode 100644 operators/operator_sdk/java/osdk-java-quickstart.adoc delete mode 100644 operators/operator_sdk/java/osdk-java-tutorial.adoc delete mode 100644 operators/operator_sdk/java/osdk-java-updating-projects.adoc delete mode 120000 operators/operator_sdk/java/snippets diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 6c52e1828d2b..6def758a30fe 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2009,21 +2009,6 @@ Topics: File: osdk-helm-updating-projects - Name: Helm support File: osdk-helm-support - - Name: Hybrid Helm Operator - File: osdk-hybrid-helm - - Name: Updating Hybrid Helm-based projects - File: osdk-hybrid-helm-updating-projects - - Name: Java-based Operators - Dir: java - Topics: - - Name: Getting started - File: osdk-java-quickstart - - Name: Tutorial - File: osdk-java-tutorial - - Name: Project layout - File: osdk-java-project-layout - - Name: Updating Java-based projects - File: osdk-java-updating-projects - Name: Defining cluster service versions (CSVs) File: osdk-generating-csvs - Name: Working with bundle images diff --git a/_topic_maps/_topic_map_osd.yml b/_topic_maps/_topic_map_osd.yml index d610d4f33a1f..35eda57f372d 100644 --- a/_topic_maps/_topic_map_osd.yml +++ b/_topic_maps/_topic_map_osd.yml @@ -779,21 +779,6 @@ Topics: File: osdk-helm-updating-projects - Name: Helm support File: osdk-helm-support -# - Name: Hybrid Helm Operator <= Tech Preview -# File: osdk-hybrid-helm -# - Name: Updating Hybrid Helm-based projects <= Tech Preview -# File: osdk-hybrid-helm-updating-projects -# - Name: Java-based Operators <= Tech Preview -# Dir: java -# Topics: -# - Name: Getting started -# File: osdk-java-quickstart -# - Name: Tutorial -# File: osdk-java-tutorial -# - Name: Project layout -# File: osdk-java-project-layout -# - Name: Updating Java-based projects -# File: osdk-java-updating-projects - Name: Defining cluster service versions (CSVs) File: osdk-generating-csvs - Name: Working with bundle images diff --git a/_topic_maps/_topic_map_rosa.yml b/_topic_maps/_topic_map_rosa.yml index 9b9596b1c1f7..8cc0aaeef5ec 100644 --- a/_topic_maps/_topic_map_rosa.yml +++ b/_topic_maps/_topic_map_rosa.yml @@ -1023,21 +1023,6 @@ Topics: File: osdk-helm-updating-projects - Name: Helm support 
File: osdk-helm-support -# - Name: Hybrid Helm Operator <= Tech Preview -# File: osdk-hybrid-helm -# - Name: Updating Hybrid Helm-based projects <= Tech Preview -# File: osdk-hybrid-helm-updating-projects -# - Name: Java-based Operators <= Tech Preview -# Dir: java -# Topics: -# - Name: Getting started -# File: osdk-java-quickstart -# - Name: Tutorial -# File: osdk-java-tutorial -# - Name: Project layout -# File: osdk-java-project-layout -# - Name: Updating Java-based projects -# File: osdk-java-updating-projects - Name: Defining cluster service versions (CSVs) File: osdk-generating-csvs - Name: Working with bundle images diff --git a/_unused_topics/osdk-updating-v125-to-v128.adoc b/_unused_topics/osdk-updating-v125-to-v128.adoc index 1ee8e317e20e..1bbbe0a635ab 100644 --- a/_unused_topics/osdk-updating-v125-to-v128.adoc +++ b/_unused_topics/osdk-updating-v125-to-v128.adoc @@ -3,8 +3,7 @@ // * operators/operator_sdk/golang/osdk-golang-updating-projects.adoc // * operators/operator_sdk/ansible/osdk-ansible-updating-projects.adoc // * operators/operator_sdk/helm/osdk-helm-updating-projects.adoc -// * operators/operator_sdk/helm/osdk-hybrid-helm-updating-projects.adoc -// * operators/operator_sdk/java/osdk-java-updating-projects.adoc +// * operators/operator_sdk/helm/ ifeval::["{context}" == "osdk-golang-updating-projects"] :golang: @@ -18,14 +17,6 @@ ifeval::["{context}" == "osdk-helm-updating-projects"] :helm: :type: Helm endif::[] -ifeval::["{context}" == "osdk-hybrid-helm-updating-projects"] -:hybrid: -:type: Hybrid Helm -endif::[] -ifeval::["{context}" == "osdk-java-updating-projects"] -:java: -:type: Java -endif::[] :_mod-docs-content-type: PROCEDURE [id="osdk-upgrading-projects_{context}"] @@ -40,7 +31,7 @@ The following procedure updates an existing {type}-based Operator project for co .Procedure -ifdef::helm,hybrid,java[] +ifdef::helm[] * Find the `ose-kube-rbac-proxy` pull spec in the following files, and update the image tag to `v4.14`: endif::[] ifdef::ansible,golang[] @@ -136,12 +127,4 @@ endif::[] ifeval::["{context}" == "osdk-helm-updating-projects"] :!helm: :!type: -endif::[] -ifeval::["{context}" == "osdk-hybrid-helm-updating-projects"] -:!hybrid: -:!type: -endif::[] -ifeval::["{context}" == "osdk-java-updating-projects"] -:!java: -:!type: -endif::[] +endif::[] \ No newline at end of file diff --git a/modules/osdk-bundle-operator.adoc b/modules/osdk-bundle-operator.adoc index 15fdada31ca4..d27dbcdfdd05 100644 --- a/modules/osdk-bundle-operator.adoc +++ b/modules/osdk-bundle-operator.adoc @@ -1,7 +1,6 @@ // Module included in the following assemblies: // // * operators/operator_sdk/golang/osdk-golang-tutorial.adoc -// * operators/operator_sdk/java/osdk-java-tutorial.adoc // * operators/operator_sdk/ansible/osdk-ansible-tutorial.adoc // * operators/operator_sdk/helm/osdk-helm-tutorial.adoc // * operators/operator_sdk/osdk-working-bundle-images.adoc diff --git a/modules/osdk-common-prereqs.adoc b/modules/osdk-common-prereqs.adoc index ddb14007afb5..a02d83933ea3 100644 --- a/modules/osdk-common-prereqs.adoc +++ b/modules/osdk-common-prereqs.adoc @@ -6,10 +6,7 @@ // * operators/operator_sdk/ansible/osdk-ansible-tutorial.adoc // * operators/operator_sdk/helm/osdk-helm-quickstart.adoc // * operators/operator_sdk/helm/osdk-helm-tutorial.adoc -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc // * operators/operator_sdk/osdk-working-bundle-images.adoc -// * operators/operator_sdk/java/osdk-java-quickstart.adoc -// * operators/operator_sdk/java/osdk-java-tutorial.adoc 
ifeval::["{context}" == "osdk-ansible-quickstart"] :ansible: @@ -23,12 +20,6 @@ endif::[] ifeval::["{context}" == "osdk-golang-tutorial"] :golang: endif::[] -ifeval::["{context}" == "osdk-java-quickstart"] -:java: -endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:java: -endif::[] [id="osdk-common-prereqs_{context}"] = Prerequisites @@ -45,10 +36,6 @@ ifdef::ansible[] * link:https://www.python.org/downloads/[Python] 3.9+ * link:https://pypi.org/project/kubernetes/[Python Kubernetes client] endif::[] -ifdef::java[] -* link:https://java.com/en/download/help/download_options.html[Java] 11+ -* link:https://maven.apache.org/install.html[Maven] 3.6.3+ -endif::[] ifndef::openshift-dedicated,openshift-rosa[] * Logged into an {product-title} {product-version} cluster with `oc` with an account that has `cluster-admin` permissions endif::openshift-dedicated,openshift-rosa[] @@ -69,9 +56,3 @@ endif::[] ifeval::["{context}" == "osdk-golang-tutorial"] :!golang: endif::[] -ifeval::["{context}" == "osdk-java-quickstart"] -:!java: -endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:!java: -endif::[] diff --git a/modules/osdk-create-project.adoc b/modules/osdk-create-project.adoc index b2fec6c03c56..cbff19fe7a0e 100644 --- a/modules/osdk-create-project.adoc +++ b/modules/osdk-create-project.adoc @@ -19,11 +19,6 @@ ifeval::["{context}" == "osdk-helm-tutorial"] :type: Helm :app: nginx endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:java: -:type: Java -:app: memcached -endif::[] :_mod-docs-content-type: PROCEDURE [id="osdk-create-project_{context}"] @@ -63,9 +58,6 @@ endif::[] ifdef::helm[] with the `helm` plugin endif::[] -ifdef::java[] -with the `quarkus` plugin -endif::[] to initialize the project: + [source,terminal,subs="attributes+"] @@ -109,14 +101,6 @@ The `init` command creates the `nginx-operator` project specifically for watchin . For Helm-based projects, the `init` command generates the RBAC rules in the `config/rbac/role.yaml` file based on the resources that would be deployed by the default manifest for the chart. Verify that the rules generated in this file meet the permission requirements of the Operator. 
endif::[] -ifdef::java[] ----- -$ operator-sdk init \ - --plugins=quarkus \ - --domain=example.com \ - --project-name=memcached-operator ----- -endif::[] ifeval::["{context}" == "osdk-golang-tutorial"] :!golang: @@ -133,8 +117,3 @@ ifeval::["{context}" == "osdk-helm-tutorial"] :!type: :!app: endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:!java: -:!type: -:!app: -endif::[] \ No newline at end of file diff --git a/modules/osdk-deploy-olm.adoc b/modules/osdk-deploy-olm.adoc index ee5084d5bf37..9fb528129f17 100644 --- a/modules/osdk-deploy-olm.adoc +++ b/modules/osdk-deploy-olm.adoc @@ -11,9 +11,6 @@ endif::[] ifeval::["{context}" == "osdk-working-bundle-images"] :golang: endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:java: -endif::[] :_mod-docs-content-type: PROCEDURE [id="osdk-deploy-olm_{context}"] @@ -70,7 +67,4 @@ ifeval::["{context}" == "osdk-golang-tutorial"] endif::[] ifeval::["{context}" == "osdk-working-bundle-images"] :!golang: -endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:!java: endif::[] \ No newline at end of file diff --git a/modules/osdk-hh-create-cr.adoc b/modules/osdk-hh-create-cr.adoc deleted file mode 100644 index 6e2706bfc731..000000000000 --- a/modules/osdk-hh-create-cr.adoc +++ /dev/null @@ -1,157 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc - -:_mod-docs-content-type: PROCEDURE -[id="osdk-hh-create-cr_{context}"] -= Creating custom resources - -After your Operator is installed, you can test it by creating custom resources (CRs) that are now provided on the cluster by the Operator. - -.Procedure - -. Change to the namespace where your Operator is installed: -+ -[source,terminal] ----- -$ oc project -system ----- - -. Update the sample `Memcached` CR manifest at the `config/samples/cache_v1_memcached.yaml` file by updating the `replicaCount` field to `3`: -+ -.Example `config/samples/cache_v1_memcached.yaml` file -[%collapsible] -==== -[source,yaml] ----- -apiVersion: cache.my.domain/v1 -kind: Memcached -metadata: - name: memcached-sample -spec: - # Default values copied from /helm-charts/memcached/values.yaml - affinity: {} - autoscaling: - enabled: false - maxReplicas: 100 - minReplicas: 1 - targetCPUUtilizationPercentage: 80 - fullnameOverride: "" - image: - pullPolicy: IfNotPresent - repository: nginx - tag: "" - imagePullSecrets: [] - ingress: - annotations: {} - className: "" - enabled: false - hosts: - - host: chart-example.local - paths: - - path: / - pathType: ImplementationSpecific - tls: [] - nameOverride: "" - nodeSelector: {} - podAnnotations: {} - podSecurityContext: {} - replicaCount: 3 - resources: {} - securityContext: {} - service: - port: 80 - type: ClusterIP - serviceAccount: - annotations: {} - create: true - name: "" - tolerations: [] ----- -==== - -. Create the `Memcached` CR: -+ -[source,terminal] ----- -$ oc apply -f config/samples/cache_v1_memcached.yaml ----- - -. Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size: -+ -[source,terminal] ----- -$ oc get pods ----- -+ -.Example output -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 18m -memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 18m -memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 18m ----- - -. 
Update the sample `MemcachedBackup` CR manifest at the `config/samples/cache_v1_memcachedbackup.yaml` file by updating the `size` to `2`: -+ -.Example `config/samples/cache_v1_memcachedbackup.yaml` file -[%collapsible] -==== -[source,yaml] ----- -apiVersion: cache.my.domain/v1 -kind: MemcachedBackup -metadata: - name: memcachedbackup-sample -spec: - size: 2 ----- -==== - -. Create the `MemcachedBackup` CR: -+ -[source,terminal] ----- -$ oc apply -f config/samples/cache_v1_memcachedbackup.yaml ----- - -. Ensure that the count of `memcachedbackup` pods is the same as specified in the CR: -+ -[source,terminal] ----- -$ oc get pods ----- -+ -.Example output -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -memcachedbackup-sample-8649699989-4bbzg 1/1 Running 0 22m -memcachedbackup-sample-8649699989-mq6mx 1/1 Running 0 22m ----- - -. You can update the `spec` in each of the above CRs, and then apply them again. The controller reconciles again and ensures that the size of the pods is as specified in the `spec` of the respective CRs. - -. Clean up the resources that have been created as part of this tutorial: - -.. Delete the `Memcached` resource: -+ -[source,terminal] ----- -$ oc delete -f config/samples/cache_v1_memcached.yaml ----- - -.. Delete the `MemcachedBackup` resource: -+ -[source,terminal] ----- -$ oc delete -f config/samples/cache_v1_memcachedbackup.yaml ----- - -.. If you used the `make deploy` command to test the Operator, run the following command: -+ -[source,terminal] ----- -$ make undeploy ----- diff --git a/modules/osdk-hh-create-go-api.adoc b/modules/osdk-hh-create-go-api.adoc deleted file mode 100644 index e09f106a62cd..000000000000 --- a/modules/osdk-hh-create-go-api.adoc +++ /dev/null @@ -1,36 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc - -:_mod-docs-content-type: PROCEDURE -[id="osdk-hh-create-go-api_{context}"] -= Creating a Go API - -Use the Operator SDK CLI to create a Go API. - -.Procedure - -. Run the following command to create a Go API with group `cache`, version `v1`, and kind `MemcachedBackup`: -+ -[source,terminal] ----- -$ operator-sdk create api \ - --group=cache \ - --version v1 \ - --kind MemcachedBackup \ - --resource \ - --controller \ - --plugins=go/v4 ----- - -. When prompted, enter `y` for creating both resource and controller: -+ -[source,terminal] ----- -$ Create Resource [y/n] -y -Create Controller [y/n] -y ----- - -This procedure generates the `MemcachedBackup` resource API at `api/v1/memcachedbackup_types.go` and the controller at `controllers/memcachedbackup_controller.go`. diff --git a/modules/osdk-hh-create-helm-api.adoc b/modules/osdk-hh-create-helm-api.adoc deleted file mode 100644 index 776299bbf318..000000000000 --- a/modules/osdk-hh-create-helm-api.adoc +++ /dev/null @@ -1,34 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc - -:_mod-docs-content-type: PROCEDURE -[id="osdk-hh-create-helm-api_{context}"] -= Creating a Helm API - -Use the Operator SDK CLI to create a Helm API. 
- -.Procedure - -* Run the following command to create a Helm API with group `cache`, version `v1`, and kind `Memcached`: -+ -[source,terminal] ----- -$ operator-sdk create api \ - --plugins helm.sdk.operatorframework.io/v1 \ - --group cache \ - --version v1 \ - --kind Memcached ----- - -[NOTE] -==== -This procedure also configures your Operator project to watch the `Memcached` resource with API version `v1` and scaffolds a boilerplate Helm chart. Instead of creating the project from the boilerplate Helm chart scaffolded by the Operator SDK, you can alternatively use an existing chart from your local file system or remote chart repository. - -For more details and examples for creating Helm API based on existing or new charts, run the following command: - -[source,terminal] ----- -$ operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help ----- -==== diff --git a/modules/osdk-hh-create-project.adoc b/modules/osdk-hh-create-project.adoc deleted file mode 100644 index d3541e2a58ea..000000000000 --- a/modules/osdk-hh-create-project.adoc +++ /dev/null @@ -1,38 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc - -:_mod-docs-content-type: PROCEDURE -[id="osdk-hh-create-project_{context}"] -= Creating a project - -Use the Operator SDK CLI to create a project called `memcached-operator`. - -.Procedure - -. Create a directory for the project: -+ -[source,terminal] ----- -$ mkdir -p $HOME/github.com/example/memcached-operator ----- - -. Change to the directory: -+ -[source,terminal] ----- -$ cd $HOME/github.com/example/memcached-operator ----- - -. Run the `operator-sdk init` command to initialize the project. This example uses a domain of `my.domain` so that all API groups are `.my.domain`: -+ -[source,terminal] ----- -$ operator-sdk init \ - --plugins=hybrid.helm.sdk.operatorframework.io \ - --project-version="3" \ - --domain my.domain \ - --repo=github.com/example/memcached-operator ----- -+ -The `init` command generates the RBAC rules in the `config/rbac/role.yaml` file based on the resources that would be deployed by the chart's default manifests. Verify that the rules generated in the `config/rbac/role.yaml` file meet your Operator's permission requirements. diff --git a/modules/osdk-hh-defining-go-api.adoc b/modules/osdk-hh-defining-go-api.adoc deleted file mode 100644 index ff50be3741cb..000000000000 --- a/modules/osdk-hh-defining-go-api.adoc +++ /dev/null @@ -1,66 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc - -:_mod-docs-content-type: PROCEDURE -[id="osdk-hh-defining-go-api_{context}"] -= Defining the API - -Define the API for the `MemcachedBackup` custom resource (CR). - -Represent this Go API by defining the `MemcachedBackup` type, which will have a `MemcachedBackupSpec.Size` field to set the quantity of Memcached backup instances (CRs) to be deployed, and a `MemcachedBackupStatus.Nodes` field to store a CR's pod names. - -[NOTE] -==== -The `Node` field is used to illustrate an example of a `Status` field. -==== - -.Procedure - -. 
Define the API for the `MemcachedBackup` CR by modifying the Go type definitions in the `api/v1/memcachedbackup_types.go` file to have the following `spec` and `status`: -+ -.Example `api/v1/memcachedbackup_types.go` file -[%collapsible] -==== -[source,golang] ----- -// MemcachedBackupSpec defines the desired state of MemcachedBackup -type MemcachedBackupSpec struct { - // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster - // Important: Run "make" to regenerate code after modifying this file - - //+kubebuilder:validation:Minimum=0 - // Size is the size of the memcached deployment - Size int32 `json:"size"` -} - -// MemcachedBackupStatus defines the observed state of MemcachedBackup -type MemcachedBackupStatus struct { - // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster - // Important: Run "make" to regenerate code after modifying this file - // Nodes are the names of the memcached pods - Nodes []string `json:"nodes"` -} ----- -==== - -. Update the generated code for the resource type: -+ -[source,terminal] ----- -$ make generate ----- -+ -[TIP] -==== -After you modify a `*_types.go` file, you must run the `make generate` command to update the generated code for that resource type. -==== - -. After the API is defined with `spec` and `status` fields and CRD validation markers, generate and update the CRD manifests: -+ -[source,terminal] ----- -$ make manifests ----- - -This Makefile target invokes the `controller-gen` utility to generate the CRD manifests in the `config/crd/bases/cache.my.domain_memcachedbackups.yaml` file. diff --git a/modules/osdk-hh-helm-api-logic.adoc b/modules/osdk-hh-helm-api-logic.adoc deleted file mode 100644 index cfa3f703b62a..000000000000 --- a/modules/osdk-hh-helm-api-logic.adoc +++ /dev/null @@ -1,23 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc - -:_mod-docs-content-type: CONCEPT -[id="osdk-hh-helm-api-logic_{context}"] -= Operator logic for the Helm API - -By default, your scaffolded Operator project watches `Memcached` resource events as shown in the `watches.yaml` file and executes Helm releases using the specified chart. - -.Example `watches.yaml` file -[%collapsible] -==== -[source,yaml] ----- -# Use the 'create api' subcommand to add watches to this file. -- group: cache.my.domain - version: v1 - kind: Memcached - chart: helm-charts/memcached -#+kubebuilder:scaffold:watch ----- -==== diff --git a/modules/osdk-hh-helm-reconciler.adoc b/modules/osdk-hh-helm-reconciler.adoc deleted file mode 100644 index c637f0f15f42..000000000000 --- a/modules/osdk-hh-helm-reconciler.adoc +++ /dev/null @@ -1,38 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc - -:_mod-docs-content-type: CONCEPT -[id="osdk-hh-helm-reconciler_{context}"] -= Custom Helm reconciler configurations using provided library APIs - -A disadvantage of existing Helm-based Operators is the inability to configure the Helm reconciler, because it is abstracted from users. For a Helm-based Operator to reach the Seamless Upgrades capability (level II and later) that reuses an already existing Helm chart, a hybrid between the Go and Helm Operator types adds value. 
- -The APIs provided in the link:https://github.com/operator-framework/helm-operator-plugins[`helm-operator-plugins`] library allow Operator authors to make the following configurations: - -* Customize value mapping based on cluster state -* Execute code in specific events by configuring the reconciler's event recorder -* Customize the reconciler's logger -* Setup `Install`, `Upgrade`, and `Uninstall` annotations to enable Helm's actions to be configured based on the annotations found in custom resources watched by the reconciler -* Configure the reconciler to run with `Pre` and `Post` hooks - -The above configurations to the reconciler can be done in the `main.go` file: - -[%collapsible] -==== -.Example `main.go` file -[source,golang] ----- -// Operator's main.go -// With the help of helpers provided in the library, the reconciler can be -// configured here before starting the controller with this reconciler. -reconciler := reconciler.New( - reconciler.WithChart(*chart), - reconciler.WithGroupVersionKind(gvk), -) - -if err := reconciler.SetupWithManager(mgr); err != nil { - panic(fmt.Sprintf("unable to create reconciler: %s", err)) -} ----- -==== diff --git a/modules/osdk-hh-implement-controller.adoc b/modules/osdk-hh-implement-controller.adoc deleted file mode 100644 index e025d59757f2..000000000000 --- a/modules/osdk-hh-implement-controller.adoc +++ /dev/null @@ -1,13 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc - -:_mod-docs-content-type: CONCEPT -[id="osdk-hh-implement-controller_{context}"] -= Controller implementation - -The controller in this tutorial performs the following actions: - -* Create a `Memcached` deployment if it does not exist. -* Ensure that the deployment size is the same as specified by the `Memcached` CR spec. -* Update the `Memcached` CR status with the names of the `memcached` pods. diff --git a/modules/osdk-hh-main-go.adoc b/modules/osdk-hh-main-go.adoc deleted file mode 100644 index 727c0a3cb4a5..000000000000 --- a/modules/osdk-hh-main-go.adoc +++ /dev/null @@ -1,79 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc - -:_mod-docs-content-type: CONCEPT -[id="osdk-hh-main-go_{context}"] -= Differences in main.go - -For standard Go-based Operators and the Hybrid Helm Operator, the `main.go` file handles the scaffolding the initialization and running of the link:https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/manager#Manager[`Manager`] program for the Go API. For the Hybrid Helm Operator, however, the `main.go` file also exposes the logic for loading the `watches.yaml` file and configuring the Helm reconciler. - -.Example `main.go` file -[%collapsible] -==== -[source,terminal] ----- -... 
- for _, w := range ws { - // Register controller with the factory - reconcilePeriod := defaultReconcilePeriod - if w.ReconcilePeriod != nil { - reconcilePeriod = w.ReconcilePeriod.Duration - } - - maxConcurrentReconciles := defaultMaxConcurrentReconciles - if w.MaxConcurrentReconciles != nil { - maxConcurrentReconciles = *w.MaxConcurrentReconciles - } - - r, err := reconciler.New( - reconciler.WithChart(*w.Chart), - reconciler.WithGroupVersionKind(w.GroupVersionKind), - reconciler.WithOverrideValues(w.OverrideValues), - reconciler.SkipDependentWatches(w.WatchDependentResources != nil && !*w.WatchDependentResources), - reconciler.WithMaxConcurrentReconciles(maxConcurrentReconciles), - reconciler.WithReconcilePeriod(reconcilePeriod), - reconciler.WithInstallAnnotations(annotation.DefaultInstallAnnotations...), - reconciler.WithUpgradeAnnotations(annotation.DefaultUpgradeAnnotations...), - reconciler.WithUninstallAnnotations(annotation.DefaultUninstallAnnotations...), - ) -... ----- -==== - -The manager is initialized with both `Helm` and `Go` reconcilers: - -.Example `Helm` and `Go` reconcilers -[%collapsible] -==== -[source,terminal] ----- -... -// Setup manager with Go API - if err = (&controllers.MemcachedBackupReconciler{ - Client: mgr.GetClient(), - Scheme: mgr.GetScheme(), - }).SetupWithManager(mgr); err != nil { - setupLog.Error(err, "unable to create controller", "controller", "MemcachedBackup") - os.Exit(1) - } - - ... -// Setup manager with Helm API - for _, w := range ws { - - ... - if err := r.SetupWithManager(mgr); err != nil { - setupLog.Error(err, "unable to create controller", "controller", "Helm") - os.Exit(1) - } - setupLog.Info("configured watch", "gvk", w.GroupVersionKind, "chartPath", w.ChartPath, "maxConcurrentReconciles", maxConcurrentReconciles, "reconcilePeriod", reconcilePeriod) - } - -// Start the manager - if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil { - setupLog.Error(err, "problem running manager") - os.Exit(1) - } ----- -==== diff --git a/modules/osdk-hh-project-layout.adoc b/modules/osdk-hh-project-layout.adoc deleted file mode 100644 index d2b37bb865f4..000000000000 --- a/modules/osdk-hh-project-layout.adoc +++ /dev/null @@ -1,65 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc - -:_mod-docs-content-type: REFERENCE -[id="osdk-hh-project-layout_{context}"] -= Project layout - -The Hybrid Helm Operator scaffolding is customized to be compatible with both Helm and Go APIs. - -[options="header",cols="1a,4a"] -|=== - -|File/folders |Purpose - -|`Dockerfile` -|Instructions used by a container engine to build your Operator image with the `make docker-build` command. - -|`Makefile` -|Build file with helper targets to help you work with your project. - -|`PROJECT` -|YAML file containing metadata information for the Operator. Represents the project's configuration and is used to track useful information for the CLI and plugins. - -|`bin/` -|Contains useful binaries such as the `manager` which is used to run your project locally and the `kustomize` utility used for the project configuration. - -|`config/` -|Contains configuration files, including all link:https://kustomize.io/[Kustomize] manifests, to launch your Operator project on a cluster. Plugins might use it to provide functionality. For example, for the Operator SDK to help create your Operator bundle, the CLI looks up the CRDs and CRs which are scaffolded in this directory. 
- -`config/crd/`:: Contains custom resource definitions (CRDs). - -`config/default/`:: Contains a Kustomize base for launching the controller in a standard configuration. - -`config/manager/`:: Contains the manifests to launch your Operator project as pods on the cluster. - -`config/manifests/`:: Contains the base to generate your OLM manifests in the `bundle/` directory. - -`config/prometheus/`:: Contains the manifests required to enable project to serve metrics to Prometheus such as the `ServiceMonitor` resource. - -`config/scorecard/`:: Contains the manifests required to allow you test your project with the scorecard tool. - -`config/rbac/`:: Contains the RBAC permissions required to run your project. - -`config/samples/`:: Contains samples for custom resources. - -|`api/` -|Contains the Go API definition. - -|`internal/controllers/` -|Contains the controllers for the Go API. - -|`hack/` -|Contains utility files, such as the file used to scaffold the license header for your project files. - -|`main.go` -|Main program of the Operator. Instantiates a new manager that registers all custom resource definitions (CRDs) in the `apis/` directory and starts all controllers in the `controllers/` directory. - -|`helm-charts/` -|Contains the Helm charts which can be specified using the `create api` command with the Helm plugin. - -|`watches.yaml` -|Contains group/version/kind (GVK) and Helm chart location. Used to configure the Helm watches. - -|=== diff --git a/modules/osdk-hh-rbac.adoc b/modules/osdk-hh-rbac.adoc deleted file mode 100644 index 8fa72d125914..000000000000 --- a/modules/osdk-hh-rbac.adoc +++ /dev/null @@ -1,132 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc - -:_mod-docs-content-type: CONCEPT -[id="osdk-hh-rbac_{context}"] -= Permissions and RBAC manifests - -The controller requires certain role-based access control (RBAC) permissions to interact with the resources it manages. For the Go API, these are specified with RBAC markers, as shown in the Operator SDK tutorial for standard Go-based Operators. - -For the Helm API, the permissions are scaffolded by default in `roles.yaml`. Currently, however, due to a known issue when the Go API is scaffolded, the permissions for the Helm API are overwritten. As a result of this issue, ensure that the permissions defined in `roles.yaml` match your requirements. - -[NOTE] -==== -This known issue is being tracked in link:https://github.com/operator-framework/helm-operator-plugins/issues/142[]. 
-==== - -The following is an example `role.yaml` for a Memcached Operator: - -.Example `Helm` and `Go` reconcilers -[%collapsible] -==== -[source,yaml] ----- ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: manager-role -rules: -- apiGroups: - - "" - resources: - - namespaces - verbs: - - get -- apiGroups: - - apps - resources: - - deployments - - daemonsets - - replicasets - - statefulsets - verbs: - - create - - delete - - get - - list - - patch - - update - - watch -- apiGroups: - - cache.my.domain - resources: - - memcachedbackups - verbs: - - create - - delete - - get - - list - - patch - - update - - watch -- apiGroups: - - cache.my.domain - resources: - - memcachedbackups/finalizers - verbs: - - create - - delete - - get - - list - - patch - - update - - watch -- apiGroups: - - "" - resources: - - pods - - services - - services/finalizers - - endpoints - - persistentvolumeclaims - - events - - configmaps - - secrets - - serviceaccounts - verbs: - - create - - delete - - get - - list - - patch - - update - - watch -- apiGroups: - - cache.my.domain - resources: - - memcachedbackups/status - verbs: - - get - - patch - - update -- apiGroups: - - policy - resources: - - events - - poddisruptionbudgets - verbs: - - create - - delete - - get - - list - - patch - - update - - watch -- apiGroups: - - cache.my.domain - resources: - - memcacheds - - memcacheds/status - - memcacheds/finalizers - verbs: - - create - - delete - - get - - list - - patch - - update - - watch ----- -==== diff --git a/modules/osdk-java-controller-labels-memcached.adoc b/modules/osdk-java-controller-labels-memcached.adoc deleted file mode 100644 index 2ee93a21ed73..000000000000 --- a/modules/osdk-java-controller-labels-memcached.adoc +++ /dev/null @@ -1,19 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/java/osdk-java-tutorial.adoc - -:_mod-docs-content-type: CONCEPT -[id="osdk-java-controller-labels-memcached_{context}"] -= Defining `labelsForMemcached` - -`labelsForMemcached` is a utility to return a map of the labels to attach to the resources: - -[source,java] ----- - private Map labelsForMemcached(Memcached m) { - Map labels = new HashMap<>(); - labels.put("app", "memcached"); - labels.put("memcached_cr", m.getMetadata().getName()); - return labels; - } ----- \ No newline at end of file diff --git a/modules/osdk-java-controller-memcached-deployment.adoc b/modules/osdk-java-controller-memcached-deployment.adoc deleted file mode 100644 index 6f673b16e57f..000000000000 --- a/modules/osdk-java-controller-memcached-deployment.adoc +++ /dev/null @@ -1,49 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/java/osdk-java-tutorial.adoc - -:_mod-docs-content-type: CONCEPT -[id="osdk-java-controller-memcached-deployment_{context}"] -= Define the `createMemcachedDeployment` - -The `createMemcachedDeployment` method uses the link:https://fabric8.io/[fabric8] `DeploymentBuilder` class: - -[source,java] ----- - private Deployment createMemcachedDeployment(Memcached m) { - Deployment deployment = new DeploymentBuilder() - .withMetadata( - new ObjectMetaBuilder() - .withName(m.getMetadata().getName()) - .withNamespace(m.getMetadata().getNamespace()) - .build()) - .withSpec( - new DeploymentSpecBuilder() - .withReplicas(m.getSpec().getSize()) - .withSelector( - new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) - .withTemplate( - new PodTemplateSpecBuilder() - .withMetadata( - new 
ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) - .withSpec( - new PodSpecBuilder() - .withContainers( - new ContainerBuilder() - .withImage("memcached:1.4.36-alpine") - .withName("memcached") - .withCommand("memcached", "-m=64", "-o", "modern", "-v") - .withPorts( - new ContainerPortBuilder() - .withContainerPort(11211) - .withName("memcached") - .build()) - .build()) - .build()) - .build()) - .build()) - .build(); - deployment.addOwnerReference(m); - return deployment; - } ----- \ No newline at end of file diff --git a/modules/osdk-java-controller-reconcile-loop.adoc b/modules/osdk-java-controller-reconcile-loop.adoc deleted file mode 100644 index ef4a8f5a4532..000000000000 --- a/modules/osdk-java-controller-reconcile-loop.adoc +++ /dev/null @@ -1,74 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/java/osdk-java-tutorial.adoc - -:_mod-docs-content-type: CONCEPT -[id="osdk-java-controller-reconcile-loop_{context}"] -= Reconcile loop - -. Every controller has a reconciler object with a `Reconcile()` method that implements the reconcile loop. The reconcile loop is passed the `Deployment` argument, as shown in the following example: -+ -[source,java] ----- - Deployment deployment = client.apps() - .deployments() - .inNamespace(resource.getMetadata().getNamespace()) - .withName(resource.getMetadata().getName()) - .get(); ----- - -. As shown in the following example, if the `Deployment` is `null`, the deployment needs to be created. After you create the `Deployment`, you can determine if reconciliation is necessary. If there is no need of reconciliation, return the value of `UpdateControl.noUpdate()`, otherwise, return the value of `UpdateControl.updateStatus(resource): -+ -[source, java] ----- - if (deployment == null) { - Deployment newDeployment = createMemcachedDeployment(resource); - client.apps().deployments().create(newDeployment); - return UpdateControl.noUpdate(); - } ----- - -. After getting the `Deployment`, get the current and required replicas, as shown in the following example: -+ -[source,java] ----- - int currentReplicas = deployment.getSpec().getReplicas(); - int requiredReplicas = resource.getSpec().getSize(); ----- - -. If `currentReplicas` does not match the `requiredReplicas`, you must update the `Deployment`, as shown in the following example: -+ -[source,java] ----- - if (currentReplicas != requiredReplicas) { - deployment.getSpec().setReplicas(requiredReplicas); - client.apps().deployments().createOrReplace(deployment); - return UpdateControl.noUpdate(); - } ----- - -. The following example shows how to obtain the list of pods and their names: -+ -[source,java] ----- - List pods = client.pods() - .inNamespace(resource.getMetadata().getNamespace()) - .withLabels(labelsForMemcached(resource)) - .list() - .getItems(); - - List podNames = - pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList()); ----- - -. Check if resources were created and verify podnames with the Memcached resources. 
If a mismatch exists in either of these conditions, perform a reconciliation as shown in the following example: -+ -[source,java] ----- - if (resource.getStatus() == null - || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { - if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); - resource.getStatus().setNodes(podNames); - return UpdateControl.updateResource(resource); - } ----- \ No newline at end of file diff --git a/modules/osdk-java-create-api-controller.adoc b/modules/osdk-java-create-api-controller.adoc deleted file mode 100644 index 0625932185b8..000000000000 --- a/modules/osdk-java-create-api-controller.adoc +++ /dev/null @@ -1,57 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/java/osdk-java-tutorial.adoc - -:_mod-docs-content-type: PROCEDURE -[id="osdk-java-create-api-controller_{context}"] -= Creating an API and controller - -Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller. - -.Procedure - -. Run the following command to create an API: -+ -[source,terminal] ----- -$ operator-sdk create api \ - --plugins=quarkus \// <1> - --group=cache \// <2> - --version=v1 \// <3> - --kind=Memcached <4> ----- -<1> Set the plugin flag to `quarkus`. -<2> Set the group flag to `cache`. -<3> Set the version flag to `v1`. -<4> Set the kind flag to `Memcached`. - -.Verification - -. Run the `tree` command to view the file structure: -+ -[source,terminal] ----- -$ tree ----- -+ -.Example output -[source,terminal] ----- -. -├── Makefile -├── PROJECT -├── pom.xml -└── src - └── main - ├── java - │ └── com - │ └── example - │ ├── Memcached.java - │ ├── MemcachedReconciler.java - │ ├── MemcachedSpec.java - │ └── MemcachedStatus.java - └── resources - └── application.properties - -6 directories, 8 files ----- diff --git a/modules/osdk-java-create-cr.adoc b/modules/osdk-java-create-cr.adoc deleted file mode 100644 index 046050f1812f..000000000000 --- a/modules/osdk-java-create-cr.adoc +++ /dev/null @@ -1,23 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/java/osdk-java-tutorial.adoc - -:_mod-docs-content-type: PROCEDURE -[id="osdk-java-create-cr_{context}"] -= Creating a Custom Resource - -After generating the CRD manifests, you can create the Custom Resource (CR). - -.Procedure -* Create a Memcached CR called `memcached-sample.yaml`: -+ -[source,yaml] ----- -apiVersion: cache.example.com/v1 -kind: Memcached -metadata: - name: memcached-sample -spec: - # Add spec fields here - size: 1 ----- \ No newline at end of file diff --git a/modules/osdk-java-define-api.adoc b/modules/osdk-java-define-api.adoc deleted file mode 100644 index 5392b6e6489b..000000000000 --- a/modules/osdk-java-define-api.adoc +++ /dev/null @@ -1,70 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/operator_sdk/java/osdk-java-tutorial.adoc - -:_mod-docs-content-type: PROCEDURE -[id="osdk-java-define-api_{context}"] -= Defining the API - -Define the API for the `Memcached` custom resource (CR). - -.Procedure -* Edit the following files that were generated as part of the `create api` process: - -.. Update the following attributes in the `MemcachedSpec.java` file to define the desired state of the `Memcached` CR: -+ -[source,java] ----- -public class MemcachedSpec { - - private Integer size; - - public Integer getSize() { - return size; - } - - public void setSize(Integer size) { - this.size = size; - } -} ----- - -.. 
Update the following attributes in the `MemcachedStatus.java` file to define the observed state of the `Memcached` CR:
-+
-[NOTE]
-====
-The example below illustrates a `nodes` status field. It is recommended that you use link:https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties[typical status properties] in practice.
-====
-+
-[source,java]
-----
-import java.util.ArrayList;
-import java.util.List;
-
-public class MemcachedStatus {
-
-    // Add Status information here
-    // Nodes are the names of the memcached pods
-    private List<String> nodes;
-
-    public List<String> getNodes() {
-        if (nodes == null) {
-            nodes = new ArrayList<>();
-        }
-        return nodes;
-    }
-
-    public void setNodes(List<String> nodes) {
-        this.nodes = nodes;
-    }
-}
-----
-
-.. Update the `Memcached.java` file to define the schema for the Memcached API, which extends both the `MemcachedSpec.java` and `MemcachedStatus.java` files:
-+
-[source,java]
-----
-@Version("v1")
-@Group("cache.example.com")
-public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}
-----
\ No newline at end of file
diff --git a/modules/osdk-java-generate-crd.adoc b/modules/osdk-java-generate-crd.adoc
deleted file mode 100644
index 7ae177353f58..000000000000
--- a/modules/osdk-java-generate-crd.adoc
+++ /dev/null
@@ -1,66 +0,0 @@
-// Module included in the following assemblies:
-//
-// * operators/operator_sdk/java/osdk-java-tutorial.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="osdk-java-generate-crd_{context}"]
-= Generating CRD manifests
-
-After the API is defined with the `MemcachedSpec` and `MemcachedStatus` files, you can generate the CRD manifests.
-
-.Procedure
-
-* Run the following command from the `memcached-operator` directory to generate the CRD:
-+
-[source,terminal]
-----
-$ mvn clean install
-----
-
-.Verification
-
-* Verify the contents of the CRD in the `target/kubernetes/memcacheds.cache.example.com-v1.yml` file, as shown in the following example:
-+
-[source,terminal]
-----
-$ cat target/kubernetes/memcacheds.cache.example.com-v1.yml
-----
-+
-.Example output
-[source,yaml]
-----
-# Generated by Fabric8 CRDGenerator, manual edits might get overwritten!
-apiVersion: apiextensions.k8s.io/v1
-kind: CustomResourceDefinition
-metadata:
-  name: memcacheds.cache.example.com
-spec:
-  group: cache.example.com
-  names:
-    kind: Memcached
-    plural: memcacheds
-    singular: memcached
-  scope: Namespaced
-  versions:
-  - name: v1
-    schema:
-      openAPIV3Schema:
-        properties:
-          spec:
-            properties:
-              size:
-                type: integer
-            type: object
-          status:
-            properties:
-              nodes:
-                items:
-                  type: string
-                type: array
-            type: object
-        type: object
-    served: true
-    storage: true
-    subresources:
-      status: {}
-----
\ No newline at end of file
diff --git a/modules/osdk-java-implement-controller.adoc b/modules/osdk-java-implement-controller.adoc
deleted file mode 100644
index be612051ec6a..000000000000
--- a/modules/osdk-java-implement-controller.adoc
+++ /dev/null
@@ -1,161 +0,0 @@
-// Module included in the following assemblies:
-//
-// * operators/operator_sdk/java/osdk-java-tutorial.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="osdk-java-implement-controller_{context}"]
-= Implementing the controller
-
-After creating a new API and controller, you can implement the controller logic.
-
-.Procedure
-
-. Append the following dependency to the `pom.xml` file:
-+
-[source,xml]
-----
-<dependency>
-  <groupId>commons-collections</groupId>
-  <artifactId>commons-collections</artifactId>
-  <version>3.2.2</version>
-</dependency>
-----
-
-. For this example, replace the generated controller file `MemcachedReconciler.java` with the following example implementation:
-+
-.Example `MemcachedReconciler.java`
-[%collapsible]
-====
-[source,java]
-----
-package com.example;
-
-import io.fabric8.kubernetes.client.KubernetesClient;
-import io.javaoperatorsdk.operator.api.reconciler.Context;
-import io.javaoperatorsdk.operator.api.reconciler.Reconciler;
-import io.javaoperatorsdk.operator.api.reconciler.UpdateControl;
-import io.fabric8.kubernetes.api.model.ContainerBuilder;
-import io.fabric8.kubernetes.api.model.ContainerPortBuilder;
-import io.fabric8.kubernetes.api.model.LabelSelectorBuilder;
-import io.fabric8.kubernetes.api.model.ObjectMetaBuilder;
-import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder;
-import io.fabric8.kubernetes.api.model.Pod;
-import io.fabric8.kubernetes.api.model.PodSpecBuilder;
-import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder;
-import io.fabric8.kubernetes.api.model.apps.Deployment;
-import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
-import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder;
-import org.apache.commons.collections.CollectionUtils;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.stream.Collectors;
-
-public class MemcachedReconciler implements Reconciler<Memcached> {
-  private final KubernetesClient client;
-
-  public MemcachedReconciler(KubernetesClient client) {
-    this.client = client;
-  }
-
-  // TODO Fill in the rest of the reconciler
-
-  @Override
-  public UpdateControl<Memcached> reconcile(
-      Memcached resource, Context context) {
-    // TODO: fill in logic
-    // Fetch the Deployment that belongs to this Memcached resource, if it exists.
-    Deployment deployment = client.apps()
-            .deployments()
-            .inNamespace(resource.getMetadata().getNamespace())
-            .withName(resource.getMetadata().getName())
-            .get();
-
-    // No Deployment yet: create one and return without a status update.
-    if (deployment == null) {
-        Deployment newDeployment = createMemcachedDeployment(resource);
-        client.apps().deployments().create(newDeployment);
-        return UpdateControl.noUpdate();
-    }
-
-    int currentReplicas = deployment.getSpec().getReplicas();
-    int requiredReplicas = resource.getSpec().getSize();
-
-    // Scale the Deployment to match the size requested in the CR spec.
-    if (currentReplicas != requiredReplicas) {
-        deployment.getSpec().setReplicas(requiredReplicas);
-        client.apps().deployments().createOrReplace(deployment);
-        return UpdateControl.noUpdate();
-    }

-    List<Pod> pods = client.pods()
-        .inNamespace(resource.getMetadata().getNamespace())
-        .withLabels(labelsForMemcached(resource))
-        .list()
-        .getItems();
-
-    List<String> podNames =
-        pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());
-
-    // Record the current pod names in the CR status if they changed.
-    if (resource.getStatus() == null
-            || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) {
-        if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus());
-        resource.getStatus().setNodes(podNames);
-        return UpdateControl.updateResource(resource);
-    }
-
-    return UpdateControl.noUpdate();
-  }
-
-  private Map<String, String> labelsForMemcached(Memcached m) {
-    Map<String, String> labels = new HashMap<>();
-    labels.put("app", "memcached");
-    labels.put("memcached_cr", m.getMetadata().getName());
-    return labels;
-  }
-
-  private Deployment createMemcachedDeployment(Memcached m) {
-    Deployment deployment = new DeploymentBuilder()
-        .withMetadata(
-            new ObjectMetaBuilder()
-                .withName(m.getMetadata().getName())
-                .withNamespace(m.getMetadata().getNamespace())
-                .build())
-        .withSpec(
-            new DeploymentSpecBuilder()
-                .withReplicas(m.getSpec().getSize())
-                .withSelector(
-                    new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build())
-                .withTemplate(
-                    new PodTemplateSpecBuilder()
-                        .withMetadata(
-                            new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build())
-                        .withSpec(
-                            new PodSpecBuilder()
-                                .withContainers(
-                                    new ContainerBuilder()
-                                        .withImage("memcached:1.4.36-alpine")
-                                        .withName("memcached")
-                                        .withCommand("memcached", "-m=64", "-o", "modern", "-v")
-                                        .withPorts(
-                                            new ContainerPortBuilder()
-                                                .withContainerPort(11211)
-                                                .withName("memcached")
-                                                .build())
-                                        .build())
-                                .build())
-                        .build())
-                .build())
-        .build();
-    // Set the Memcached CR as the owner so the Deployment is garbage collected with it.
-    deployment.addOwnerReference(m);
-    return deployment;
-  }
-}
-----
-====
-+
-The example controller runs the following reconciliation logic for each `Memcached` custom resource (CR):
-+
---
-* Creates a Memcached deployment if it does not exist.
-* Ensures that the deployment size matches the size specified by the `Memcached` CR spec.
-* Updates the `Memcached` CR status with the names of the `memcached` pods.
---
diff --git a/modules/osdk-java-project-layout.adoc b/modules/osdk-java-project-layout.adoc
deleted file mode 100644
index 44d25728830a..000000000000
--- a/modules/osdk-java-project-layout.adoc
+++ /dev/null
@@ -1,37 +0,0 @@
-// Module included in the following assemblies:
-//
-// * operators/operator_sdk/java/osdk-java-project-layout.adoc
-
-:_mod-docs-content-type: REFERENCE
-[id="osdk-java-project-layout_{context}"]
-= Java-based project layout
-
-Java-based Operator projects generated by the `operator-sdk init` command contain the following files and directories:
-
-[options="header",cols="1,4"]
-|===
-
-|File or directory |Purpose
-
-|`pom.xml`
-|File that contains the dependencies required to run the Operator.
-
-|`<domain>/`
-|Directory that contains the files that represent the API. If the domain is `example.com`, this folder is called `example/`.
-
-|`MemcachedReconciler.java`
-|Java file that defines the controller implementation.
-
-|`MemcachedSpec.java`
-|Java file that defines the desired state of the Memcached CR.
-
-|`MemcachedStatus.java`
-|Java file that defines the observed state of the Memcached CR.
-
-|`Memcached.java`
-|Java file that defines the schema for the Memcached API.
-
-|`target/kubernetes/`
-|Directory that contains the CRD YAML files.
- -|=== diff --git a/modules/osdk-project-file.adoc b/modules/osdk-project-file.adoc index 5239f3abe1a7..8f0f27d51877 100644 --- a/modules/osdk-project-file.adoc +++ b/modules/osdk-project-file.adoc @@ -3,7 +3,6 @@ // * operators/operator_sdk/golang/osdk-golang-tutorial.adoc // * operators/operator_sdk/ansible/osdk-ansible-tutorial.adoc // * operators/operator_sdk/helm/osdk-helm-tutorial.adoc -// * operators/operator_sdk/java/osdk-java-tutorial.adoc ifeval::["{context}" == "osdk-golang-tutorial"] :golang: @@ -20,11 +19,6 @@ ifeval::["{context}" == "osdk-helm-tutorial"] :type: Helm :app: nginx endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:java: -:type: Java -:app: memcached -endif::[] [id="osdk-project-file_{context}"] = PROJECT file @@ -80,15 +74,6 @@ resources: version: "3" ---- endif::[] -ifdef::java[] ----- -domain: example.com -layout: -- quarkus.javaoperatorsdk.io/v1-alpha -projectName: memcached-operator -version: "3" ----- -endif::[] ifeval::["{context}" == "osdk-golang-tutorial"] :!golang: @@ -104,9 +89,4 @@ ifeval::["{context}" == "osdk-helm-tutorial"] :!helm: :!type: :!app: -endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:!java: -:!type: -:!app: endif::[] \ No newline at end of file diff --git a/modules/osdk-quickstart.adoc b/modules/osdk-quickstart.adoc index 93fe1712cd93..d0df298e44d9 100644 --- a/modules/osdk-quickstart.adoc +++ b/modules/osdk-quickstart.adoc @@ -25,13 +25,6 @@ ifeval::["{context}" == "osdk-helm-quickstart"] :app: nginx :group: demo endif::[] -ifeval::["{context}" == "osdk-java-quickstart"] -:java: -:type: Java -:app-proper: Memcached -:app: memcached -:group: cache -endif::[] :_mod-docs-content-type: PROCEDURE [id="osdk-quickstart_{context}"] @@ -64,9 +57,6 @@ endif::[] ifdef::helm[] with the `helm` plugin endif::[] -ifdef::java[] -with the `quarkus` plugin -endif::[] to initialize the project: + [source,terminal,subs="attributes+"] @@ -92,14 +82,6 @@ $ operator-sdk init \ --plugins=helm ---- endif::[] -ifdef::java[] ----- -$ operator-sdk init \ - --plugins=quarkus \ - --domain=example.com \ - --project-name=memcached-operator ----- -endif::[] . *Create an API.* + @@ -136,15 +118,6 @@ $ operator-sdk create api \ + This API uses the built-in Helm chart boilerplate from the `helm create` command. endif::[] -ifdef::java[] ----- -$ operator-sdk create api \ - --plugins quarkus \ - --group {group} \ - --version v1 \ - --kind {app-proper} ----- -endif::[] . *Build and push the Operator image.* + @@ -251,9 +224,3 @@ ifeval::["{context}" == "osdk-helm-quickstart"] :!app-proper: :!app: endif::[] -ifeval::["{context}" == "osdk-java-quickstart"] -:!java: -:!type: -:!app-proper: -:!app: -endif::[] diff --git a/modules/osdk-run-deployment.adoc b/modules/osdk-run-deployment.adoc index 655b5d4c9327..a3585d25da5b 100644 --- a/modules/osdk-run-deployment.adoc +++ b/modules/osdk-run-deployment.adoc @@ -8,9 +8,6 @@ ifeval::["{context}" == "osdk-golang-tutorial"] :golang: endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:java: -endif::[] :_mod-docs-content-type: PROCEDURE [id="osdk-run-deployment_{context}"] @@ -59,67 +56,17 @@ $ make docker-push IMG=//: The name and tag of the image, for example `IMG=//:`, in both the commands can also be set in your Makefile. Modify the `IMG ?= controller:latest` value to set your default image name. ==== -ifdef::java[] -. 
Run the following command to install the CRD to the default namespace: -+ -[source,terminal] ----- -$ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml ----- -+ -.Example output -[source,terminal] ----- -customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created ----- - -. Create a file called `rbac.yaml` as shown in the following example: -+ -[source,yaml] ----- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: memcached-operator-admin -subjects: -- kind: ServiceAccount - name: memcached-quarkus-operator-operator - namespace: -roleRef: - kind: ClusterRole - name: cluster-admin - apiGroup: "" ----- -+ -[IMPORTANT] -==== -The `rbac.yaml` file will be applied at a later step. -==== - -endif::[] - . Run the following command to deploy the Operator: + [source,terminal] ---- $ make deploy IMG=//: ---- -ifeval::["{context}" != "osdk-java-tutorial"] + By default, this command creates a namespace with the name of your Operator project in the form `-system` and is used for the deployment. This command also installs the RBAC manifests from `config/rbac`. -endif::[] -ifdef::java[] -. Run the following command to grant `cluster-admin` privileges to the `memcached-quarkus-operator-operator` by applying the `rbac.yaml` file created in a previous step: -+ -[source,terminal] ----- -$ oc apply -f rbac.yaml ----- -endif::[] . Run the following command to verify that the Operator is running: + -ifeval::["{context}" != "osdk-java-tutorial"] [source,terminal] ---- $ oc get deployment -n -system @@ -131,53 +78,7 @@ $ oc get deployment -n -system NAME READY UP-TO-DATE AVAILABLE AGE -controller-manager 1/1 1 1 8m ---- -endif::[] -ifdef::java[] -[source,terminal] ----- -$ oc get all -n default ----- -+ -.Example output -[source,terminal] ----- -NAME READY UP-TO-DATE AVAILABLE AGE -pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm 0/1 Running 0 18s ----- - -. Run the following command to apply the `memcached-sample.yaml` and create the `memcached-sample` pod: -+ -[source,terminal] ----- -$ oc apply -f memcached-sample.yaml ----- -+ -.Example output -[source,terminal] ----- -memcached.cache.example.com/memcached-sample created ----- -.Verification - -* Run the following command to confirm the pods have started: -+ -[source,terminal] ----- -$ oc get all ----- -+ -.Example output -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt 1/1 Running 1 79s -pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s ----- -endif::[] ifeval::["{context}" == "osdk-golang-tutorial"] :!golang: endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:!java: -endif::[] diff --git a/modules/osdk-run-locally.adoc b/modules/osdk-run-locally.adoc index fb0d38d13a10..db45a25e710a 100644 --- a/modules/osdk-run-locally.adoc +++ b/modules/osdk-run-locally.adoc @@ -13,9 +13,6 @@ endif::[] ifeval::["{context}" == "osdk-helm-tutorial"] :helm: endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:java: -endif::[] :_mod-docs-content-type: PROCEDURE @@ -25,7 +22,7 @@ endif::[] You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing. 
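Conceptually, running locally means that the controller process connects to the cluster selected by your `~/.kube/config` file instead of running in a pod on the cluster. The following minimal sketch shows that wiring, assuming the `java-operator-sdk` `Operator` class and a fabric8 client on the classpath; it is illustrative only, because the scaffolded project performs the equivalent registration for you, and the `LocalRunner` class is not a generated file:

[source,java]
----
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.javaoperatorsdk.operator.Operator;

// Illustrative local bootstrap; not a file generated by the Operator SDK.
public class LocalRunner {
    public static void main(String[] args) {
        // The client reads connection details from ~/.kube/config,
        // which is what makes this a "local" run against a remote cluster.
        KubernetesClient client = new KubernetesClientBuilder().build();
        Operator operator = new Operator();
        operator.register(new MemcachedReconciler(client)); // watch Memcached resources
        operator.start(); // reconcile until the process exits
    }
}
----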
.Procedure -ifeval::["{context}" != "osdk-java-tutorial"] + * Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your `~/.kube/config` file and run the Operator locally: + [source,terminal] @@ -35,7 +32,6 @@ $ make install run + .Example output [source,terminal] -endif::[] ifdef::golang[] ---- ... @@ -69,104 +65,7 @@ ifdef::helm[] {"level":"info","ts":1612652420.2309358,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting workers","worker count":8} ---- endif::[] -ifdef::java[] -. Run the following command to compile the Operator: -+ -[source,terminal] ----- -$ mvn clean install ----- -+ -.Example output -[source,terminal] ----- -[INFO] ------------------------------------------------------------------------ -[INFO] BUILD SUCCESS -[INFO] ------------------------------------------------------------------------ -[INFO] Total time: 11.193 s -[INFO] Finished at: 2021-05-26T12:16:54-04:00 -[INFO] ------------------------------------------------------------------------ ----- - -. Run the following command to install the CRD to the default namespace: -+ -[source,terminal] ----- -$ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml ----- -+ -.Example output -[source,terminal] ----- -customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created ----- - -. Create a file called `rbac.yaml` as shown in the following example: -+ -[source,yaml] ----- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: memcached-operator-admin -subjects: -- kind: ServiceAccount - name: memcached-quarkus-operator-operator - namespace: -roleRef: - kind: ClusterRole - name: cluster-admin - apiGroup: "" ----- - -. Run the following command to grant `cluster-admin` privileges to the `memcached-quarkus-operator-operator` by applying the `rbac.yaml` file: -+ -[source,terminal] ----- -$ oc apply -f rbac.yaml ----- - -. Enter the following command to run the Operator: -+ -[source,terminal] ----- -$ java -jar target/quarkus-app/quarkus-run.jar ----- -+ -[NOTE] -==== -The `java` command will run the Operator and remain running until you end the process. You will need another terminal to complete the rest of these commands. -==== - -. 
Apply the `memcached-sample.yaml` file with the following command: -+ -[source,terminal] ----- -$ kubectl apply -f memcached-sample.yaml ----- -+ -.Example output -[source,terminal] ----- -memcached.cache.example.com/memcached-sample created ----- -.Verification - -* Run the following command to confirm that the pod has started: -+ -[source,terminal] ----- -$ oc get all ----- -+ -.Example output -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s ----- -endif::[] ifeval::["{context}" == "osdk-golang-tutorial"] :!golang: endif::[] @@ -175,7 +74,4 @@ ifeval::["{context}" == "osdk-ansible-tutorial"] endif::[] ifeval::["{context}" == "osdk-helm-tutorial"] :!helm: -endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:!java: -endif::[] +endif::[] \ No newline at end of file diff --git a/modules/osdk-run-operator.adoc b/modules/osdk-run-operator.adoc index d3e029d5937d..c18c25ce2960 100644 --- a/modules/osdk-run-operator.adoc +++ b/modules/osdk-run-operator.adoc @@ -3,7 +3,6 @@ // * operators/operator_sdk/golang/osdk-golang-tutorial.adoc // * operators/operator_sdk/ansible/osdk-ansible-tutorial.adoc // * operators/operator_sdk/helm/osdk-helm-tutorial.adoc -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc ifeval::["{context}" == "osdk-golang-tutorial"] :golang: @@ -14,9 +13,6 @@ endif::[] ifeval::["{context}" == "osdk-helm-tutorial"] :helm: endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:java: -endif::[] [id="osdk-run-operator_{context}"] = Running the Operator @@ -68,6 +64,3 @@ endif::[] ifeval::["{context}" == "osdk-helm-tutorial"] :!helm: endif::[] -ifeval::["{context}" == "osdk-java-tutorial"] -:!java: -endif::[] diff --git a/modules/osdk-updating-128-to-131.adoc b/modules/osdk-updating-128-to-131.adoc index c70c2c939a5a..a599893acc4d 100644 --- a/modules/osdk-updating-128-to-131.adoc +++ b/modules/osdk-updating-128-to-131.adoc @@ -3,8 +3,6 @@ // * operators/operator_sdk/golang/osdk-golang-updating-projects.adoc // * operators/operator_sdk/ansible/osdk-ansible-updating-projects.adoc // * operators/operator_sdk/helm/osdk-helm-updating-projects.adoc -// * operators/operator_sdk/helm/osdk-hybrid-helm-updating-projects.adoc -// * operators/operator_sdk/java/osdk-java-updating-projects.adoc ifeval::["{context}" == "osdk-golang-updating-projects"] :golang: @@ -18,14 +16,6 @@ ifeval::["{context}" == "osdk-helm-updating-projects"] :helm: :type: Helm endif::[] -ifeval::["{context}" == "osdk-hybrid-helm-updating-projects"] -:hybrid: -:type: Hybrid Helm -endif::[] -ifeval::["{context}" == "osdk-java-updating-projects"] -:java: -:type: Java -endif::[] :_mod-docs-content-type: PROCEDURE [id="osdk-upgrading-projects_{context}"] @@ -40,7 +30,7 @@ The following procedure updates an existing {type}-based Operator project for co .Procedure -ifdef::golang,hybrid,java[] +ifdef::golang[] * Edit your Operator project's Makefile to update the Operator SDK version to {osdk_ver}, as shown in the following example: + .Example Makefile @@ -211,11 +201,3 @@ ifeval::["{context}" == "osdk-helm-updating-projects"] :!helm: :!type: endif::[] -ifeval::["{context}" == "osdk-hybrid-helm-updating-projects"] -:!hybrid: -:!type: -endif::[] -ifeval::["{context}" == "osdk-java-updating-projects"] -:!java: -:!type: -endif::[] diff --git a/modules/osdk-updating-131-to-1361.adoc b/modules/osdk-updating-131-to-1361.adoc index 11884cfbbe69..5d0bafb61a3a 100644 --- a/modules/osdk-updating-131-to-1361.adoc +++ b/modules/osdk-updating-131-to-1361.adoc @@ 
-3,8 +3,6 @@ // * operators/operator_sdk/golang/osdk-golang-updating-projects.adoc // * operators/operator_sdk/ansible/osdk-ansible-updating-projects.adoc // * operators/operator_sdk/helm/osdk-helm-updating-projects.adoc -// * operators/operator_sdk/helm/osdk-hybrid-helm-updating-projects.adoc -// * operators/operator_sdk/java/osdk-java-updating-projects.adoc ifeval::["{context}" == "osdk-golang-updating-projects"] :golang: @@ -18,14 +16,6 @@ ifeval::["{context}" == "osdk-helm-updating-projects"] :helm: :type: Helm endif::[] -ifeval::["{context}" == "osdk-hybrid-helm-updating-projects"] -:hybrid: -:type: Hybrid Helm -endif::[] -ifeval::["{context}" == "osdk-java-updating-projects"] -:java: -:type: Java -endif::[] :_mod-docs-content-type: PROCEDURE [id="osdk-upgrading-projects_{context}"] @@ -70,7 +60,7 @@ ifdef::ansible[] FROM registry.redhat.io/openshift4/ose-ansible-operator:v{product-version} ---- endif::[] -ifdef::golang,hybrid[] +ifdef::golang[] . The `go/v4` plugin is now stable and is the default version used when scaffolding a Go-based Operator. The transition from Golang v2 and v3 plugins to the new Golang v4 plugin introduces significant changes. This migration is designed to enhance your project's functionality and compatibility, reflecting the evolving landscape of Golang development. + For more information on the reasoning behind these changes, see link:https://book.kubebuilder.io/migration/v3vsv4#tldr-of-the-new-gov4-plugin[go/v3 vs go/v4] in the Kubebuilder documentation. @@ -117,7 +107,7 @@ sigs.k8s.io/controller-runtime v0.17.3 $ go mod tidy ---- -ifdef::golang,hybrid[] +ifdef::golang[] .. Projects are now scaffolded with `kube-rbac-proxy` version `0.16.0`. Modify the version of `kube-rbac-proxy` in the scaffolded `config/default/manager_auth_proxy_patch.yaml` file by making the following changes: + [source,diff] @@ -268,11 +258,3 @@ ifeval::["{context}" == "osdk-helm-updating-projects"] :!helm: :!type: endif::[] -ifeval::["{context}" == "osdk-hybrid-helm-updating-projects"] -:!hybrid: -:!type: -endif::[] -ifeval::["{context}" == "osdk-java-updating-projects"] -:!java: -:!type: -endif::[] diff --git a/operators/index.adoc b/operators/index.adoc index 152a841fdeb2..af8ff4565885 100644 --- a/operators/index.adoc +++ b/operators/index.adoc @@ -16,9 +16,8 @@ As an Operator author, you can perform the following development tasks for OLM-b ** xref:../operators/operator_sdk/osdk-installing-cli.adoc#osdk-installing-cli[Install Operator SDK CLI]. // The Operator quickstarts aren't published for OSD/ROSA, so for OSD/ROSA, these xrefs point to the tutorials instead. ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] -** Create xref:../operators/operator_sdk/golang/osdk-golang-quickstart.adoc#osdk-golang-quickstart[Go-based Operators], xref:../operators/operator_sdk/ansible/osdk-ansible-quickstart.adoc#osdk-ansible-quickstart[Ansible-based Operators], xref:../operators/operator_sdk/java/osdk-java-quickstart.adoc#osdk-java-quickstart[Java-based Operators], and xref:../operators/operator_sdk/helm/osdk-helm-quickstart.adoc#osdk-helm-quickstart[Helm-based Operators]. +** Create xref:../operators/operator_sdk/golang/osdk-golang-quickstart.adoc#osdk-golang-quickstart[Go-based Operators], xref:../operators/operator_sdk/ansible/osdk-ansible-quickstart.adoc#osdk-ansible-quickstart[Ansible-based Operators], and xref:../operators/operator_sdk/helm/osdk-helm-quickstart.adoc#osdk-helm-quickstart[Helm-based Operators]. 
endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] -// TODO: When the Java-based Operators is GA, it can be added to the list below for OSD/ROSA. ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] ** Create xref:../operators/operator_sdk/golang/osdk-golang-tutorial.adoc#osdk-golang-tutorial[Go-based Operators], xref:../operators/operator_sdk/ansible/osdk-ansible-tutorial.adoc#osdk-ansible-tutorial[Ansible-based Operators], and xref:../operators/operator_sdk/helm/osdk-helm-tutorial.adoc#osdk-helm-tutorial[Helm-based Operators]. endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] diff --git a/operators/operator_sdk/helm/osdk-hybrid-helm-updating-projects.adoc b/operators/operator_sdk/helm/osdk-hybrid-helm-updating-projects.adoc deleted file mode 100644 index 27d6dd6fdcea..000000000000 --- a/operators/operator_sdk/helm/osdk-hybrid-helm-updating-projects.adoc +++ /dev/null @@ -1,24 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="osdk-hybrid-helm-updating-projects"] -= Updating Hybrid Helm-based projects for newer Operator SDK versions -include::_attributes/common-attributes.adoc[] -:context: osdk-hybrid-helm-updating-projects - -toc::[] - -{product-title} {product-version} supports Operator SDK {osdk_ver}. If you already have the {osdk_ver_n1} CLI installed on your workstation, you can update the CLI to {osdk_ver} by xref:../../../operators/operator_sdk/osdk-installing-cli.adoc#osdk-installing-cli[installing the latest version]. - -include::snippets/osdk-deprecation.adoc[] - -However, to ensure your existing Operator projects maintain compatibility with Operator SDK {osdk_ver}, update steps are required for the associated breaking changes introduced since {osdk_ver_n1}. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with {osdk_ver_n1}. 
- -include::modules/osdk-updating-131-to-1361.adoc[leveloffset=+1] - -[id="additional-resources_osdk-hybrid-helm-upgrading-projects"] -[role="_additional-resources"] -== Additional resources - -* xref:../../../operators/operator_sdk/osdk-pkgman-to-bundle.adoc#osdk-pkgman-to-bundle[Migrating package manifest projects to bundle format] -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.10/html-single/operators/index#osdk-upgrading-v1101-to-v1160_osdk-upgrading-projects[Upgrading projects for Operator SDK 1.16.0] -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/operators/developing-operators#osdk-upgrading-v180-to-v1101_osdk-upgrading-projects[Upgrading projects for Operator SDK v1.10.1] -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/operators/developing-operators#osdk-upgrading-v130-to-v180_osdk-upgrading-projects[Upgrading projects for Operator SDK v1.8.0] diff --git a/operators/operator_sdk/helm/osdk-hybrid-helm.adoc b/operators/operator_sdk/helm/osdk-hybrid-helm.adoc deleted file mode 100644 index 8335c5637581..000000000000 --- a/operators/operator_sdk/helm/osdk-hybrid-helm.adoc +++ /dev/null @@ -1,72 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="osdk-hybrid-helm"] -= Operator SDK tutorial for Hybrid Helm Operators -include::_attributes/common-attributes.adoc[] -:context: osdk-hybrid-helm - -toc::[] - -The standard Helm-based Operator support in the Operator SDK has limited functionality compared to the Go-based and Ansible-based Operator support that has reached the Auto Pilot capability (level V) in the xref:../../../operators/understanding/olm-what-operators-are.adoc#olm-maturity-model_olm-what-operators-are[Operator maturity model]. - -include::snippets/osdk-deprecation.adoc[] - -The Hybrid Helm Operator enhances the existing Helm-based support's abilities through Go APIs. With this hybrid approach of Helm and Go, the Operator SDK enables Operator authors to use the following process: - -* Generate a default structure for, or _scaffold_, a Go API in the same project as Helm. -* Configure the Helm reconciler in the `main.go` file of the project, through the libraries provided by the Hybrid Helm Operator. - -:FeatureName: The Hybrid Helm Operator -include::snippets/technology-preview.adoc[] - -This tutorial walks through the following process using the Hybrid Helm Operator: - -* Create a `Memcached` deployment through a Helm chart if it does not exist -* Ensure that the deployment size is the same as specified by `Memcached` custom resource (CR) spec -* Create a `MemcachedBackup` deployment by using the Go API - -include::modules/osdk-common-prereqs.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources -* xref:../../../operators/operator_sdk/osdk-installing-cli.adoc#osdk-installing-cli[Installing the Operator SDK CLI] -* xref:../../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI] - -include::modules/osdk-hh-create-project.adoc[leveloffset=+1] -.Additional resources - -* This procedure creates a project structure that is compatible with both Helm and Go APIs. To learn more about the project directory structure, see xref:../../../operators/operator_sdk/helm/osdk-hybrid-helm.adoc#osdk-hh-project-layout_osdk-hybrid-helm[Project layout]. 
- -include::modules/osdk-hh-create-helm-api.adoc[leveloffset=+1] -.Additional resources - -* xref:../../../operators/operator_sdk/helm/osdk-helm-tutorial.adoc#osdk-helm-existing-chart_osdk-helm-tutorial[Existing Helm charts] - -include::modules/osdk-hh-helm-api-logic.adoc[leveloffset=+2] -.Additional resources - -* For detailed documentation on customizing the Helm Operator logic through the chart, see xref:../../../operators/operator_sdk/helm/osdk-helm-tutorial.adoc#osdk-helm-logic_osdk-helm-tutorial[Understanding the Operator logic]. - -include::modules/osdk-hh-helm-reconciler.adoc[leveloffset=+2] - -include::modules/osdk-hh-create-go-api.adoc[leveloffset=+1] - -include::modules/osdk-hh-defining-go-api.adoc[leveloffset=+2] - -include::modules/osdk-hh-implement-controller.adoc[leveloffset=+2] - -For a detailed explanation on how to configure the controller to perform the above mentioned actions, see xref:../../../operators/operator_sdk/golang/osdk-golang-tutorial.adoc#osdk-golang-implement-controller_osdk-golang-tutorial[Implementing the controller] in the Operator SDK tutorial for standard Go-based Operators. - -include::modules/osdk-hh-main-go.adoc[leveloffset=+2] - -include::modules/osdk-hh-rbac.adoc[leveloffset=+2] -.Additional resources - -* xref:../../../operators/operator_sdk/golang/osdk-golang-tutorial.adoc#osdk-golang-controller-rbac-markers_osdk-golang-tutorial[RBAC markers for Go-based Operators] - -include::modules/osdk-run-locally.adoc[leveloffset=+1] - -include::modules/osdk-run-deployment.adoc[leveloffset=+1] - -include::modules/osdk-hh-create-cr.adoc[leveloffset=+1] - -include::modules/osdk-hh-project-layout.adoc[leveloffset=+1] diff --git a/operators/operator_sdk/java/_attributes b/operators/operator_sdk/java/_attributes deleted file mode 120000 index bf7c2529fdb4..000000000000 --- a/operators/operator_sdk/java/_attributes +++ /dev/null @@ -1 +0,0 @@ -../../../_attributes/ \ No newline at end of file diff --git a/operators/operator_sdk/java/images b/operators/operator_sdk/java/images deleted file mode 120000 index 4399cbb3c0f3..000000000000 --- a/operators/operator_sdk/java/images +++ /dev/null @@ -1 +0,0 @@ -../../../images/ \ No newline at end of file diff --git a/operators/operator_sdk/java/modules b/operators/operator_sdk/java/modules deleted file mode 120000 index 7e8b50bee77a..000000000000 --- a/operators/operator_sdk/java/modules +++ /dev/null @@ -1 +0,0 @@ -../../../modules/ \ No newline at end of file diff --git a/operators/operator_sdk/java/osdk-java-project-layout.adoc b/operators/operator_sdk/java/osdk-java-project-layout.adoc deleted file mode 100644 index 433c6f07c81e..000000000000 --- a/operators/operator_sdk/java/osdk-java-project-layout.adoc +++ /dev/null @@ -1,15 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="osdk-java-project-layout"] -= Project layout for Java-based Operators -include::_attributes/common-attributes.adoc[] -:context: osdk-java-project-layout -:FeatureName: Java-based Operator SDK -include::snippets/technology-preview.adoc[] - -toc::[] - -The `operator-sdk` CLI can generate, or _scaffold_, a number of packages and files for each Operator project. 
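-
-For orientation, the following sketch shows how the scaffolded classes relate to each other. It reuses the class names from the Memcached tutorial and is illustrative only:
-
-[source,java]
-----
-import io.fabric8.kubernetes.api.model.Namespaced;
-import io.fabric8.kubernetes.client.CustomResource;
-import io.fabric8.kubernetes.model.annotation.Group;
-import io.fabric8.kubernetes.model.annotation.Version;
-
-// MemcachedSpec defines the desired state and MemcachedStatus the observed
-// state; the Memcached resource class ties them together, and
-// MemcachedReconciler (not shown) watches and reconciles this type.
-@Group("cache.example.com")
-@Version("v1")
-public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}
-----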
- -include::snippets/osdk-deprecation.adoc[] - -include::modules/osdk-java-project-layout.adoc[leveloffset=+1] diff --git a/operators/operator_sdk/java/osdk-java-quickstart.adoc b/operators/operator_sdk/java/osdk-java-quickstart.adoc deleted file mode 100644 index 177e97b05dd2..000000000000 --- a/operators/operator_sdk/java/osdk-java-quickstart.adoc +++ /dev/null @@ -1,29 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="osdk-java-quickstart"] -= Getting started with Operator SDK for Java-based Operators -include::_attributes/common-attributes.adoc[] -:context: osdk-java-quickstart -:FeatureName: Java-based Operator SDK -include::snippets/technology-preview.adoc[] - -// This assembly is not included in the OSD and ROSA docs, because it is Tech Preview. However, once Java-based Operator SDK is GA, this assembly will still need to be excluded from OSD and ROSA if it continues to require cluster-admin permissions. - -toc::[] - -To demonstrate the basics of setting up and running a Java-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Java-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster. - -include::snippets/osdk-deprecation.adoc[] - -include::modules/osdk-common-prereqs.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources -* xref:../../../operators/operator_sdk/osdk-installing-cli.adoc#osdk-installing-cli[Installing the Operator SDK CLI] -* xref:../../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI] - -include::modules/osdk-quickstart.adoc[leveloffset=+1] - -[id="next-steps_osdk-java-quickstart"] -== Next steps - -* See xref:../../../operators/operator_sdk/java/osdk-java-tutorial.adoc#osdk-java-tutorial[Operator SDK tutorial for Java-based Operators] for a more in-depth walkthrough on building a Java-based Operator. diff --git a/operators/operator_sdk/java/osdk-java-tutorial.adoc b/operators/operator_sdk/java/osdk-java-tutorial.adoc deleted file mode 100644 index a0d5d5d4a2fb..000000000000 --- a/operators/operator_sdk/java/osdk-java-tutorial.adoc +++ /dev/null @@ -1,87 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="osdk-java-tutorial"] -= Operator SDK tutorial for Java-based Operators -include::_attributes/common-attributes.adoc[] -:context: osdk-java-tutorial -:FeatureName: Java-based Operator SDK -include::snippets/technology-preview.adoc[] - -// This assembly is not currrently included in the OSD and ROSA distros, because it is Tech Preview. However, some conditionalization has been added for OSD and ROSA so that the content will be applicable to those distros once this feature is GA and included in the OSD and ROSA docs. - -toc::[] - -Operator developers can take advantage of Java programming language support in the Operator SDK to build an example Java-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. 
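-
-As an illustrative preview of the end state, after the finished Operator is deployed, a client can manage Memcached instances entirely through the custom resource. The following sketch creates the same object as the `memcached-sample.yaml` file used later in this tutorial. The `CreateMemcachedSample` class is not generated by the SDK, and the `resources()` typed-client call is an assumption based on the fabric8 client API:
-
-[source,java]
-----
-import io.fabric8.kubernetes.api.model.ObjectMetaBuilder;
-import io.fabric8.kubernetes.client.KubernetesClient;
-import io.fabric8.kubernetes.client.KubernetesClientBuilder;
-
-public class CreateMemcachedSample {
-    public static void main(String[] args) {
-        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
-            // Programmatic equivalent of `oc apply -f memcached-sample.yaml`.
-            Memcached sample = new Memcached();
-            sample.setMetadata(new ObjectMetaBuilder().withName("memcached-sample").build());
-            MemcachedSpec spec = new MemcachedSpec();
-            spec.setSize(1); // the reconciler scales the Deployment to this size
-            sample.setSpec(spec);
-            client.resources(Memcached.class).inNamespace("default").resource(sample).create();
-        }
-    }
-}
-----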
- -include::snippets/osdk-deprecation.adoc[] - -This process is accomplished using two centerpieces of the Operator Framework: - -Operator SDK:: The `operator-sdk` CLI tool and `java-operator-sdk` library API - -Operator Lifecycle Manager (OLM):: Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster - -ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] -[NOTE] -==== -This tutorial goes into greater detail than xref:../../../operators/operator_sdk/java/osdk-java-quickstart.adoc#osdk-java-quickstart[Getting started with Operator SDK for Java-based Operators]. -==== -endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] - -// The "Getting started" quickstarts require cluster-admin and are therefore only available in OCP. -ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] -[NOTE] -==== -This tutorial goes into greater detail than link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-java-quickstart[Getting started with Operator SDK for Java-based Operators] in the OpenShift Container Platform documentation. -==== -endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] - -include::modules/osdk-common-prereqs.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources -* xref:../../../operators/operator_sdk/osdk-installing-cli.adoc#osdk-installing-cli[Installing the Operator SDK CLI] -* xref:../../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI] - -include::modules/osdk-create-project.adoc[leveloffset=+1] -include::modules/osdk-project-file.adoc[leveloffset=+2] - -include::modules/osdk-java-create-api-controller.adoc[leveloffset=+1] -include::modules/osdk-java-define-api.adoc[leveloffset=+2] -include::modules/osdk-java-generate-crd.adoc[leveloffset=+2] -include::modules/osdk-java-create-cr.adoc[leveloffset=+2] - -include::modules/osdk-java-implement-controller.adoc[leveloffset=+1] - -The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. You can skip these subsections to go directly to xref:../../../operators/operator_sdk/java/osdk-java-tutorial.adoc#osdk-run-operator_osdk-java-tutorial[Running the Operator]. - -include::modules/osdk-java-controller-reconcile-loop.adoc[leveloffset=+2] -include::modules/osdk-java-controller-labels-memcached.adoc[leveloffset=+2] -include::modules/osdk-java-controller-memcached-deployment.adoc[leveloffset=+2] - -include::modules/osdk-run-operator.adoc[leveloffset=+1] - -ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] -[role="_additional-resources"] -.Additional resources -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-run-locally_osdk-java-tutorial[Running locally outside the cluster] (OpenShift Container Platform documentation) -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/operators/index#osdk-run-deployment_osdk-java-tutorial[Running as a deployment on the cluster] (OpenShift Container Platform documentation) -endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] - -// In OSD/ROSA, the only applicable option for running the Operator is to bundle and deploy with OLM. 
-ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] -include::modules/osdk-run-locally.adoc[leveloffset=+2] -include::modules/osdk-run-deployment.adoc[leveloffset=+2] -endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] - -[id="osdk-bundle-deploy-olm_{context}"] -=== Bundling an Operator and deploying with Operator Lifecycle Manager - -include::modules/osdk-bundle-operator.adoc[leveloffset=+3] -include::modules/osdk-deploy-olm.adoc[leveloffset=+3] - -[role="_additional-resources"] -[id="additional-resources_osdk-java-tutorial"] -== Additional resources - -* See xref:../../../operators/operator_sdk/java/osdk-java-project-layout.adoc#osdk-java-project-layout[Project layout for Java-based Operators] to learn about the directory structures created by the Operator SDK. -* If a xref:../../../networking/enable-cluster-wide-proxy.adoc#enable-cluster-wide-proxy[cluster-wide egress proxy is configured], cluster administrators can xref:../../../operators/admin/olm-configuring-proxy-support.adoc#olm-configuring-proxy-support[override the proxy settings or inject a custom CA certificate] for specific Operators running on Operator Lifecycle Manager (OLM). diff --git a/operators/operator_sdk/java/osdk-java-updating-projects.adoc b/operators/operator_sdk/java/osdk-java-updating-projects.adoc deleted file mode 100644 index 82ed3dbbbaf7..000000000000 --- a/operators/operator_sdk/java/osdk-java-updating-projects.adoc +++ /dev/null @@ -1,21 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="osdk-java-updating-projects"] -= Updating projects for newer Operator SDK versions -include::_attributes/common-attributes.adoc[] -:context: osdk-java-updating-projects - -toc::[] - -{product-title} {product-version} supports Operator SDK {osdk_ver}. If you already have the {osdk_ver_n1} CLI installed on your workstation, you can update the CLI to {osdk_ver} by xref:../../../operators/operator_sdk/osdk-installing-cli.adoc#osdk-installing-cli[installing the latest version]. - -include::snippets/osdk-deprecation.adoc[] - -However, to ensure your existing Operator projects maintain compatibility with Operator SDK {osdk_ver}, update steps are required for the associated breaking changes introduced since {osdk_ver_n1}. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with {osdk_ver_n1}. 
- -include::modules/osdk-updating-131-to-1361.adoc[leveloffset=+1] - -[id="additional-resources_osdk-java-upgrading-projects"] -[role="_additional-resources"] -== Additional resources - -* xref:../../../operators/operator_sdk/osdk-pkgman-to-bundle.adoc#osdk-pkgman-to-bundle[Migrating package manifest projects to bundle format] diff --git a/operators/operator_sdk/java/snippets b/operators/operator_sdk/java/snippets deleted file mode 120000 index ce62fd7c41e2..000000000000 --- a/operators/operator_sdk/java/snippets +++ /dev/null @@ -1 +0,0 @@ -../../../snippets/ \ No newline at end of file diff --git a/snippets/osdk-deprecation.adoc b/snippets/osdk-deprecation.adoc index 32fa5a15f5ff..bdd1c0f2e776 100644 --- a/snippets/osdk-deprecation.adoc +++ b/snippets/osdk-deprecation.adoc @@ -18,12 +18,6 @@ // * operators/operator_sdk/helm/osdk-helm-support.adoc // * operators/operator_sdk/helm/osdk-helm-tutorial.adoc // * operators/operator_sdk/helm/osdk-helm-updating-projects.adoc -// * operators/operator_sdk/helm/osdk-hybrid-helm-updating-projects.adoc -// * operators/operator_sdk/helm/osdk-hybrid-helm.adoc -// * operators/operator_sdk/java/osdk-java-project-layout.adoc -// * operators/operator_sdk/java/osdk-java-quickstart.adoc -// * operators/operator_sdk/java/osdk-java-tutorial.adoc -// * operators/operator_sdk/java/osdk-java-updating-projects.adoc // * operators/operator_sdk/osdk-about.adoc // * operators/operator_sdk/osdk-bundle-validate.adoc // * operators/operator_sdk/osdk-cli-ref.adoc From a532d31d45cb02cf0199fe97c17192e595077f65 Mon Sep 17 00:00:00 2001 From: dfitzmau Date: Wed, 12 Feb 2025 12:34:01 +0000 Subject: [PATCH 272/669] DIAGRAMS-528: Added tenant isolation diagram to UDN docs --- images/528-OpenShift-multitenant-0225.png | Bin 0 -> 44338 bytes .../about-user-defined-networks.adoc | 6 +++++- 2 files changed, 5 insertions(+), 1 deletion(-) create mode 100644 images/528-OpenShift-multitenant-0225.png diff --git a/images/528-OpenShift-multitenant-0225.png b/images/528-OpenShift-multitenant-0225.png new file mode 100644 index 0000000000000000000000000000000000000000..c4585bdc1082f4927bcaab4eeb68badcf9de08b2 GIT binary patch literal 44338 zcmeFZWmJ}H*ENjYvTZ>H3lS-40RbBUQKTD{ZloLAtrAK~t8{m#K}jhM7p0W6NcT4n zpZocqU*C^!jQ7|3zKpT=0OY!^^E}RVtTor1a~&_FZi?>MO1qVcifV`0^()d;R2wp> zsMb1f-iY6rK9ld`usAxbDq!T-x{grKjS@m3A}4JUjS=<84aK+0QIH(w0}|s@pws286jRyd!!R z?S<*+s0RdQTNC;5f4i>c<(#0Kp~AtZa8XpecDAO2RwgXCn^>X9~@+EpHdB=Z!M`d<;J>@^& zQa%5_zy7-v|EH`;;_T;F+2(yOA3c6-N?GddU7LxC>7|E9k@4)n&`|8s{A7K~Du9FU?V`U5qK23@t00JiWZE zW=52b2-(Y)xgV+j9wsUmetnl@sPM8z$2fkP?(@?ysE?G)z2Ecfd!bgv?nlN{R1X}E za&ybp3NMZr=M`<-L95r9Z)Z6oVma~iDtRMO_ru?@3ApX2ez{n=Y}$GF+W`&7_o2=U zV)yUg_Z^-bsAYF?ahV=&zE$$-zyLKK{MeSAX+4=rDfgc4XI0iRFsuu&5Xnsn?a@VWV9)zVW-PP%0_qZ-kt8e zY;W9}deyXWzF(wsU=tP99N*^5iO@YqjwI9s3sQoPti>C3uc{T;H|{l9J+wGG-aI$a zV>Q~wlV|Gn`t`f}J*9)6sYPU`hZ^S}<8Jd_mYkRfwdg6`aS1n1)y*a*E?(CVBa^lx zW3sp61U0pXk*T0z6N|tuJRYmr64%i`eU-W@nu5oUT^ui5OpGY78g8;5Nhvex#D#O_ zbFEws3J&&RyhXRXJlkc7d|2#q;p3wXjj7s|MoC?TPI)QaHqXrazQ$a7x|KI8Nj7G8 zuswB7PLBS<)S&%X&VcvZx2GeXx7MDYbZyHtj@v}Y`)-_nWLNo6M{{$NAKmkycHEN-su_5a$K`o*d)Oe5LpA-;3Y4AsZtd(U7Vw zAvE>va*l9RcJ{dqn>G#iR(R!_{djawQ?u{q&*S6`W2CR{IK*3Z|87ZA(TZ|KMMY$0 zq-0>M*^f`+SV(G-rFSB$OER}_-##N9{_oP#QbY9Z6a0dKjPv7^tXl&$!Cohi@SA`6 zb8etEG;3sTVR4a?F;wvU`SZ36!zebjT)mng{+y--?2Y1%+aIoMDOjGNw5oT#yST2h zP$x~Nn%nmGojm#zh*;g)5IIZB^sNWj;<41RMy;v*ou`tsOaB5e2CBF;Nk$y%YDqJ z`1rC8hideF^_gxnOw#}Pje6eB&Q4kp=X^HR>~{qYvjK_QwrxAn zf>^QH$CGJcHq^jWFj*tmmSq}RJ*-<36mv6JK>ydb_x9ZnH?cH_@>`UqejXnjjKSVf 
z4QNNai0*=zF8#r0fLZ8?#Lzv&F+YXogdYUL(45!`*aR&GXRIh89fV*46PaU&u@O3; z4^ZuEAJQyRQMJEE_W2uNRjepQQ3&99#i%^4+6bP2pW)#tTyj4Yv%+RONQV7IFZBWs z<<+&c=1GuAv7r+C!G{5!jXqBClo&(u*N>goo}3&<5^7|`%3PFJN1*P9@~+*2=w#7! zuVCfL&fT=i;vkOO0w^oo-iVj`5)zkt+%yLgq4ga={n4MF0fp8gq-C4dQbv<8wF}<=OOOdv!~V^y&58D&V8`5(Dt-H$Ftts*#wveaT!a9z#^nNy(1%${YQ|T zkfUY@^GBRsv2+i$8v;X|M?xvE`9?51^jg7MOiMd45rfmx4jFB7=moq_nB=$hD(HPu zMm4F5onC<5`m*zAs`Va`vLuZ_Ee3; zm4+%RsB=&O|7;n9e8C{CSy3k#Q19!rkGL07x|3k8lmJcrdrXWct)ztr=fn{`IzPMh zAmVa!i=PjT*ds(A6ta*(@Rysz>g5$M)v}_ZyB~L+A=-!klodfZS!pepmZmJ_tZ zV_KNzuLbLMW#1F=4of(EZae>R6=kldUD25hwZV^1_b*C(XJE#i9l;5@IULJ}aNHx7 z{R=P3cp~l{%C*s?Y0%g&Ar6#brlDk;kas|NrDn!>P^hY^Z#fiQN=o>TiWy1 ztAg%&Kp??m4~6RtvEqi42!0ctg|;aB9?)}jmR6j$ByOSr zD+KmD+yHNyktVYYc88m}#0vs3a^!suGo^=|Xjg8S*83%uTm`Nw59leSkMI`yT5i|0 z6}7a&i6z@W$vLpT$;m=1S5ewnb#ywl2IOoL+CmPkrdXSx?Vq4(_4*wN zROjcE2^7Tp&>^lkgFC9lZ<8)h`VExQ3_Kj^T!OWp$*WeFjSm2?E`EFe{+TD8=j202 zUI7=}wEcJlu_glYX0PP;tc7I)Gwjv3}p98Ks{*J zS4^epk+H48{g1qJ5wiJdhnuScD-5F@*dDI?)yP27Lj3%<5R#0-e@VXC__;OjKePbk z*eUY!UrEvb?^lWP$$L`EkU&8b8?`TU9Rov#=2h6U;ZVZRUU#4`)gXkMz9m*8QC5#RX&V#abH779rS-^+y+O7P#5*|6YX|)XgTfz(BxR3jX1u|D(tNuEr4<#6B zyioy`E+FD5!3=dF`Y+X>KgxC=MC`bZ`+{1C>29%G3C($Vs?)-Zlo*~DCW7>8@?~6JM{haq`voAVP{U=Rc-D1`u2Je zgMNikWo-f7Wwl z7DJI}s}sjpIL9R|HYDbB&V9<~OSe}&TK_GUI^o0-5&OF%ZK*OYYTl)Tx}@trH8oYl zs;-r)l;MW4U6+3IOx@1=>>FV){X?pG8eS*-b@#5V$q@S$4xHO+yo@jT_mRl?y4Yd_ zVT&6#&$KCA9S;!w8KSRd7e?)x;iathM>!?r2V_UJ{U@lot^ftWgIR^UhU=5vr{Y(C z0|U`vbW@D|?219ll0V6QKvi#IhS(e~pazw_4=vU$tu0dj1rhxn3cUZ7>6K zzO*MOucH{vH(jPWR1I4P2eu0gCcOT4>QglW-?;t07+J%;u4!apwnW;mbM9HE)7fQz z#Okr^DmOtt=_N?@{D7*w&Z@_Y2JNT$o0hqwrS4?<_sq38v83nyM40I1~}u`4hzC# z7U-`k?VDJ&u}SEhl2QpMWE@p9*wk&OB>Lmd>{lbHwD`pfI}ylHUPH{hijV4RM{wKxvZg#Rz1{Es{(FEJTSj}(^ZPo(u3MM zW72M#lzsa3hQUVIXmmZ;o7B`j^|ske%Vj7X&1`Dgz{hjPd@b7&>?){~!I4#~SI^7` z{z!IdAGp(!av$?y^1N8@V^MfB&lo@Jljsw(`XUg!RYJGG0j-e5ED@u3-0~SHZ|cPQ z!ddv9+179*rhyj9!mHiDk)Jl;=TUzQ= zzcn{EAA5B)uk1NZTuN+xql;$m2Fl)`GJ`H0XYXrf-@D9yc!ciZW|xVwQOPnM&t;yb zdy=lloH6vDqdBJdZ?B3eUdtEG)E913`QQmTsz)ES&Z-TbZojj-{k+iz+emWYit@hA z!CHo3Jgv(2XTIKi(@BwIEwvb9FEJ5Xwa*F4Y!te9n|MMbT(xTYir8LrG$ENOI^?(UKzcL2CX*xR&xyl znz-~FrF(F|DmKq--|Xh*wu=6za_R<`o&T=xl|Srw1ONW#=3+E3PqdtRgVtTGLR@%~ znOj7m8BH-YHPswnKJn9yQj?UKc~mfG@U3iT4)k<)TSl4IX8aWawd+B$ek+}ZX52?+ zy2_N?-+rJ`89`#|2a3U|(p1Q)Ora}8-|wI(MS}ah_|QAY;zJP!`_kcL@_FyoG5Xtd zF&*o4Wvy81n`zI)T#pNecJ%|GgTN1`!W%oLGctF3_|q}|=qqO` zJc&T)r(Q5$&&+I9ExV-%A!I%MjY84ktnL8%M<8@vRb5uo6?Qm&)#nIQd^=qiD|Pt1 z^rG!;TJ{6ykM+lh!i#K|z1ywvn85SG{3-O9D+>HU@>YoAC1<3@AECqb09PT+HCtN| z!L0AMW_$Rg#bQq~TrTZ-G&+hr(yR8o_1Sg4bKf-lZG#k%3N-gOaO+ zDVvg)W!Zl5Z=pGAn5x>=CDlm_(YE81)iGs5Gp9ShJz90ktt;+KMv;xjMB(gAyZr4T zx>qs-Pa;*jZkC;9ZxOm3*^PUJZ7VOoC2h2^LngJmU0I~GJ8geir)ejQDEU(3b5 zVta*`B5&8kmgQF;C=|yPdE=HN4^F39eS4OvvLBgMvgA^V1<=dUH74cPxEVfP*_0+I z`6ubR4$z1EfeS~7{rQWS2mfHEo|pb@EqJ)+-x5FtXY7$*IUfIS7Iv2P-*$?)|F2&} zzJrM(k$ca<%-dtJGP5fwa!K^qv7D}33LUx^Fua^kw{m*txX*WQW!v$0ZKXtsCxxAY z@nlTX%%9D6?2Oc6VP-CH`w~a3uxfW0X&>wNUq08iI1yQ_d`9|+V8Oh<_C5+lWh6l- zU!cQvZ0|1au3!3ot(Lcj`afQ3v!W)BS$5{_B3}=^&cM<2MPqytV&|m?oT_tPRTa@S zZTN7c@Wk-)QmaGvMb2y~3u>Rx>Ab7{US6T0Q8#HQ-#srRsLdpF3j`TTm`MxOBHd{73;h@bHtUcv7+rA~YY2}K`w{+RQbJ^FwOn#u5KdH0|4M-fTW;z`# zbAK@EY<5^2l>V99(a^!+>ntfCB5;07mw)H@{Od-Ou}&S^{F`)_@09JLVbIf$-o?yZ zU0=V0FMM(n9j-zB7AxiKw$WrJso&fCs)wrUuxY2iCPnNG4*m4IG5=c3LTE-5@VkOyJuM< z*dv$5cZ`ioyK1vL&pz?eR&9lpt7q1mZE2^UV_ozv`}8#n(~Sit-zl44u&yl>Y|WHn zLLR}*y=qnC83%`|Ti1AtOQH@O-p0(_-Y$9YK$d&i(BbXAv-NonC9_S&Z!`;bGIWl= zzUKepa=2gWqi4)HODXN5wrAgVaxQ+a3=vc;dU1w*TM4^^t3ev$VsnpX=Tx7d1ckCM zN-d$t_OCq+lZo0v1$p|7owFlz-_Atm_8W)^yHU5gKr 
zckTMrSQINYU7+9RH&ixoM&ft1RrNq}%g{mB#V29iPxlJqM9LBl>TK4J+M^{p`7=sg zoKj0il?8$Wdo$RlLb85pkN9^hIlf)DnsR(kUhms=-Djk;4sml|wW_{mRhccMk@9tm zG&L@3NvH!;SxIy(#*rH7cB9(7-K|lnd&F`!VRm?jjz7jG)m~RSeDZN@$=sH!Z(H?e ze?03{T3X`Rs>ONKb^hn-%>U9~kGBz*W>Yfl_+7og$Lp*Vt5|1U zkHK5Tgli?PoXcSaK8*Q&&&pzjc5%yVa(abHP0Wl13AJsFjTmd&?f*hcaCfUvu)77yW{2E z6^4hB+&-E9ywl?E()G91mpCIC2PI1q0Y0nek2YsflOmIhT^trCOH!w9>J;s^oVVl5 z|9dQT_M>m4PqO>mD=#m8EZ*BAPDZ&i$x;@hG!#*Ox1S?RM>>{cEfw;SMOY1v6|mCL z-L|S~m-eR)&&j&>zG72tT0N_0o(Y>RqVzKc7EF7XUY z-IH|E zO3DR8(!M4LI-Sa0y0xW5$$j$EwamumW>&J0#XL%-hU#j&;}vvRGhO>5arBfRRn{( z($%$g2!++>_tXdHi3z&YkCe<5OjLMz%@#N2Z!GFRX*)AH9B4ED<=oNY<)y-9%<_ni z!&beVc_nvOkwB&!nQLGu=OhUl8YpweX>(Gd|5RD(TWv`?fghT5&J{NXBx5~zZ@$yAKU>Q} zwiP(7cm5Jv!CY2puXWiSzn7t*J$cQ61tXPV9n-_2Gc&YWO0J zYs?*3v>LCC=;x*zJl%jz&$g(`R&mR#fD!*=^Qlxbowap-ttgJ=3Tm^uuUbq-K2i^wJln?gVqSE3sK3-zg9aG zU)z!^D*rMRh4NUj>9R<~%tEJw6^+_)ga@zw{*N5{J3FeG5kwyR*wKnfx7xv7C<0%@ zNVT$Tacv4Ovqqk?A>KuK>sxo@iwnOyaq#KGk~tzPMQ(SVoZ=X%c zP-bzb`>(}SY!uJi9p84|=hf+Sp6DYxfK5koy@^#xqoua>qD{ndRL{VO8`g4X0p>P?+RVG<|x%?ylWWVCLSv2`t|>-@e)X*#9xh%#v1FcHC-={jb^ zoHIX_*R|59sNZPOf-C^5&b$JzBRvDh z5_g-*)=bH*9r@~XJMgL!Djv$017OWwO1aTj0mrb#;f##3p4~?~p=SSogSS;Ri z{Mm0*VnvVAib z#$5DErW40Li5GA(GfPwbm$9e%FCR4u(bKI=%_ouR4M;ibFMKc+^2e9jbclB;`}-}=ePHWaDdhC> z!zxkAHB#YEeoab267cQvrugRB4o!at)tjmZ4_)+84i|!do8+{naGo6oeIM?|g zgM3-%Y(>d_$2??&4BAoSInmKJWiy7Yu1(Ad))A6a6t)}@;Cmd&xa2^n$nRXYbX@|=hlhaw6kxI$Y2<=3K`eH_Q8q>paJCsNKNiq8fN(eXAdjjo&T%1?ctvz4MN zeWULCiOa&)53|Rww2%4bFurE=glxHC?fr}$wtcRJ3%lQE9Afd8{><5c8vKT+1wUuW zhB+z%_nCWDUVj`O{=3Si{J#--C`;DPY2b%YZsRd=adkVFp`jt=xBhOc4THV|Ora|=?^8E)B;XHF)nDbV(|`?@5md7pg2RBv*4o=S~QOf+vc zuloA+GtQEJmzi0DW}7X!I)cIa=t59Vs1Im_G2(bixI=WcmvbHmGxMCQlyK-*j`r2~ z!`tTGf!Kj5=emI@jl%N!3xu*{P$4#a*rip?B^cXVc^gKA~03TgD0`HCO03@yPeuz#&?Y|QMU zxO>!?9nrV87RLGDo3)a`UEt`RRH%x?PM8fGla+acmK<4XXn42};3PaIBrq&HcI>c8 z`hmt0wM#GCjYZTpaVN`Q9B@vLSR-7CWq@ zy}jVpPJs6zPoF+zX@R`lKsv38+#blnV9DuI3}P@rA+I9ATuD<-@%;H>4Cc7o-a*s! 
[GIT binary patch data for the new PNG image omitted]
literal 0
HcmV?d00001

diff --git a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc
index 3e09f6ad7ed3..f022d64ff672 100644
--- a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc
+++ b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc
@@ -15,7 +15,7 @@ UDN improves the flexibility and segmentation capabilities of the default Layer
 
 The following diagram shows four cluster namespaces, where each namespace has a single assigned UDN, and each UDN has an assigned custom subnet for its pod IP allocations. The OVN-Kubernetes handles any overlapping UDN subnets. Without using the Kubernetes network policy, a pod attached to a UDN can communicate with other pods in that UDN. By default, these pods are isolated from communicating with pods that exist in other UDNs. For microsegmentation, you can apply the Kubernetes network policy within a UDN.
 You can assign one or more UDNs to a namespace, with a limitation of only one primary UDN to a namespace, and one or more namespaces to a UDN.
 
-image::527-OpenShift-UDN-isolation-012025.png[Namespace isolation concept in a user-defined network (UDN)]
+image::527-OpenShift-UDN-isolation-012025.png[The namespace isolation concept in a user-defined network (UDN)]
 
 [NOTE]
 ====
@@ -24,6 +24,10 @@ Support for the Localnet topology on both primary and secondary networks will be
 
 Unlike a network attachment definition (NAD), which is only namespaced scope, a cluster administrator can use a UDN to create and define additional networks that span multiple namespaces at the cluster level by leveraging the `ClusterUserDefinedNetwork` custom resource (CR). Additionally, a cluster administrator or a cluster user can use a UDN to define additional networks at the namespace level with the `UserDefinedNetwork` CR.
 
+The following diagram shows tenant isolation that a cluster administrator created by defining a `ClusterUserDefinedNetwork` (CR) for each tenant. This network configuration allows a network to span across many namespaces. In the diagram, the `udn-1` disconnected network selects `namespace-1` and `namespace-2`, while the `udn-2` disconnected network selects `namespace-3` and `namespace-4`. A tenant acts as a disconnected network that is isolated from other tenants' networks. Pods from a namespace can communicate with pods in another namespace only if those namespaces exist in the same tenant network.
+
+image::528-OpenShift-multitenant-0225.png[The tenant isolation concept in a user-defined network (UDN)]
+
 The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a `ClusterUserDefinedNetwork` or `UserDefinedNetwork` custom resource, how to create the custom resource, and additional configuration details that might be relevant to your deployment.
 
 // Looks like this may be out for 4.17, but in for 4.18 as of 8/19/24
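As an illustration of the tenant isolation that the patch above documents, a `ClusterUserDefinedNetwork` resource might be sketched as follows. This example is not part of the patch series: the resource name, the selected namespace values, and the subnet are invented to mirror the `udn-1` example in the diagram description, and the field layout assumes the `k8s.ovn.org/v1` schema.

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: udn-1 # hypothetical tenant network name
spec:
  namespaceSelector: # selects every namespace that belongs to this tenant
    matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: In
      values:
      - namespace-1
      - namespace-2
  network:
    topology: Layer2
    layer2:
      role: Primary
      subnets:
      - 10.100.0.0/16 # assumed value; any range that does not overlap the cluster networks
----

A selector-based layout such as this is what lets a single network span several namespaces while remaining isolated from other tenants' networks.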
From f45eca5bcdc8f62a769deb5cac00174af8d739d3 Mon Sep 17 00:00:00 2001
From: srir
Date: Wed, 29 Jan 2025 12:14:56 +0530
Subject: [PATCH 273/669] MULTIARCH#4560: Adding a new API field in images.config.openshift.io

---
 modules/images-configuration-parameters.adoc  | 29 +++++++++++++++++++
 ...nodes-cluster-enabling-features-about.adoc |  1 +
 openshift_images/image-configuration.adoc     |  9 ++++++
 3 files changed, 39 insertions(+)

diff --git a/modules/images-configuration-parameters.adoc b/modules/images-configuration-parameters.adoc
index c1adae16f445..6b0eb4682636 100644
--- a/modules/images-configuration-parameters.adoc
+++ b/modules/images-configuration-parameters.adoc
@@ -47,6 +47,35 @@ pods. For instance, whether or not to allow insecure access. It does not contain
 
 Either `blockedRegistries` or `allowedRegistries` can be set, but not both.
 
+ifndef::openshift-rosa,openshift-dedicated[]
+|`imageStreamImportMode`
+|Controls the import mode behavior of image streams.
+
+You must enable the `TechPreviewNoUpgrade` feature set in the `FeatureGate` custom resource (CR) to enable the `imageStreamImportMode` feature.
+For more information about feature gates, see "Understanding feature gates".
+
+You can set the `imageStreamImportMode` field to either of the following values:
+
+* `Legacy`: Indicates that the legacy behavior must be used. The legacy behavior discards the manifest list and imports a single sub-manifest. In this case, the platform is chosen in the following order of priority:
+. Tag annotations: Determining the platform by using the platform-specific annotations in the image tags.
+. Control plane architecture or the operating system: Selecting the platform based on the architecture or the operating system of the control plane.
+. `linux/amd64`: If no platform is selected by the preceding methods, the `linux/amd64` platform is selected.
+. The first manifest in the list is selected.
+
+* `PreserveOriginal`: Indicates that the original manifest is preserved. The manifest list and its sub-manifests are imported.
+
+If you specify a value for this field, the value is applied to the newly created image stream tags that do not already have this value manually set.
+
+If you do not configure this field, the behavior is decided based on the payload type advertised by the `ClusterVersion` status. In this case, the platform is chosen as follows:
+
+* The single architecture payload implies that the `Legacy` mode is applicable.
+* The multi payload implies that the `PreserveOriginal` mode is applicable.
+
+For information about importing manifest lists, see "Working with manifest lists".
+
+:FeatureName: `imageStreamImportMode`
+include::snippets/technology-preview.adoc[]
+endif::openshift-rosa,openshift-dedicated[]
 |===
 
 [WARNING]
diff --git a/modules/nodes-cluster-enabling-features-about.adoc b/modules/nodes-cluster-enabling-features-about.adoc
index 56dc13302ffa..c702ee9a5a99 100644
--- a/modules/nodes-cluster-enabling-features-about.adoc
+++ b/modules/nodes-cluster-enabling-features-about.adoc
@@ -28,6 +28,7 @@ The following Technology Preview features are enabled by this feature set:
 ** Dynamic Resource Allocation API. Enables a new API for requesting and sharing resources between pods and containers. This is an internal feature that most users do not need to interact with. (`DynamicResourceAllocation`)
 ** Pod security admission enforcement. Enables the restricted enforcement mode for pod security admission. Instead of only logging a warning, pods are rejected if they violate pod security standards. (`OpenShiftPodSecurityAdmission`)
 ** StatefulSet pod availability upgrading limits. Enables users to define the maximum number of statefulset pods unavailable during updates which reduces application downtime. (`MaxUnavailableStatefulSet`)
+** Image mode behavior of image streams. Enables a new API for controlling the import mode behavior of image streams. (`imageStreamImportMode`)
 ** `OVNObservability` resource allows you to verify expected network behavior. Supports the following network APIs: `NetworkPolicy`, `AdminNetworkPolicy`, `BaselineNetworkPolicy`, `UserDefinesdNetwork` isolation, multicast ACLs, and egress firewalls. When enabled, you can view network events in the terminal.
 ** `gcpLabelsTags`
 ** `vSphereStaticIPs`
diff --git a/openshift_images/image-configuration.adoc b/openshift_images/image-configuration.adoc
index 0580146935dc..49666969cd05 100644
--- a/openshift_images/image-configuration.adoc
+++ b/openshift_images/image-configuration.adoc
@@ -10,6 +10,15 @@ Use the following procedure to configure image registries.
 
 include::modules/images-configuration-parameters.adoc[leveloffset=+1]
 
+ifndef::openshift-rosa,openshift-dedicated[]
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../openshift_images/image-streams-manage.adoc#images-imagestream-import-import-mode_image-streams-managing[Working with manifest lists]
+
+* xref:../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling-features-about_nodes-cluster-enabling[Understanding feature gates]
+endif::openshift-rosa,openshift-dedicated[]
+
 include::modules/images-configuration-file.adoc[leveloffset=+1]
 
 include::modules/images-configuration-allowed.adoc[leveloffset=+2]
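As a sketch of how the new API field might be set once the feature gate is enabled, the cluster-scoped image configuration could look like the following. This example is not taken from the patch: the placement of `imageStreamImportMode` under `spec` is an assumption based on how other fields of `images.config.openshift.io` are configured, and the field name itself is the only part confirmed by the patch above.

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  # Assumed field placement; preserves manifest lists when image streams import
  imageStreamImportMode: PreserveOriginal
----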
From 45600c17118ba9c606085253f65a4607201c5ecf Mon Sep 17 00:00:00 2001
From: dfitzmau
Date: Mon, 20 Jan 2025 12:56:43 +0000
Subject: [PATCH 274/669] OCPBUGS-48271: Updated UDN doc to expand on Layer 2 and 3

---
 modules/nw-udn-best-practices.adoc | 8 +++++++-
 modules/nw-udn-cr.adoc             | 5 +++--
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/modules/nw-udn-best-practices.adoc b/modules/nw-udn-best-practices.adoc
index eebd3ef61987..05b0288435e6 100644
--- a/modules/nw-udn-best-practices.adoc
+++ b/modules/nw-udn-best-practices.adoc
@@ -41,4 +41,10 @@ Before setting up a `UserDefinedNetwork` (UDN) resource, you should consider the
 
 * When creating network segmentation, you should only use the NAD resource if user-defined network segmentation cannot be completed using the UDN resource.
 
-* The cluster subnet and services CIDR for a UDN cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default network's join subnet, you must not use that value to configure a UDN `joinSubnets` field. If the default address values are used anywhere in the cluster's networ, you must override it by setting the `joinSubnets` field. For more information, see "Additional configuration details for a UserDefinedNetworks CR".
\ No newline at end of file
+* The cluster subnet and services CIDR for a UDN cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default network's join subnet, you must not use that value to configure a UDN `joinSubnets` field. If the default address values are used anywhere in the network for the cluster, you must override it by setting the `joinSubnets` field. For more information, see "Additional configuration details for a UserDefinedNetworks CR".
+
+* The cluster subnet and services CIDR for a UDN cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default join subnet for the network, you must not use that value to configure a UDN `joinSubnets` field. If the default address values are used anywhere in the network for the cluster, you must override the default values by setting the `joinSubnets` field. For more information, see "Additional configuration details for a UserDefinedNetworks CR".
+
+* A layer 2 topology creates a virtual switch that is distributed across all nodes in a cluster. Virtual machines and pods connect to this virtual switch so that all these components can communicate with each other within the same subnet. If you decide not to specify a layer 2 subnet, then you must manually configure IP addresses for each pod in your cluster. When not specifying a layer 2 subnet, port security is limited to preventing Media Access Control (MAC) spoofing only, and does not include IP spoofing. A layer 2 topology creates a single broadcast domain that can be challenging in large network environments, whereby the topology might cause a broadcast storm that can degrade network performance.
+
+* A layer 3 topology creates a unique layer 2 segment for each node in a cluster. The layer 3 routing mechanism interconnects these segments so that virtual machines and pods that are hosted on different nodes can communicate with each other. A layer 3 topology can effectively manage large broadcast domains by assigning each domain to a specific node, so that broadcast traffic has a reduced scope. To configure a layer 3 topology, you must configure `cidr` and `hostSubnet` parameters.
diff --git a/modules/nw-udn-cr.adoc b/modules/nw-udn-cr.adoc
index 37b85d11ac67..474a40762a83 100644
--- a/modules/nw-udn-cr.adoc
+++ b/modules/nw-udn-cr.adoc
@@ -79,6 +79,7 @@ spec:
     hostSubnet: 24
   - cidr: 2001:db8::/60
     hostSubnet: 64
+# ...
 ----
 <1> Name of your `UserDefinedNetwork` resource. This should not be `default` or duplicate any global namespaces created by the Cluster Network Operator (CNO).
 <2> The `topology` field describes the network configuration; accepted values are `Layer2` and `Layer3`. Specifying a `Layer3` topology type creates a layer 2 segment per node, each with a different subnet. Layer 3 routing is used to interconnect node subnets.
@@ -88,8 +89,8 @@ spec:
 +
 * The `subnets` field is mandatory.
 * The type for the `subnets` field is `cidr` and `hostSubnet`:
-** `cidr` is the cluster subnet and accepts a string value.
-** `hostSubnet` specifies the nodes subnet prefix that the cluster subnet is split to.
+** `cidr` is equivalent to the `clusterNetwork` configuration settings of a cluster. The IP addresses in the CIDR are distributed to pods in the user defined network. This parameter accepts a string value.
+** `hostSubnet` defines the per-node subnet prefix.
 ** For IPv6, only a `/64` length is supported for `hostSubnet`.
 +
 . Apply your request by running the following command:
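The `cidr` and `hostSubnet` parameters that this patch documents might be combined as in the following minimal `UserDefinedNetwork` sketch. The resource name, namespace, and address values here are hypothetical, and the layout assumes the `k8s.ovn.org/v1` schema shown in the `nw-udn-cr.adoc` example above.

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-layer3-sample # hypothetical name
  namespace: example-namespace # hypothetical namespace
spec:
  topology: Layer3
  layer3:
    role: Primary
    subnets:
    - cidr: 10.150.0.0/16 # pool that pod IP addresses are drawn from
      hostSubnet: 24      # per-node prefix carved out of the cidr
----

With these values, each node receives its own `/24` segment from the `/16` pool, which keeps each broadcast domain scoped to a single node as the layer 3 bullet above describes.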
From efb7421ac68bdbd260dffd7b66859227d0ee8aec Mon Sep 17 00:00:00 2001
From: subhtk
Date: Mon, 17 Feb 2025 12:15:19 +0530
Subject: [PATCH 275/669] Added proxy support section in oc-mirror v2

---
 .../mirroring/about-installing-oc-mirror-v2.adoc | 3 +++
 modules/oc-mirror-proxy-support.adoc             | 9 +++++++++
 2 files changed, 12 insertions(+)
 create mode 100644 modules/oc-mirror-proxy-support.adoc

diff --git a/disconnected/mirroring/about-installing-oc-mirror-v2.adoc b/disconnected/mirroring/about-installing-oc-mirror-v2.adoc
index 8424039eae0e..7c7fe06a6183 100644
--- a/disconnected/mirroring/about-installing-oc-mirror-v2.adoc
+++ b/disconnected/mirroring/about-installing-oc-mirror-v2.adoc
@@ -107,6 +107,9 @@ include::modules//oc-mirror-enclave-support-about.adoc[leveloffset=+1]
 // How to mirror to an Enclave
 include::modules/oc-mirror-enclave-support.adoc[leveloffset=+2]
 
+// Proxy support
+include::modules/oc-mirror-proxy-support.adoc[leveloffset=+1]
+
 [role="_additional-resources"]
 .Additional resources
 * xref:../../disconnected/updating/disconnected-update-osus.adoc#updating-disconnected-cluster-osus[Updating a cluster in a disconnected environment using the OpenShift Update Service]
diff --git a/modules/oc-mirror-proxy-support.adoc b/modules/oc-mirror-proxy-support.adoc
new file mode 100644
index 000000000000..fb9e4b5f7f25
--- /dev/null
+++ b/modules/oc-mirror-proxy-support.adoc
@@ -0,0 +1,9 @@
+// Module included in the following assemblies:
+//
+// * installing/disconnected_install/installing-mirroring-disconnected-v2.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="oc-mirror-proxy-support_{context}"]
+= oc-mirror plugin v2 proxy support
+
+The oc-mirror plugin v2 can operate in a proxy-configured environment. The plugin can use system proxy settings to retrieve images for {product-title}, Operator catalog, and the `additionalImages` registry.
\ No newline at end of file
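Because the plugin reads the standard proxy environment variables that Go-based tooling honors, a proxied invocation might look like the following sketch. The proxy host, registry host, and file paths are placeholders rather than values from the patch, and the exact flags should be confirmed against the oc-mirror v2 documentation.

[source,terminal]
----
$ export HTTPS_PROXY=http://proxy.example.com:8080
$ export NO_PROXY=localhost,127.0.0.1,.example.internal
$ oc mirror --v2 -c imageset-config.yaml --workspace file:///var/oc-mirror docker://mirror.example.com:8443
----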
From baaf41c8526d2c50eaecf72db499186857f5eee0 Mon Sep 17 00:00:00 2001
From: subhtk
Date: Mon, 10 Feb 2025 12:57:30 +0530
Subject: [PATCH 276/669] Updated Operator catalog for oc-mirror

---
 .../oc-mirror-operator-catalog-filtering.adoc | 32 +++++++++----------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/modules/oc-mirror-operator-catalog-filtering.adoc b/modules/oc-mirror-operator-catalog-filtering.adoc
index 9b65333b621d..4c0de65dc830 100644
--- a/modules/oc-mirror-operator-catalog-filtering.adoc
+++ b/modules/oc-mirror-operator-catalog-filtering.adoc
@@ -10,10 +10,6 @@ oc-mirror plugin v2 selects the list of bundles for mirroring by processing the
 
 When oc-mirror plugin v2 selects bundles for mirroring, it does not infer Group Version Kind (GVK) or bundle dependencies, omitting them from the mirroring set. Instead, it strictly adheres to the user instructions. You must explicitly specify any required dependent packages and their versions.
 
-Bundle versions typically use semantic versioning standards (SemVer), and you can sort bundles within a channel by version. You can select buncles that fall within a specific range in the `ImageSetConfig`.
-
-This selection algorithm ensures consistent outcomes compared to oc-mirror plugin v1. However, it does not include upgrade graph details, such as `replaces`, `skip`, and `skipRange`. This approach differs from the OLM algorithm. It might mirror more bundles than necessary for upgrading a cluster because of potentially shorter upgrade paths between the `minVersion` and `maxVersion`.
-
 .Use the following table to see what bundle versions are included in different scenarios
 
 [cols="1,2",options="header"]
 |===
 |Scenario |Bundles included in the mirroring set
 
 a|Scenario 1
 
 [source,yaml]
 ----
 mirror:
   operators:
   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
 ----
-|For each package in the catalog, 1 bundle, corresponding to the head version of the default channel for that package.
+|For each package in the catalog, one bundle, corresponding to the head version for each channel of that package.
 
 a|Scenario 2
 
 [source,yaml]
 ----
 mirror:
   operators:
   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
     full: true
 ----
-|All bundles of all channels of the specified catalog
+|All bundles of all channels of the specified catalog.
 
 a|Scenario 3
 
 [source,yaml]
 ----
 mirror:
   operators:
   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
     packages:
     - name: compliance-operator
 ----
-|One bundle, corresponding to the head version of the default channel for that package
+|One bundle, corresponding to the head version for each channel of that package.
 
 a|Scenario 4
 
 [source,yaml]
 ----
 mirror:
   operators:
   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
     full: true
   - packages:
     - name: elasticsearch-operator
 ----
-|All bundles of all channels for the packages specified
+|All bundles of all channels for the packages specified.
 
 a|Scenario 5
 
 [source,yaml]
 ----
 mirror:
   operators:
   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
     packages:
     - name: compliance-operator
       minVersion: 5.6.0
 ----
-|All bundles in the default channel, from the `minVersion`, up to the channel head for that package that do not rely on the shortest path from upgrade the graph.
+|All bundles in all channels, from the `minVersion`, up to the channel head for that package.
 
 a|Scenario 6
 
 [source,yaml]
 ----
 mirror:
   operators:
   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
     packages:
     - name: compliance-operator
       maxVersion: 6.0.0
 ----
-|All bundles in the default channel that are lower than the `maxVersion` for that package.
+|All bundles in all channels that are lower than the `maxVersion` for that package.
 
 a|Scenario 7
 
 [source,yaml]
 ----
 mirror:
   operators:
   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
     packages:
     - name: compliance-operator
       minVersion: 5.6.0
       maxVersion: 6.0.0
 ----
-|All bundles in the default channel, between the `minVersion` and `maxVersion` for that package. The head of the channel is not included, even if multiple channels are included in the filtering.
+|All bundles in all channels, between the `minVersion` and `maxVersion` for that package. The head of the channel is not included, even if multiple channels are included in the filtering.
 
 a|Scenario 8
 
 [source,yaml]
 ----
 mirror:
   operators:
   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
     packages:
     - name: compliance-operator
       channels
       - name: stable
 ----
-|The head bundle for the selected channel of that package.
+|The head bundle for the selected channel of that package. You must use the `defaultChannel` field in case the filtered channels are not the default.
 
 a|Scenario 9
 
 [source,yaml]
 ----
 mirror:
   operators:
   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
     packages:
     - name: compliance-operator
       channels:
       - name: 'stable-v0'
 ----
-|All bundles for the specified packages and channels.
+|All bundles for the packages and channels specified.
+The `defaultChannel` should be used in case the filtered channels are not the default.
 
 a|Scenario 10
 
 [source,yaml]
 ----
 mirror:
   operators:
   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
     packages:
     - name: compliance-operator
       channels:
       - name: stable
       - name: stable-5.5
 ----
 |Head bundles for the selected channels of that package.
 
 a|Scenario 11
 
 [source,yaml]
 ----
 mirror:
   operators:
   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
     packages:
     - name: compliance-operator
       channels:
       - name: stable
         minVersion: 5.6.0
 ----
-|Within the selected channel of that package, all versions starting with the `minVersion` up to the channel head. This scenario does not rely on the shortest path from the upgrade graph.
+|Within the selected channel of that package, all versions starting with the `minVersion` up to the channel head. You must use the `defaultChannel` field in case the filtered channels are not the default.
 
 a|Scenario 12
 
 [source,yaml]
 ----
 mirror:
   operators:
   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
     packages:
     - name: compliance-operator
       channels:
       - name: stable
         maxVersion: 6.0.0
 ----
-|Within the selected channel of that package, all versions up to the `maxVersion` (not relying on the shortest path from the upgrade graph). The head of the channel is not included, even if multiple channels are included in the filtering.
+|Within the selected channel of that package, all versions up to `maxVersion`.
+Head of channel is not included, even if multiple channels are included in the filtering.
+You might see errors if this filtering leads to a channel with multiple heads. You must use the `defaultChannel` field in case the filtered channels are not the default.
 
 a|Scenario 13
 
 [source,yaml]
 ----
 mirror:
   operators:
   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
     packages:
     - name: compliance-operator
       channels:
       - name: stable
         minVersion: 5.6.0
         maxVersion: 6.0.0
 ----
-|Within the selected channel of that package, all versions between the `minVersion` and `maxVersion`, not relying on the shortest path from the upgrade graph. The head of the channel is not included, even if multiple channels are included in the filtering.
+|Within the selected channel of that package, all versions between the `minVersion` and `maxVersion`. The head of channel is not included, even if multiple channels are included in the filtering.
+You might see errors if this filtering leads to a channel with multiple heads. You must use the `defaultChannel` field in case the filtered channels are not the default.
 
 a|Scenario 14
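For the `defaultChannel` guidance that several of the updated rows add, a configuration that filters a non-default channel might be sketched as follows. The channel and version values are illustrative rather than taken from the table, and the `v2alpha1` API version is an assumption that should be checked against the installed plugin.

[source,yaml]
----
apiVersion: mirror.openshift.io/v2alpha1
kind: ImageSetConfiguration
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
    packages:
    - name: compliance-operator
      defaultChannel: stable # required when the filtered channel is not the default
      channels:
      - name: stable-v0      # a non-default channel being filtered
        minVersion: 5.6.0
----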
From 962239c0dafb37a87d8e93bc74b9c04ab1da7990 Mon Sep 17 00:00:00 2001
From: Max Bridges
Date: Tue, 18 Feb 2025 17:58:17 -0500
Subject: [PATCH 277/669] Restore RHOsO monitoring topic to nav

Resolves OSDOCS-13402
---
 _topic_maps/_topic_map.yml                              | 2 ++
 .../monitoring/shiftstack-prometheus-configuration.adoc | 6 +++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml
index 6def758a30fe..e19d4e90daf6 100644
--- a/_topic_maps/_topic_map.yml
+++ b/_topic_maps/_topic_map.yml
@@ -2986,6 +2986,8 @@ Topics:
         File: troubleshooting-monitoring-issues
       - Name: Config map reference for the Cluster Monitoring Operator
         File: config-map-reference-for-the-cluster-monitoring-operator
+      - Name: Monitoring clusters that run on RHOSO
+        File: shiftstack-prometheus-configuration
   - Name: Logging
     Dir: logging
     Distros: openshift-enterprise,openshift-origin
diff --git a/observability/monitoring/shiftstack-prometheus-configuration.adoc b/observability/monitoring/shiftstack-prometheus-configuration.adoc
index 8a2e9f120d5d..61c03872ea7d 100644
--- a/observability/monitoring/shiftstack-prometheus-configuration.adoc
+++ b/observability/monitoring/shiftstack-prometheus-configuration.adoc
@@ -17,14 +17,14 @@ include::modules/monitoring-configuring-shiftstack-remotewrite.adoc[leveloffset=
 [role="_additional-resources"]
 .Additional resources
 
-* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#configuring-remote-write-storage_configuring-the-monitoring-stack[Configuring remote write storage]
-* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#adding-cluster-id-labels-to-metrics_configuring-the-monitoring-stack[Adding cluster ID labels to metrics]
+* xref:../../observability/monitoring/configuring-user-workload-monitoring/configuring-metrics-uwm.adoc#configuring-remote-write-storage_configuring-metrics-uwm[Configuring remote write storage]
+* xref:../../observability/monitoring/about-ocp-monitoring/key-concepts.adoc#adding-cluster-id-labels-to-metrics_key-concepts[Adding cluster ID labels to metrics]
 
 include::modules/monitoring-configuring-shiftstack-scraping.adoc[leveloffset=+1]
 
 [role="_additional-resources"]
 .Additional resources
 
-* xref:../../observability/monitoring/accessing-third-party-monitoring-apis.adoc#monitoring-querying-metrics-by-using-the-federation-endpoint-for-prometheus_accessing-monitoring-apis-by-using-the-cli[Querying metrics by using the federation endpoint for Prometheus]
+* xref:../../observability/monitoring/accessing-metrics/accessing-monitoring-apis-by-using-the-cli.adoc#monitoring-querying-metrics-by-using-the-federation-endpoint-for-prometheus_accessing-monitoring-apis-by-using-the-cli[Querying metrics by using the federation endpoint for Prometheus]
 
 include::modules/monitoring-shiftstack-metrics.adoc[leveloffset=+1]

From 2fce0f50a0c38cc7428e7fc84f7805e2440730fc Mon Sep 17 00:00:00 2001
From: "A.Arnold"
Date: Tue, 18 Feb 2025 18:58:24 +0000
Subject: [PATCH 278/669] OADP-4883: Added 1.4.3 release notes

Signed-off-by: A.Arnold
---
 .../release-notes/oadp-1-4-release-notes.adoc |  3 ++-
 modules/oadp-1-4-3-release-notes.adoc         | 26 +++++++++++++++++++
 2 files changed, 28 insertions(+), 1 deletion(-)
 create mode 100644 modules/oadp-1-4-3-release-notes.adoc

diff --git a/backup_and_restore/application_backup_and_restore/release-notes/oadp-1-4-release-notes.adoc b/backup_and_restore/application_backup_and_restore/release-notes/oadp-1-4-release-notes.adoc
index 3172471d771c..6dc254f42291 100644
--- a/backup_and_restore/application_backup_and_restore/release-notes/oadp-1-4-release-notes.adoc
+++ b/backup_and_restore/application_backup_and_restore/release-notes/oadp-1-4-release-notes.adoc
@@ -14,6 +14,7 @@ The release notes for {oadp-first} describe new features and enhancements, depre
 For additional information about {oadp-short}, see link:https://access.redhat.com/articles/5456281[{oadp-first} FAQs]
 ====
 
+include::modules/oadp-1-4-3-release-notes.adoc[leveloffset=+1]
 include::modules/oadp-1-4-2-release-notes.adoc[leveloffset=+1]
 
 [role="_additional-resources"]
@@ -37,4 +38,4 @@ endif::openshift-rosa-hcp[]
 
 To upgrade from OADP 1.3 to 1.4, no Data Protection Application (DPA) changes are required.
 
-include::modules/oadp-verifying-upgrade-1-4-0.adoc[leveloffset=+2]
\ No newline at end of file
+include::modules/oadp-verifying-upgrade-1-4-0.adoc[leveloffset=+2]
diff --git a/modules/oadp-1-4-3-release-notes.adoc b/modules/oadp-1-4-3-release-notes.adoc
new file mode 100644
index 000000000000..cdf4677725b9
--- /dev/null
+++ b/modules/oadp-1-4-3-release-notes.adoc
@@ -0,0 +1,26 @@
+// Module included in the following assemblies:
+//
+// * backup_and_restore/oadp-1-4-release-notes.adoc
+
+:_mod-docs-content-type: REFERENCE
+
+[id="oadp-1-4-3-release-notes_{context}"]
+= {oadp-short} 1.4.3 release notes
+
+The {oadp-first} 1.4.3 release notes list the following new feature.
+
+[id="new-features-1-4-3_{context}"]
+== New features
+
+.Notable changes in the `kubevirt` velero plugin in version 0.7.1
+
+With this release, the `kubevirt` velero plugin has been updated to version 0.7.1. Notable improvements include the following bug fix and new features:
+
+* Virtual machine instances (VMIs) are no longer ignored from backup when the owner VM is excluded.
+* Object graphs now include all extra objects during backup and restore operations.
+* Optionally generated labels are now added to new firmware Universally Unique Identifiers (UUIDs) during restore operations.
+* Switching VM run strategies during restore operations is now possible.
+* Clearing a MAC address by label is now supported.
+* The restore-specific checks during the backup operation are now skipped.
+* The `VirtualMachineClusterInstancetype` and `VirtualMachineClusterPreference` custom resource definitions (CRDs) are now supported.
+//link:https://issues.redhat.com/browse/OADP-5551[OADP-5551]
\ No newline at end of file
From d9c0b7b8d9f3f7b3e29626d410db37a79d712ec2 Mon Sep 17 00:00:00 2001
From: Olivia Payne
Date: Fri, 31 Jan 2025 09:53:59 -0500
Subject: [PATCH 279/669] OSDOCS#12236: Updates to Welcome and Learn more OCP pages

---
 welcome/index.adoc                      |   1 +
 welcome/learn_more_about_openshift.adoc | 161 +++++++++++++-----------
 2 files changed, 90 insertions(+), 72 deletions(-)

diff --git a/welcome/index.adoc b/welcome/index.adoc
index 830dbafe3784..b634948bafcc 100644
--- a/welcome/index.adoc
+++ b/welcome/index.adoc
@@ -28,6 +28,7 @@ To navigate the {product-title} {product-version} documentation, you can use one
 
 * Use the navigation bar to browse the documentation.
 * Select the task that interests you from xref:../welcome/learn_more_about_openshift.adoc#learn_more_about_openshift[Learn more about {product-title}].
+* {product-title} has a variety of layered offerings to add additional functionality and extend the capabilities of a cluster. For more information, see link:https://access.redhat.com/support/policy/updates/openshift_operators[{product-title} Operator Life Cycles]
 endif::openshift-rosa,openshift-dedicated,openshift-dpu,openshift-telco[]
 
 ifdef::openshift-dpu[]
diff --git a/welcome/learn_more_about_openshift.adoc b/welcome/learn_more_about_openshift.adoc
index efc7f877328b..ac6edd65c4db 100644
--- a/welcome/learn_more_about_openshift.adoc
+++ b/welcome/learn_more_about_openshift.adoc
@@ -8,33 +8,45 @@ toc::[]
 
 Use the following sections to find content to help you learn about and better understand {product-title} functions:
 
+[id="support"]
+== Learning and support
+
+[options="header",cols="2*"]
+|===
+| Learn about {product-title} |Optional additional resources
+
+|link:https://www.openshift.com/learn/whats-new[What's new in {product-title}]
+|link:https://www.openshift.com/blog?hsLang=en-us[OpenShift blog]
+
+|link:https://access.redhat.com/support/policy/updates/openshift[{product-title} Life Cycle Policy]
+|link:https://access.redhat.com/support/policy/updates/openshift#ocp4_phases[{product-title} life cycle]
+
+|link:https://learn.openshift.com/?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[OpenShift Interactive Learning Portal]
+|link:https://access.redhat.com/articles/4217411[OpenShift Knowledgebase articles]
+
+| xref:../support/getting-support.adoc#getting-support[Getting Support]
+| xref:../support/gathering-cluster-data.adoc#gathering-data[Gathering data about your cluster]
+
+|===
+
 [id="architecture"]
 == Architecture
 
-[options="header",cols="3*"]
+[options="header",cols="2*"]
 |===
-| Learn about {product-title} |Plan an {product-title} deployment |Optional additional resources
+| Learn about {product-title} |Optional additional resources
 
 | link:https://www.openshift.com/blog/enterprise-kubernetes-with-openshift-part-one?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[Enterprise Kubernetes with OpenShift]
 | link:https://access.redhat.com/articles/4128421[Tested platforms]
-| link:https://www.openshift.com/blog?hsLang=en-us[OpenShift blog]
 
 | xref:../architecture/architecture.adoc#architecture[Architecture]
 | xref:../security/container_security/security-understanding.adoc#understanding-security[Security and compliance]
-| link:https://www.openshift.com/learn/whats-new[What's new in {product-title}]
 
-|
 | xref:../networking/understanding-networking.adoc#understanding-networking[Networking]
-| link:https://access.redhat.com/support/policy/updates/openshift#ocp4_phases[{product-title} life cycle]
+| xref:../networking/ovn_kubernetes_network_provider/ovn-kubernetes-architecture-assembly.adoc#ovn-kubernetes-architecture-con[OVN-Kubernetes architecture]
 
-|
 | xref:../backup_and_restore/index.adoc#backup-restore-overview[Backup and restore]
-|
-
-| link:https://learn.openshift.com/?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[OpenShift Interactive Learning Portal]
-|
-a|* xref:../support/getting-support.adoc#getting-support[Getting Support]
-* link:https://access.redhat.com/articles/4217411[OpenShift Knowledgebase articles]
+| xref:../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#scenario-2-restoring-cluster-state[Restoring to a previous cluster state]
 
 |===
 
 Explore the following {product-title} installation tasks:
 
 | xref:../installing/overview/installing-preparing.adoc#installing-preparing[Selecting a cluster installation method and preparing it for users]
 | xref:../installing/overview/installing-fips.adoc#installing-fips-mode_installing-fips[Installing a cluster in FIPS mode]
-|
+| xref:../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#agent-installer-fips-compliance_preparing-to-install-with-agent-based-installer[About FIPS compliance]
 
 |===
 
 | Learn about other installer tasks on {product-title} |Optional additional resources
 
-| xref:../installing/validation_and_troubleshooting/installing-troubleshooting.adoc#installing-troubleshooting[Check installation logs]
-|
+| xref:../installing/validation_and_troubleshooting/installing-troubleshooting.adoc#installing-troubleshooting[Troubleshooting installation issues]
+| xref:../installing/validation_and_troubleshooting/validating-an-installation.adoc#validating-an-installation[Validating an installation]
 
 | xref:../storage/persistent_storage/persistent-storage-ocs.adoc#red-hat-openshift-data-foundation[Install {rh-storage-first}]
 | xref:../machine_configuration/mco-coreos-layering.adoc#mco-coreos-layering[{op-system-first} image layering]
 
 |===
 
 a|* xref:../machine_management/index.adoc#machine-api-overview_overview-of-machine...
 
 | Manage xref:../machine_management/index.adoc#machine-mgmt-intro-managing-compute_overview-of-machine-management[compute] and xref:../machine_management/index.adoc#machine-mgmt-intro-managing-control-plane_overview-of-machine-management[control plane] machines with machine sets.
 |
 
 |xref:../machine_management/deploying-machine-health-checks.adoc#deploying-machine-health-checks[Deploying machine health checks]
-|
+| xref:../machine_management/control_plane_machine_management/cpmso-about.adoc#cpmso-about[About control plane machine sets]
 
 |xref:../machine_management/applying-autoscaling.adoc#applying-autoscaling[Applying autoscaling to an {product-title} cluster]
-|
+| xref:../nodes/pods/nodes-pods-priority.adoc#nodes-pods-priority[Including pod priority in pod scheduling decisions]
 
 | xref:../registry/index.adoc#registry-overview[Manage container registries]
 | link:https://access.redhat.com/documentation/en-us/red_hat_quay/[Red Hat Quay]
 
 | xref:../authentication/understanding-authentication.adoc#understanding-authentication[Manage users and groups]
-|
+| xref:../authentication/impersonating-system-admin.adoc#impersonating-system-admin[Impersonating the system:admin user]
 
 | xref:../authentication/understanding-authentication.adoc#understanding-authentication[Manage authentication]
-| xref:../authentication/understanding-identity-provider.adoc#supported-identity-providers[multiple identity providers]
+| xref:../authentication/understanding-identity-provider.adoc#supported-identity-providers[Multiple identity providers]
 
-| Manage xref:../security/certificates/replacing-default-ingress-certificate.adoc#replacing-default-ingress[ingress], xref:../security/certificates/api-server.adoc#api-server-certificates[API server], and xref:../security/certificates/service-serving-certificate.adoc#add-service-serving[service] certificates
-|
+| Manage xref:../security/certificates/replacing-default-ingress-certificate.adoc#replacing-default-ingress[Ingress], xref:../security/certificates/api-server.adoc#api-server-certificates[API server], and xref:../security/certificates/service-serving-certificate.adoc#add-service-serving[Service] certificates
+| xref:../networking/network_security/network-policy-apis.adoc#network-policy-apis[Network security]
 
 | xref:../networking/understanding-networking.adoc#understanding-networking[Manage networking]
 a|* xref:../networking/networking_operators/cluster-network-operator.adoc#nw-cluster-network-operator_cluster-network-operator[Cluster Network Operator]
-* xref:../networking/multiple_networks/understanding-multiple-networks.adoc#understanding-multiple-networks[multiple network interfaces]
-* xref:../networking/network_security/network_policy/about-network-policy.adoc#about-network-policy[network policy]
+* xref:../networking/multiple_networks/understanding-multiple-networks.adoc#understanding-multiple-networks[Multiple network interfaces]
+* xref:../networking/network_security/network_policy/about-network-policy.adoc#about-network-policy[Network policy]
 
 | xref:../operators/understanding/olm-understanding-operatorhub.adoc#olm-understanding-operatorhub[Manage Operators]
-|
+| xref:../operators/user/olm-creating-apps-from-installed-operators.adoc#olm-creating-apps-from-installed-operators[Creating applications from installed Operators]
 
+| xref:../windows_containers/index.adoc#index[{productwinc} overview]
 | xref:../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads_understanding-windows-container-workloads[Understanding Windows container workloads]
-|
 
 |===
 
 a|* xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[...]
 * xref:../disconnected/updating/index.adoc#about-disconnected-updates[Using the OpenShift Update Service in a disconnected environment]
 
 | xref:../operators/understanding/crds/crd-extending-api-with-crds.adoc#crd-extending-api-with-crds[Use custom resource definitions (CRDs) to modify the cluster]
-a|* xref:../operators/understanding/crds/crd-extending-api-with-crds.adoc#crd-creating-custom-resources-definition_crd-extending-api-with-crds[create a CRD]
-* xref:../operators/understanding/crds/crd-managing-resources-from-crds.adoc#crd-managing-resources-from-crds[manage resources from CRDs]
+a|* xref:../operators/understanding/crds/crd-extending-api-with-crds.adoc#crd-creating-custom-resources-definition_crd-extending-api-with-crds[Create a CRD]
+* xref:../operators/understanding/crds/crd-managing-resources-from-crds.adoc#crd-managing-resources-from-crds[Manage resources from CRDs]
 
 | xref:../applications/quotas/quotas-setting-per-project.adoc#quotas-setting-per-project[Set resource quotas]
-| xref:../applications/quotas/quotas-setting-per-project.adoc#quotas-setting-per-project[set quotas]
+| xref:../applications/quotas/quotas-setting-per-project.adoc#quotas-setting-per-project[Set quotas]
 
 | xref:../applications/pruning-objects.adoc#pruning-objects[Prune and reclaim resources]
-|
+| xref:../cicd/builds/advanced-build-operations.adoc#builds-build-pruning-advanced-build-operations[Performing advanced builds]
 
 | xref:../scalability_and_performance/recommended-performance-scale-practices/recommended-infrastructure-practices.adoc#scaling-cluster-monitoring-operator[Scale] and xref:../scalability_and_performance/using-node-tuning-operator.adoc#using-node-tuning-operator[tune] clusters
-|
+| xref:../scalability_and_performance/index.adoc#index[{product-title} scalability and performance]
 
 |===
 
 |Learn about {product-title} |Optional additional resources
 
 | xref:../observability/logging/cluster-logging.adoc#cluster-logging[OpenShift Logging]
-|
+| xref:../observability/cluster_observability_operator/ui_plugins/logging-ui-plugin.adoc#logging-ui-plugin[Logging UI plugin]
 
+| xref:../observability/distr_tracing/distr-tracing-rn.adoc#distr-tracing-rn[Release notes for the {DTProductName}]
 | xref:../observability/distr_tracing/distr_tracing_arch/distr-tracing-architecture.adoc#distr-tracing-architecture[{jaegername}]
-|
 
 | xref:../observability/otel/otel-installing.adoc#install-otel[Red Hat build of OpenTelemetry]
-|
+| xref:../observability/otel/otel-config-multicluster.adoc#otel-config-multicluster[Gathering the observability data from multiple clusters]
 
 | xref:../observability/network_observability/network-observability-overview.adoc#network-observability-overview[About Network Observability]
 a|* xref:../observability/network_observability/metrics-alerts-dashboards.adoc#metrics-alerts-dashboards_metrics-alerts-dashboards[Using metrics with dashboards and alerts]
 
 a|* xref:../support/remote_health_monitoring/about-remote-health-monitoring.adoc...
 
 |Learn about {product-title} |Optional additional resources
 
-| xref:../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Manage storage]
-|
-
-| xref:../storage/understanding-ephemeral-storage.adoc#understanding-ephemeral-storage[Storage]
-|
+| xref:../storage/index.adoc#storage-types[Storage types]
+a| * xref:../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Persistent storage]
+* xref:../storage/understanding-ephemeral-storage.adoc#understanding-ephemeral-storage[Ephemeral storage]
 
 |===
 
 [id="application_site_reliability_engineer"]
 == Application Site Reliability Engineer (App SRE)
 
-[options="header",cols="3*"]
+[options="header",cols="2*"]
 |===
-|Learn about {product-title} |Deploy and manage applications |Optional additional resources
+|Learn about {product-title} |Optional additional resources
 
-|
+| xref:../applications/index.adoc#building-applications-overview[Building applications overview]
 | xref:../applications/projects/working-with-projects.adoc#working-with-projects[Projects]
-| xref:../support/getting-support.adoc#getting-support[Getting Support]
 
-| xref:../architecture/architecture.adoc#architecture[Architecture]
 | xref:../operators/understanding/olm-what-operators-are.adoc#olm-what-operators-are[Operators]
-| link:https://access.redhat.com/articles/4217411[OpenShift Knowledgebase articles]
+| xref:../operators/operator-reference.adoc#cluster-operator-reference[Cluster Operator reference]
 
-|
 | xref:../observability/logging/cluster-logging.adoc#cluster-logging[Logging]
-| link:https://access.redhat.com/support/policy/updates/openshift#ocp4_phases[{product-title} Life Cycle]
-
-|
 | link:https://www.openshift.com/blog/tag/logging[Blogs about logging]
-|
-
-|
-| xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[Monitoring]
-|
 
 |===
 
 a|* xref:../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations...
 * xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infrastructure-machinesets[Creating infrastructure machine sets]
 
 |===
 
-//
-//[id="hosted-control-plane-activities"]
-//== Hosted control plane activities
-
-//[options="header",cols="2*"]
-// |===
-// |Learn about {hcp-capital} |Optional additional resources
-
-// | xref:../hosted_control_planes/index.adoc#hosted_control_planes[Hosted control planes overview]
-// | xref:../hosted_control_planes/index.adoc#hosted-control-planes-overview_hcp-overview[Introduction to hosted control planes]
-
-// | xref:../hosted_control_planes/hcp-getting-started.adoc#hcp-getting-started [Getting started with hosted control planes]
-// |
-
-// |===
-//
\ No newline at end of file
+[id="self-managed-hcp"]
+== {hcp-capital}
+
+[options="header",cols="2*"]
+|===
+|Learn about {hcp} |Optional additional resources
+
+| xref:../hosted_control_planes/index.adoc#hosted-control-planes-overview[Hosted control planes overview]
+a|
+xref:../hosted_control_planes/index.adoc#hosted-control-planes-version-support_hcp-overview[Versioning for {hcp}]
+
+| Preparing to deploy
+a| * xref:../hosted_control_planes/hcp-prepare/hcp-requirements.adoc#hcp-requirements[Requirements for {hcp}]
+* xref:../hosted_control_planes/hcp-prepare/hcp-sizing-guidance.adoc#hcp-sizing-guidance[Sizing guidance for {hcp}]
+* xref:../hosted_control_planes/hcp-prepare/hcp-override-resource-util.adoc#hcp-override-resource-util[Overriding resource utilization measurements]
+* xref:../hosted_control_planes/hcp-prepare/hcp-cli.adoc#hcp-cli[Installing the {hcp} command-line interface]
+* xref:../hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc#hcp-distribute-workloads[Distributing hosted cluster workloads]
+* xref:../hosted_control_planes/hcp-prepare/hcp-enable-disable.adoc#hcp-enable-disable[Enabling or disabling the {hcp} feature]
+
+| Deploying {hcp}
+a| * xref:../hosted_control_planes/hcp-deploy/hcp-deploy-virt.adoc#hcp-deploy-virt[Deploying {hcp} on {VirtProductName}]
+* xref:../hosted_control_planes/hcp-deploy/hcp-deploy-aws.adoc#hcp-deploy-aws[Deploying {hcp} on {aws-short}]
+* xref:../hosted_control_planes/hcp-deploy/hcp-deploy-bm.adoc#hcp-deploy-bm[Deploying {hcp} on bare metal]
+* xref:../hosted_control_planes/hcp-deploy/hcp-deploy-non-bm.adoc#hcp-deploy-non-bm[Deploying {hcp} on non-bare-metal agent machines]
+* xref:../hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc#hcp-deploy-ibmz[Deploying {hcp} on {ibm-z-title}]
+* xref:../hosted_control_planes/hcp-deploy/hcp-deploy-ibm-power.adoc#hcp-deploy-ibm-power[Deploying {hcp} on {ibm-power-title}]
+
+| Deploying {hcp} in a disconnected environment
+a| * xref:../hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc#hcp-deploy-dc-bm[Deploying {hcp} on bare metal in a disconnected environment]
+* xref:../hosted_control_planes/hcp-disconnected/hcp-deploy-dc-virt.adoc#hcp-deploy-dc-virt[Deploying {hcp} on {VirtProductName} in a disconnected environment]
+
+| xref:../hosted_control_planes/hcp-troubleshooting.adoc#hcp-troubleshooting[Troubleshooting {hcp}]
+a| xref:../hosted_control_planes/hcp-troubleshooting.adoc#hosted-control-planes-troubleshooting_hcp-troubleshooting[Gathering information to troubleshoot {hcp}]
+
+|===
\ No newline at end of file

From 28c9f6ac8c9a30b2ef91783760bf5799c0e7 Mon Sep 17 00:00:00 2001
From: Brendan Daly
Date: Mon, 17 Feb 2025 16:22:14 +0000
Subject: [PATCH 280/669] OSDOCS-1311_2:updating AWS AMIs

---
 ...installation-aws-user-infra-rhcos-ami.adoc | 124 +++++++++---------
 1 file changed, 62 insertions(+), 62 deletions(-)

diff --git a/modules/installation-aws-user-infra-rhcos-ami.adoc b/modules/installation-aws-user-infra-rhcos-ami.adoc
index b7a41e27e96b..e4cb1fbbfa5a 100644
--- a/modules/installation-aws-user-infra-rhcos-ami.adoc
+++ b/modules/installation-aws-user-infra-rhcos-ami.adoc
@@ -23,97 +23,97 @@ ifndef::openshift-origin[]
 |AWS AMI
 
 |`af-south-1`
-|`ami-019b3e090bb062842`
+|`ami-01bf6b6fca71a7dc3`
 
 |`ap-east-1`
-|`ami-0cb76d97f77cda0a1`
+|`ami-0594c08334dcc4afb`
 
 |`ap-northeast-1`
-|`ami-0d7d4b329e5403cfb`
+|`ami-0313928874609075d`
 
 |`ap-northeast-2`
-|`ami-02d3789d532feb517`
+|`ami-09cfc5a33f840ce70`
 
 |`ap-northeast-3`
-|`ami-08b82c4899109b707`
+|`ami-02fece2c48e16e9f2`
 
 |`ap-south-1`
-|`ami-0c184f8b5ad8af69d`
+|`ami-063d0eaf658eb4dc5`
 
 |`ap-south-2`
-|`ami-0b0525037b9a20e9a`
+|`ami-0c4930cae17448786`
 
 |`ap-southeast-1`
-|`ami-0dbee0006375139a7`
+|`ami-068f696694b2fc0f1`
 
 |`ap-southeast-2`
-|`ami-043072b1af91be72f`
+|`ami-04aee88a86e139991`
 
 |`ap-southeast-3`
-|`ami-09d8bbf16b228139e`
+|`ami-0363d9df44ce25cd3`
 
 |`ap-southeast-4`
-|`ami-01c6b29e9c57b434b`
+|`ami-05b72aa8744449f86`
 
 |`ca-central-1`
-|`ami-06fda1fa0b65b864b`
+|`ami-0a7c95e80fb37ade8`
 
 |`ca-west-1`
-|`ami-0141eea486b5e2c43`
+|`ami-0818def2f3d7a696d`
 
 |`eu-central-1`
-|`ami-0f407de515454fdd0`
+|`ami-02c8714aef084ee90`
 
 |`eu-central-2`
-|`ami-062cfad83bc7b71b8`
+|`ami-083d349477a4e9f69`
 
 |`eu-north-1`
-|`ami-0af77aba6aebb5086`
+|`ami-03f4002a3746bc66b`
 
 |`eu-south-1`
-|`ami-04d9da83bc9f854fc`
+|`ami-038d816008adca0be`
 
 |`eu-south-2`
-|`ami-035d487abf54f0af7`
+|`ami-099f491d6ab9706d0`
 
 |`eu-west-1`
-|`ami-043dd3b788dbaeb1c`
+|`ami-0f0ebf16ff38e816f`
 
 |`eu-west-2`
-|`ami-0c7d0f90a4401b723`
+|`ami-0abb7730ffd4d9944`
 
 |`eu-west-3`
-|`ami-039baa878e1def55f`
+|`ami-032c22188cbfff12c`
 
 |`il-central-1`
-|`ami-07d305bf03b2148de`
+|`ami-08171fe42c6af2676`
 
 |`me-central-1`
-|`ami-0fc457e8897ccb41a`
+|`ami-0f1c6a3d726f5b7b5`
 
 |`me-south-1`
-|`ami-0af99a751cf682b90`
+|`ami-019faf03d74520d13`
 
 |`sa-east-1`
-|`ami-04a7300f64ee01d68`
+|`ami-01591af00107320c3`
 
 |`us-east-1`
-|`ami-01b53f2824bf6d426`
+|`ami-08f1807771f4e468b`
 
 |`us-east-2`
-|`ami-0565349610e27bd41`
+|`ami-078e26f293629fe91`
 
 |`us-gov-east-1`
-|`ami-0020504fa043fe41d`
+|`ami-068e56023ec09c2b1`
 
 |`us-gov-west-1`
-|`ami-036798bce4722d3c2`
+|`ami-09ba2da65d9d836cf`
 
 |`us-west-1`
-|`ami-0147c634ad692da52`
+|`ami-01d1d2ed3d63466da`
 
 |`us-west-2`
-|`ami-0c65d71e89d43aa90`
+|`ami-0d769ba340e913a8c`
 
 |===
@@ -126,97 +126,97 @@ ifndef::openshift-origin[]
 |AWS AMI
 
 |`af-south-1`
-|`ami-0e585ef53405bebf5`
+|`ami-02d76a4f0c0ee24cd`
 
 |`ap-east-1`
-|`ami-05f32f1715bb51bda`
+|`ami-07e78c2c0f5f81a49`
 
 |`ap-northeast-1`
-|`ami-05ecb62bab0c50e52`
+|`ami-0e3a6e27f6940ab63`
 
 |`ap-northeast-2`
-|`ami-0a3ffb2c07c9e4a8d`
+|`ami-0116db61662393b23`
 
 |`ap-northeast-3`
-|`ami-0ae6746ea17d1042c`
+|`ami-07dd3d8930d1c27eb`
 
 |`ap-south-1`
-|`ami-00deb5b08c86060b8`
+|`ami-07121d273482babf9`
 
 |`ap-south-2`
-|`ami-047a47d5049781e03`
+|`ami-084f561e41c26ab95`
 
 |`ap-southeast-1`
-|`ami-09cb598f0d36fde4c`
+|`ami-02301ea2b50fc247f`
 
 |`ap-southeast-2`
-|`ami-01fe8a7538500f24c`
+|`ami-0690a605a9bb33d00`
 
 |`ap-southeast-3`
-|`ami-051b3f67dd787d5e9`
+|`ami-08d243c0580c87c80`
 
 |`ap-southeast-4`
-|`ami-04d2e14a9eef40143`
+|`ami-013dad9ce63ec3dc0`
 
 |`ca-central-1`
-|`ami-0f66973ff12d09356`
+|`ami-0238dbad4895283b7`
 
 |`ca-west-1`
-|`ami-0c9f3e2f0470d6d0b`
+|`ami-0faded0cfdf14248a`
 
 |`eu-central-1`
-|`ami-0a79af8849b425a8a`
+|`ami-085a88c5d03df3675`
 
 |`eu-central-2`
-|`ami-0f9f66951c9709471`
+|`ami-08d6da8ffaa81e2b4`
 
 |`eu-north-1`
-|`ami-0670362aa7eb9032d`
+|`ami-0077ccc8e7962b7ee`
 
 |`eu-south-1`
-|`ami-031b24b970eae750b`
+|`ami-02c649a544c9395f2`
 
 |`eu-south-2`
-|`ami-0734d2ed55c00a46c`
+|`ami-0a955bda5a7189ebd`
 
 |`eu-west-1`
-|`ami-0a9af75c2649471c0`
+|`ami-040969e306c9a3efa`
 
 |`eu-west-2`
-|`ami-0b84155a3672ac44e`
+|`ami-06b30fcc40988cc96`
 
 |`eu-west-3`
-|`ami-02b51442c612818d4`
+|`ami-00dc4e0a7798ae0c5`
 
 |`il-central-1`
-|`ami-0d2c47a297d483ce4`
+|`ami-0c1adf273a43b58e2`
 
 |`me-central-1`
-|`ami-0ef3005246bd83b07`
+|`ami-00817b16f81e58b86`
 
 |`me-south-1`
-|`ami-0321ca1ee89015eda`
+|`ami-0f72a9bb1975ba0f9`
 
 |`sa-east-1`
-|`ami-0e63f1103dc71d8ae`
+|`ami-083cf54e8ffc2d716`
 
 |`us-east-1`
-|`ami-0404da96615c73bec`
+|`ami-0eebf083d985a0bcf`
 
 |`us-east-2`
-|`ami-04c3bd7be936f728f`
+|`ami-0b04071739ccf4af2`
 
 |`us-gov-east-1`
-|`ami-0d30bc0b99b153247`
+|`ami-092fec5203140ddd8`
 
 |`us-gov-west-1`
-|`ami-0ee006f84d6aa5045`
+|`ami-078ee5edd87052e70`
 
 |`us-west-1`
-|`ami-061bfd61d5cfd7aa6`
+|`ami-0344d1d886514e258`
 
 |`us-west-2`
-|`ami-05ffb8f6f18b8e3f8`
+|`ami-07ef3531e7692a7ae`
 
 |===
 endif::openshift-origin[]

From e65f6337c998f8e48452d8248d0cd5a0eac58c2a Mon Sep 17 00:00:00 2001
From: "A.Arnold"
Date: Fri, 7 Feb 2025 17:08:57 +0000
Subject: [PATCH 281/669] OADP-4884: Attributes for OADP 1.4.3

Signed-off-by: A.Arnold
---
 _attributes/common-attributes.adoc            |  8 ++---
 .../oadp-features-plugins.adoc                |  4 +--
 modules/oadp-ibm-power-test-support.adoc      |  4 +--
 modules/oadp-ibm-z-test-support.adoc          |  4 +--
 modules/velero-oadp-version-relationship.adoc | 30 +++++++++----------
 5 files changed, 21 insertions(+), 29 deletions(-)

diff --git a/_attributes/common-attributes.adoc b/_attributes/common-attributes.adoc
index 31d04438e57d..116bfd051645 100644
--- a/_attributes/common-attributes.adoc
+++ b/_attributes/common-attributes.adoc
@@ -44,9 +44,9 @@ endif::[]
 :oadp-first: OpenShift API for Data Protection
(OADP) :oadp-full: OpenShift API for Data Protection :oadp-short: OADP -:oadp-version: 1.4.2 -:oadp-version-1-3: 1.3.3 -:oadp-version-1-4: 1.4.2 +:oadp-version: 1.4.3 +:oadp-version-1-3: 1.3.5 +:oadp-version-1-4: 1.4.3 :oadp-bsl-api: backupstoragelocations.velero.io :oc-first: pass:quotes[OpenShift CLI (`oc`)] :product-registry: OpenShift image registry @@ -367,4 +367,4 @@ endif::openshift-origin[] :hcp-capital: Hosted control planes :hcp: hosted control planes :mce: multicluster engine for Kubernetes Operator -:mce-short: multicluster engine Operator \ No newline at end of file +:mce-short: multicluster engine Operator diff --git a/backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc b/backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc index 79cd47ad66f9..bfd7b1c4cb7a 100644 --- a/backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc +++ b/backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc @@ -24,10 +24,8 @@ ifndef::openshift-rosa,openshift-rosa-hcp[] OpenShift API for Data Protection (OADP) is platform neutral. The information that follows relates only to {ibm-power-name} and to {ibm-z-name}. -* {oadp-short} 1.1.7 was tested successfully against {product-title} 4.11 for both {ibm-power-name} and {ibm-z-name}. The sections that follow give testing and support information for {oadp-short} 1.1.7 in terms of backup locations for these systems. -* {oadp-short} 1.2.3 was tested successfully against {product-title} 4.12, 4.13, 4.14, and 4.15 for both {ibm-power-name} and {ibm-z-name}. The sections that follow give testing and support information for {oadp-short} 1.2.3 in terms of backup locations for these systems. * {oadp-short} {oadp-version-1-3} was tested successfully against {product-title} 4.12, 4.13, 4.14, and 4.15 for both {ibm-power-name} and {ibm-z-name}. The sections that follow give testing and support information for {oadp-short} {oadp-version-1-3} in terms of backup locations for these systems. -* {oadp-short} {oadp-version-1-4} was tested successfully against {product-title} 4.14, 4.15, and 4.16 for both {ibm-power-name} and {ibm-z-name}. The sections that follow give testing and support information for {oadp-short} {oadp-version-1-4} in terms of backup locations for these systems. +* {oadp-short} {oadp-version-1-4} was tested successfully against {product-title} 4.14, 4.15, 4.16, and 4.17 for both {ibm-power-name} and {ibm-z-name}. The sections that follow give testing and support information for {oadp-short} {oadp-version-1-4} in terms of backup locations for these systems. include::modules/oadp-ibm-power-test-support.adoc[leveloffset=+2] diff --git a/modules/oadp-ibm-power-test-support.adoc b/modules/oadp-ibm-power-test-support.adoc index f7197f8bfa55..319b26f08423 100644 --- a/modules/oadp-ibm-power-test-support.adoc +++ b/modules/oadp-ibm-power-test-support.adoc @@ -6,7 +6,5 @@ [id="oadp-ibm-power-test-matrix_{context}"] = OADP support for target backup locations using {ibm-power-title} -* {ibm-power-name} running with {product-title} 4.11 and 4.12, and {oadp-first} 1.1.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running {ibm-power-name} with {product-title} 4.11 and 4.12, and {oadp-short} 1.1.7 against all S3 backup location targets, which are not AWS, as well. 
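For illustration of the S3-compatible backup location targets discussed here, the following `DataProtectionApplication` excerpt sketches how a non-AWS, S3-compatible target can be declared. The bucket name, endpoint URL, and credential secret are hypothetical placeholders, not values taken from this patch:

[source,yaml]
----
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: aws # the aws plugin also serves S3-compatible object storage
        default: true
        objectStorage:
          bucket: example-bucket # hypothetical bucket name
          prefix: velero
        config:
          region: local
          s3ForcePathStyle: "true"
          s3Url: https://s3.example.com # hypothetical S3-compatible endpoint
        credential:
          name: cloud-credentials
          key: cloud
----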
-* {ibm-power-name} running with {product-title} 4.12, 4.13, 4.14, and 4.15, and {oadp-short} 1.2.3 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running {ibm-power-name} with {product-title} 4.12, 4.13. 4.14, and 4.15, and {oadp-short} 1.2.3 against all S3 backup location targets, which are not AWS, as well. * {ibm-power-name} running with {product-title} 4.12, 4.13, 4.14, and 4.15, and {oadp-short} {oadp-version-1-3} was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running {ibm-power-name} with {product-title} 4.13, 4.14, and 4.15, and {oadp-short} {oadp-version-1-3} against all S3 backup location targets, which are not AWS, as well. -* {ibm-power-name} running with {product-title} 4.14, 4.15, and 4.16, and {oadp-short} {oadp-version-1-4} was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running {ibm-power-name} with {product-title} 4.14, 4.15, and 4.16, and {oadp-short} {oadp-version-1-4} against all S3 backup location targets, which are not AWS, as well. +* {ibm-power-name} running with {product-title} 4.14, 4.15, 4.16, and 4.17, and {oadp-short} {oadp-version-1-4} was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running {ibm-power-name} with {product-title} 4.14, 4.15, 4.16, and 4.17, and {oadp-short} {oadp-version-1-4} against all S3 backup location targets, which are not AWS, as well. diff --git a/modules/oadp-ibm-z-test-support.adoc b/modules/oadp-ibm-z-test-support.adoc index 1ab726c3926c..9fa240bacae4 100644 --- a/modules/oadp-ibm-z-test-support.adoc +++ b/modules/oadp-ibm-z-test-support.adoc @@ -6,7 +6,5 @@ [id="oadp-ibm-z-test-support_{context}"] = OADP testing and support for target backup locations using {ibm-z-title} -* {ibm-z-name} running with {product-title} 4.11 and 4.12, and {oadp-first} 1.1.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running {ibm-z-name} with {product-title} 4.11 and 4.12, and {oadp-short} 1.1.7 against all S3 backup location targets, which are not AWS, as well. -* {ibm-z-name} running with {product-title} 4.12, 4.13, 4.14, and 4.15, and {oadp-short} 1.2.3 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running {ibm-z-name} with {product-title} 4.12, 4.13, 4.14 and 4.15, and {oadp-short} 1.2.3 against all S3 backup location targets, which are not AWS, as well. * {ibm-z-name} running with {product-title} 4.12, 4.13, 4.14, and 4.15, and {oadp-version-1-3} was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running {ibm-z-name} with {product-title} 4.13 4.14, and 4.15, and {oadp-version-1-3} against all S3 backup location targets, which are not AWS, as well. -* {ibm-z-name} running with {product-title} 4.14, 4.15, and 4.16, and {oadp-version-1-4} was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running {ibm-z-name} with {product-title} 4.14, 4.15, and 4.16, and {oadp-version-1-4} against all S3 backup location targets, which are not AWS, as well. 
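As a quick check before consulting the version relationship table that follows, you can list the installed ClusterServiceVersion to confirm which OADP version is running; `openshift-adp` is the default installation namespace and might differ in your cluster:

[source,terminal]
----
$ oc get csv -n openshift-adp
----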
+* {ibm-z-name} running with {product-title} 4.14, 4.15, 4.16, and 4.17, and {oadp-version-1-4} was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running {ibm-z-name} with {product-title} 4.14, 4.15, 4.16, and 4.17, and {oadp-version-1-4} against all S3 backup location targets, which are not AWS, as well. diff --git a/modules/velero-oadp-version-relationship.adoc b/modules/velero-oadp-version-relationship.adoc index f4d65ceffac2..46c1cd0e3de0 100644 --- a/modules/velero-oadp-version-relationship.adoc +++ b/modules/velero-oadp-version-relationship.adoc @@ -1,3 +1,9 @@ +// Module included in the following assemblies: +// +// backup_and_restore/application_backup_and_restore/installing/oadp-installing-operator.adoc +// backup_and_restore/application_backup_and_restore/troubleshooting.adoc +// + :_mod-docs-content-type: CONCEPT [id="velero-oadp-version-relationship_{context}"] = OADP-Velero-{product-title} version relationship @@ -5,22 +11,14 @@ [cols="3", options="header"] |=== |OADP version |Velero version |{product-title} version -| 1.1.0 | link:https://{velero-domain}/docs/v1.9/[1.9] | 4.9 and later -| 1.1.1 | link:https://{velero-domain}/docs/v1.9/[1.9] | 4.9 and later -| 1.1.2 | link:https://{velero-domain}/docs/v1.9/[1.9] | 4.9 and later -| 1.1.3 | link:https://{velero-domain}/docs/v1.9/[1.9] | 4.9 and later -| 1.1.4 | link:https://{velero-domain}/docs/v1.9/[1.9] | 4.9 and later -| 1.1.5 | link:https://{velero-domain}/docs/v1.9/[1.9] | 4.9 and later -| 1.1.6 | link:https://{velero-domain}/docs/v1.9/[1.9] | 4.11 and later -| 1.1.7 | link:https://{velero-domain}/docs/v1.9/[1.9] | 4.11 and later -| 1.2.0 | link:https://{velero-domain}/docs/v1.11/[1.11] | 4.11 and later -| 1.2.1 | link:https://{velero-domain}/docs/v1.11/[1.11] | 4.11 and later -| 1.2.2 | link:https://{velero-domain}/docs/v1.11/[1.11] | 4.11 and later -| 1.2.3 | link:https://{velero-domain}/docs/v1.11/[1.11] | 4.11 and later -| 1.3.0 | link:https://{velero-domain}/docs/v1.12/[1.12] | 4.10-4.15 -| 1.3.1 | link:https://{velero-domain}/docs/v1.12/[1.12] | 4.10-4.15 -| 1.3.2 | link:https://{velero-domain}/docs/v1.12/[1.12] | 4.10-4.15 +| 1.3.0 | link:https://{velero-domain}/docs/v1.12/[1.12] | 4.12-4.15 +| 1.3.1 | link:https://{velero-domain}/docs/v1.12/[1.12] | 4.12-4.15 +| 1.3.2 | link:https://{velero-domain}/docs/v1.12/[1.12] | 4.12-4.15 +| 1.3.3 | link:https://{velero-domain}/docs/v1.12/[1.12] | 4.12-4.15 +| 1.3.4 | link:https://{velero-domain}/docs/v1.12/[1.12] | 4.12-4.15 +| 1.3.5 | link:https://{velero-domain}/docs/v1.12/[1.12] | 4.12-4.15 | 1.4.0 | link:https://{velero-domain}/docs/v1.14/[1.14] | 4.14-4.18 | 1.4.1 | link:https://{velero-domain}/docs/v1.14/[1.14] | 4.14-4.18 | 1.4.2 | link:https://{velero-domain}/docs/v1.14/[1.14] | 4.14-4.18 -|=== \ No newline at end of file +| 1.4.3 | link:https://{velero-domain}/docs/v1.14/[1.14] | 4.14-4.18 +|=== From 41bcd7662f821c55a2222c7bdfce358348f4715d Mon Sep 17 00:00:00 2001 From: sbeskin Date: Wed, 19 Feb 2025 10:13:04 +0200 Subject: [PATCH 282/669] CNV-45436 --- modules/virt-accessing-rdp-console.adoc | 2 +- modules/virt-attaching-vm-to-ovn-secondary-nw-cli.adoc | 2 +- modules/virt-configuring-downward-metrics.adoc | 2 +- modules/virt-configuring-runstrategy-vm.adoc | 5 ----- modules/virt-configuring-vm-real-time.adoc | 2 +- modules/virt-connecting-vm-secondarynw-using-fqdn.adoc | 2 +- ...-new-vm-from-cloned-pvc-using-datavolumetemplate.adoc | 2 +- modules/virt-creating-service-cli.adoc | 2 +- 
modules/virt-creating-vm-cli.adoc | 2 +- ...virt-creating-vm-cloned-pvc-data-volume-template.adoc | 2 +- modules/virt-defining-watchdog-device-vm.adoc | 2 +- modules/virt-runstrategies-vms.adoc | 4 ++-- modules/virt-setting-resource-quota-limits-for-vms.adoc | 2 +- modules/virt-vm-custom-scheduler.adoc | 2 +- snippets/virt-dynamic-key.yaml | 2 +- snippets/virt-static-key.yaml | 2 +- virt/monitoring/virt-monitoring-vm-health.adoc | 2 +- virt/nodes/virt-node-maintenance.adoc | 9 +-------- 18 files changed, 18 insertions(+), 30 deletions(-) diff --git a/modules/virt-accessing-rdp-console.adoc b/modules/virt-accessing-rdp-console.adoc index 4e0aa7436bba..ec5c2feb29b9 100644 --- a/modules/virt-accessing-rdp-console.adoc +++ b/modules/virt-accessing-rdp-console.adoc @@ -25,7 +25,7 @@ metadata: name: vm-ephemeral namespace: example-namespace spec: - running: false + runStrategy: Halted template: metadata: labels: diff --git a/modules/virt-attaching-vm-to-ovn-secondary-nw-cli.adoc b/modules/virt-attaching-vm-to-ovn-secondary-nw-cli.adoc index 987774ce0661..10e5b2180330 100644 --- a/modules/virt-attaching-vm-to-ovn-secondary-nw-cli.adoc +++ b/modules/virt-attaching-vm-to-ovn-secondary-nw-cli.adoc @@ -22,7 +22,7 @@ kind: VirtualMachine metadata: name: vm-server spec: - running: true + runStrategy: Always template: spec: domain: diff --git a/modules/virt-configuring-downward-metrics.adoc b/modules/virt-configuring-downward-metrics.adoc index 797ba6eebb8a..f988d08a3964 100644 --- a/modules/virt-configuring-downward-metrics.adoc +++ b/modules/virt-configuring-downward-metrics.adoc @@ -40,7 +40,7 @@ spec: name: u1.medium preference: name: fedora - running: true + runStrategy: Always template: metadata: labels: diff --git a/modules/virt-configuring-runstrategy-vm.adoc b/modules/virt-configuring-runstrategy-vm.adoc index e56d84c4703d..c585c95950c1 100644 --- a/modules/virt-configuring-runstrategy-vm.adoc +++ b/modules/virt-configuring-runstrategy-vm.adoc @@ -8,11 +8,6 @@ You can configure a run strategy for a virtual machine (VM) by using the command line. -[IMPORTANT] -==== -The `spec.runStrategy` and `spec.running` keys are mutually exclusive. A VM configuration that contains values for both keys is invalid. 
-==== - .Procedure * Edit the `VirtualMachine` resource by running the following command: diff --git a/modules/virt-configuring-vm-real-time.adoc b/modules/virt-configuring-vm-real-time.adoc index 3dd0f52f30d7..a3483d04413f 100644 --- a/modules/virt-configuring-vm-real-time.adoc +++ b/modules/virt-configuring-vm-real-time.adoc @@ -23,7 +23,7 @@ kind: VirtualMachine metadata: name: realtime-vm spec: - running: true + runStrategy: Always template: metadata: annotations: diff --git a/modules/virt-connecting-vm-secondarynw-using-fqdn.adoc b/modules/virt-connecting-vm-secondarynw-using-fqdn.adoc index 55ab60138cb5..be2b6ff6e8b4 100644 --- a/modules/virt-connecting-vm-secondarynw-using-fqdn.adoc +++ b/modules/virt-connecting-vm-secondarynw-using-fqdn.adoc @@ -40,7 +40,7 @@ metadata: name: example-vm namespace: example-namespace spec: - running: true + runStrategy: Always template: spec: domain: diff --git a/modules/virt-creating-new-vm-from-cloned-pvc-using-datavolumetemplate.adoc b/modules/virt-creating-new-vm-from-cloned-pvc-using-datavolumetemplate.adoc index b1d2cad5a1d7..0de6292020c1 100644 --- a/modules/virt-creating-new-vm-from-cloned-pvc-using-datavolumetemplate.adoc +++ b/modules/virt-creating-new-vm-from-cloned-pvc-using-datavolumetemplate.adoc @@ -47,7 +47,7 @@ metadata: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone <1> spec: - running: false + runStrategy: Halted template: metadata: labels: diff --git a/modules/virt-creating-service-cli.adoc b/modules/virt-creating-service-cli.adoc index 126fc7b17ef3..847714d437c2 100644 --- a/modules/virt-creating-service-cli.adoc +++ b/modules/virt-creating-service-cli.adoc @@ -25,7 +25,7 @@ metadata: name: example-vm namespace: example-namespace spec: - running: false + runStrategy: Halted template: metadata: labels: diff --git a/modules/virt-creating-vm-cli.adoc b/modules/virt-creating-vm-cli.adoc index c69f8b212d4b..de67ef21bd30 100644 --- a/modules/virt-creating-vm-cli.adoc +++ b/modules/virt-creating-vm-cli.adoc @@ -38,7 +38,7 @@ This example manifest does not configure VM authentication. name: u1.medium <3> preference: name: rhel.9 <4> - running: true + runStrategy: Always template: spec: domain: diff --git a/modules/virt-creating-vm-cloned-pvc-data-volume-template.adoc b/modules/virt-creating-vm-cloned-pvc-data-volume-template.adoc index cb47006fee68..a7f323abdd28 100644 --- a/modules/virt-creating-vm-cloned-pvc-data-volume-template.adoc +++ b/modules/virt-creating-vm-cloned-pvc-data-volume-template.adoc @@ -27,7 +27,7 @@ metadata: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone <1> spec: - running: false + runStrategy: Halted template: metadata: labels: diff --git a/modules/virt-defining-watchdog-device-vm.adoc b/modules/virt-defining-watchdog-device-vm.adoc index f2959f889ea7..cb6854361acb 100644 --- a/modules/virt-defining-watchdog-device-vm.adoc +++ b/modules/virt-defining-watchdog-device-vm.adoc @@ -25,7 +25,7 @@ metadata: kubevirt.io/vm: vm2-rhel84-watchdog name: spec: - running: false + runStrategy: Halted template: metadata: labels: diff --git a/modules/virt-runstrategies-vms.adoc b/modules/virt-runstrategies-vms.adoc index 39eb8483ca2f..6a68799b55e3 100644 --- a/modules/virt-runstrategies-vms.adoc +++ b/modules/virt-runstrategies-vms.adoc @@ -9,7 +9,7 @@ The `spec.runStrategy` key has four possible values: `Always`:: -The virtual machine instance (VMI) is always present when a virtual machine (VM) is created on another node. A new VMI is created if the original stops for any reason. This is the same behavior as `running: true`. 
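For illustration, a minimal `VirtualMachine` excerpt that sets one of these run strategies explicitly; the VM name is hypothetical:

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  runStrategy: RerunOnFailure # re-create the VMI only after a failure, not after a clean shutdown
  template:
    # ...
----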
+The virtual machine instance (VMI) is always present when a virtual machine (VM) is created on another node. A new VMI is created if the original stops for any reason. `RerunOnFailure`:: The VMI is re-created on another node if the previous instance fails. The instance is not re-created if the VM stops successfully, such as when it is shut down. @@ -18,7 +18,7 @@ The VMI is re-created on another node if the previous instance fails. The instan You control the VMI state manually with the `start`, `stop`, and `restart` virtctl client commands. The VM is not automatically restarted. `Halted`:: -No VMI is present when a VM is created. This is the same behavior as `running: false`. +No VMI is present when a VM is created. Different combinations of the `virtctl start`, `stop` and `restart` commands affect the run strategy. diff --git a/modules/virt-setting-resource-quota-limits-for-vms.adoc b/modules/virt-setting-resource-quota-limits-for-vms.adoc index 4f802b689735..7123c09479da 100644 --- a/modules/virt-setting-resource-quota-limits-for-vms.adoc +++ b/modules/virt-setting-resource-quota-limits-for-vms.adoc @@ -19,7 +19,7 @@ kind: VirtualMachine metadata: name: with-limits spec: - running: false + runStrategy: Halted template: spec: domain: diff --git a/modules/virt-vm-custom-scheduler.adoc b/modules/virt-vm-custom-scheduler.adoc index 7d973f8713f0..83de4f57a503 100644 --- a/modules/virt-vm-custom-scheduler.adoc +++ b/modules/virt-vm-custom-scheduler.adoc @@ -22,7 +22,7 @@ kind: VirtualMachine metadata: name: vm-fedora spec: - running: true + runStrategy: Always template: spec: schedulerName: my-scheduler <1> diff --git a/snippets/virt-dynamic-key.yaml b/snippets/virt-dynamic-key.yaml index cd5b8eae9eb2..2caf565d526f 100644 --- a/snippets/virt-dynamic-key.yaml +++ b/snippets/virt-dynamic-key.yaml @@ -18,7 +18,7 @@ spec: name: u1.medium preference: name: rhel.9 - running: true + runStrategy: Always template: spec: domain: diff --git a/snippets/virt-static-key.yaml b/snippets/virt-static-key.yaml index 14d2bf1e7a59..b80495a807a9 100644 --- a/snippets/virt-static-key.yaml +++ b/snippets/virt-static-key.yaml @@ -18,7 +18,7 @@ spec: name: u1.medium preference: name: rhel.9 - running: true + runStrategy: Always template: spec: domain: diff --git a/virt/monitoring/virt-monitoring-vm-health.adoc b/virt/monitoring/virt-monitoring-vm-health.adoc index ab7a199d6409..1134e6a6476c 100644 --- a/virt/monitoring/virt-monitoring-vm-health.adoc +++ b/virt/monitoring/virt-monitoring-vm-health.adoc @@ -26,7 +26,7 @@ You can define a watchdog to monitor the health of the guest operating system by The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive: -* `poweroff`: The VM powers down immediately. If `spec.running` is set to `true` or `spec.runStrategy` is not set to `manual`, then the VM reboots. +* `poweroff`: The VM powers down immediately. If `spec.runStrategy` is not set to `manual`, the VM reboots. * `reset`: The VM reboots in place and the guest operating system cannot react. + [NOTE] diff --git a/virt/nodes/virt-node-maintenance.adoc b/virt/nodes/virt-node-maintenance.adoc index 04a2acfd9489..9bdce4d490f1 100644 --- a/virt/nodes/virt-node-maintenance.adoc +++ b/virt/nodes/virt-node-maintenance.adoc @@ -82,14 +82,7 @@ endif::openshift-rosa,openshift-dedicated[] [id="run-strategies"] == Run strategies -A virtual machine (VM) configured with `spec.running: true` is immediately restarted. 
The `spec.runStrategy` key provides greater flexibility for determining how a VM behaves under certain conditions. - -[IMPORTANT] -==== -The `spec.runStrategy` and `spec.running` keys are mutually exclusive. Only one of them can be used. - -A VM configuration with both keys is invalid. -==== +The `spec.runStrategy` key determines how a VM behaves under certain conditions. include::modules/virt-runstrategies-vms.adoc[leveloffset=+2] From a652414027b991d657b7f312089bf14a162662ee Mon Sep 17 00:00:00 2001 From: Michael Burke Date: Mon, 20 Jan 2025 09:43:02 -0500 Subject: [PATCH 283/669] WMCO EUS to EUS upgrade --- _topic_maps/_topic_map.yml | 2 +- modules/windows-upgrades-eus.adoc | 14 ++++++++ modules/wmco-upgrades-eus-using-cli.adoc | 33 ++++++++++++++++++ .../wmco-upgrades-eus-using-web-console.adoc | 34 +++++++++++++++++++ modules/wmco-upgrades.adoc | 8 ++--- windows_containers/windows-node-upgrades.adoc | 30 ++++++++++++++-- 6 files changed, 113 insertions(+), 8 deletions(-) create mode 100644 modules/windows-upgrades-eus.adoc create mode 100644 modules/wmco-upgrades-eus-using-cli.adoc create mode 100644 modules/wmco-upgrades-eus-using-web-console.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index e19d4e90daf6..0364ba392413 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2864,7 +2864,7 @@ Topics: File: creating-windows-machineset-vsphere - Name: Scheduling Windows container workloads File: scheduling-windows-workloads -- Name: Windows node upgrades +- Name: Windows node updates File: windows-node-upgrades - Name: Using Bring-Your-Own-Host Windows instances as nodes File: byoh-windows-instance diff --git a/modules/windows-upgrades-eus.adoc b/modules/windows-upgrades-eus.adoc new file mode 100644 index 000000000000..3231617a7001 --- /dev/null +++ b/modules/windows-upgrades-eus.adoc @@ -0,0 +1,14 @@ +// Module included in the following assemblies: +// +// * windows_containers/windows-node-upgrades.adoc + +:_mod-docs-content-type: CONCEPT +[id="wmco-upgrades-eus_{context}"] += Windows Machine Config Operator Control Plane Only update + +{product-title} and Windows Machine Config Operator (WMCO) support updating from one EUS version to another EUS version of {product-title}, in a process called a *Control Plane Only* update. After upgrading the cluster, the Windows nodes are updated from the starting EUS version to the new EUS version while keeping the Windows workloads in a healthy state with no disruptions. + +[IMPORTANT] +==== +This update was previously known as an *EUS-to-EUS* update and is now referred to as a *Control Plane Only* update. These updates are only viable between *even-numbered minor versions* of {product-title}. +==== diff --git a/modules/wmco-upgrades-eus-using-cli.adoc b/modules/wmco-upgrades-eus-using-cli.adoc new file mode 100644 index 000000000000..81a92c4f54c1 --- /dev/null +++ b/modules/wmco-upgrades-eus-using-cli.adoc @@ -0,0 +1,33 @@ +// Module included in the following assemblies: +// +// * windows_containers/windows-node-upgrades.adoc + +:_mod-docs-content-type: PROCEDURE +[id="wmco-upgrades-eus-using-cli_{context}"] += WMCO Control Plane Only update by using the CLI + +You can use the {oc-first} to perform a Control Plane Only update of the Windows Machine Config Operator (WMCO). + +.Prerequisites +* The cluster must be running on a supported EUS version of {product-title}. +* All Windows nodes must be in a healthy state. +* All Windows nodes must be running on the same version of the WMCO. 
+* All of the prerequisites of the Control Plane Only update are met, as described in "Performing a Control Plane Only update."
+
+.Procedure
+
+. Uninstall the WMCO Operator from the cluster by following the steps in "Deleting Operators from a cluster using the CLI."
++
+[IMPORTANT]
+====
+Delete the Operator only. Do not delete the Windows namespace or any Windows workloads.
+====
+
+. Update {product-title} by following the steps in "Performing a Control Plane Only update."
+
+. Install the new WMCO version by following the steps in "Installing the Windows Machine Config Operator using the CLI."
+
+.Verification
+
+* Verify that the *Status* shows *Succeeded* to confirm successful installation of the WMCO.
+
diff --git a/modules/wmco-upgrades-eus-using-web-console.adoc b/modules/wmco-upgrades-eus-using-web-console.adoc
new file mode 100644
index 000000000000..a3283daa8815
--- /dev/null
+++ b/modules/wmco-upgrades-eus-using-web-console.adoc
@@ -0,0 +1,34 @@
+// Module included in the following assemblies:
+//
+// * windows_containers/windows-node-upgrades.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="wmco-upgrades-eus-using-web-console_{context}"]
+= WMCO Control Plane Only update by using the web console
+
+You can use the {product-title} web console to perform a Control Plane Only update of the Windows Machine Config Operator (WMCO).
+
+.Prerequisites
+* The cluster must be running on a supported EUS version of {product-title}.
+* All Windows nodes must be in a healthy state.
+* All Windows nodes must be running on the same version of the WMCO.
+* All of the prerequisites of the Control Plane Only update are met, as described in "Performing a Control Plane Only update."
+
+.Procedure
+
+. Uninstall the WMCO Operator by using the following steps:
++
+[IMPORTANT]
+====
+Delete the Operator only. Do not delete the Windows namespace or any Windows workloads.
+====
++
+.. Log in to the {product-title} web console.
+.. Navigate to *Operators -> OperatorHub*.
+.. Use the *Filter by keyword* box to search for `Red Hat Windows Machine Config Operator`.
+.. Click the *Red Hat Windows Machine Config Operator* tile. The Operator tile indicates it is installed.
+.. In the *Windows Machine Config Operator* descriptor page, click *Uninstall*.
+
+. Update {product-title} by following the steps in "Performing a Control Plane Only update."
+
+. Install the new WMCO version by following the steps in "Installing the Windows Machine Config Operator using the web console."
diff --git a/modules/wmco-upgrades.adoc b/modules/wmco-upgrades.adoc
index 7dbcba1d03a3..0053acd94e50 100644
--- a/modules/wmco-upgrades.adoc
+++ b/modules/wmco-upgrades.adoc
@@ -3,16 +3,16 @@
 // * windows_containers/windows-node-upgrades.adoc
 
 [id="wmco-upgrades_{context}"]
-= Windows Machine Config Operator upgrades
+= Windows Machine Config Operator updates
 
-When a new version of the Windows Machine Config Operator (WMCO) is released that is compatible with the current cluster version, the Operator is upgraded based on the upgrade channel and subscription approval strategy it was installed with when using the Operator Lifecycle Manager (OLM). The WMCO upgrade results in the Kubernetes components in the Windows machine being upgraded.
+When a new version of the Windows Machine Config Operator (WMCO) is released that is compatible with the current cluster version, the Operator is updated based on the update channel and subscription approval strategy it was installed with when using the Operator Lifecycle Manager (OLM). The WMCO update results in the Kubernetes components in the Windows machine being updated.
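To see which update channel and approval strategy the WMCO subscription uses, you can inspect it directly; the namespace shown is the default suggested at installation time and might differ in your cluster:

[source,terminal]
----
$ oc get subscription -n openshift-windows-machine-config-operator -o yaml
----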
 
 [NOTE]
 ====
-If you are upgrading to a new version of the WMCO and want to use cluster monitoring, you must have the `openshift.io/cluster-monitoring=true` label present in the WMCO namespace. If you add the label to a pre-existing WMCO namespace, and there are already Windows nodes configured, restart the WMCO pod to allow monitoring graphs to display.
+If you are updating to a new version of the WMCO and want to use cluster monitoring, you must have the `openshift.io/cluster-monitoring=true` label present in the WMCO namespace. If you add the label to a pre-existing WMCO namespace, and there are already Windows nodes configured, restart the WMCO pod to allow monitoring graphs to display.
 ====
 
-For a non-disruptive upgrade, the WMCO terminates the Windows machines configured by the previous version of the WMCO and recreates them using the current version. This is done by deleting the `Machine` object, which results in the drain and deletion of the Windows node. To facilitate an upgrade, the WMCO adds a version annotation to all the configured nodes. During an upgrade, a mismatch in version annotation results in the deletion and recreation of a Windows machine. To have minimal service disruptions during an upgrade, the WMCO only updates one Windows machine at a time.
+For a non-disruptive update, the WMCO terminates the Windows machines configured by the previous version of the WMCO and recreates them using the current version. This is done by deleting the `Machine` object, which results in the drain and deletion of the Windows node. To facilitate an update, the WMCO adds a version annotation to all the configured nodes. During an update, a mismatch in version annotation results in the deletion and recreation of a Windows machine. To have minimal service disruptions during an update, the WMCO only updates one Windows machine at a time.
 
 After the update, it is recommended that you set the `spec.os.name.windows` parameter in your workload pods. The WMCO uses this field to authoritatively identify the pod operating system for validation and is used to enforce Windows-specific pod security context constraints (SCCs).
diff --git a/windows_containers/windows-node-upgrades.adoc b/windows_containers/windows-node-upgrades.adoc
index 9dedf9241029..d8b4b5d74675 100644
--- a/windows_containers/windows-node-upgrades.adoc
+++ b/windows_containers/windows-node-upgrades.adoc
@@ -1,13 +1,37 @@
 :_mod-docs-content-type: ASSEMBLY
 [id="windows-node-upgrades"]
-= Windows node upgrades
+= Windows node updates
 include::_attributes/common-attributes.adoc[]
 :context: windows-node-upgrades
 
 toc::[]
 
-You can ensure your Windows nodes have the latest updates by upgrading the Windows Machine Config Operator (WMCO).
+You can ensure your Windows nodes have the latest updates by updating the Windows Machine Config Operator (WMCO).
+
+You can update the WMCO in any of the following scenarios:
+
+* Within the current version. For example, from <10.y.z> to <10.y.z+1>.
+* To a new, contiguous version. For example, from <10.y> to <10.y+1>.
+* From an EUS version to another EUS version by using a Control Plane Only update. For example, from <10.y> to <10.y+2>.
 
 include::modules/wmco-upgrades.adoc[leveloffset=+1]
 
-For more information on Operator upgrades using the Operator Lifecycle Manager (OLM), see xref:../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators].
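Before starting any of these update paths, you can confirm that the Windows nodes are ready and inspect their current versions; this is a generic health check rather than a step mandated by this procedure:

[source,terminal]
----
$ oc get nodes -l kubernetes.io/os=windows -o wide
----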
+[role="_additional-resources"] +.Additional resources +* xref:../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators]. + +include::modules/windows-upgrades-eus.adoc[leveloffset=+1] +include::modules/wmco-upgrades-eus-using-web-console.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources +* xref:../updating/updating_a_cluster/control-plane-only-update.adoc#control-plane-only-update[Performing a Control Plane Only update] +* xref:../windows_containers/enabling-windows-container-workloads.adoc#installing-wmco-using-web-console_enabling-windows-container-workloads[Installing the Windows Machine Config Operator using the web console] + +include::modules/wmco-upgrades-eus-using-cli.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources +* xref:../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operator-from-a-cluster-using-cli_olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster using the CLI] +* xref:../updating/updating_a_cluster/control-plane-only-update.adoc#control-plane-only-update[Performing a Control Plane Only update] +* xref:../windows_containers/enabling-windows-container-workloads.adoc#installing-wmco-using-cli_enabling-windows-container-workloads[Installing the Windows Machine Config Operator using the CLI] From 20efaa14bc712b31ba86cfc6d6d53062d6e918da Mon Sep 17 00:00:00 2001 From: JoeAldinger Date: Wed, 5 Feb 2025 12:57:02 -0500 Subject: [PATCH 284/669] OSDOCS-13291-main:removes TP for UDN --- .../about-user-defined-networks.adoc | 15 +-------------- 1 file changed, 1 insertion(+), 14 deletions(-) diff --git a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc index f022d64ff672..d844ca6266a2 100644 --- a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc +++ b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc @@ -6,8 +6,6 @@ include::_attributes/common-attributes.adoc[] toc::[] -:featurename: `UserDefinedNetwork` -include::snippets/technology-preview.adoc[] Before the implementation of user-defined networks (UDNs) in the default the OVN-Kubernetes CNI plugin for {product-title}, the Kubernetes Layer 3 topology was supported as the primary network, or _main_ network, to where all pods attach. The Kubernetes design principle requires that all pods communicate with each other by their IP addresses, and Kubernetes restricts inter-pod traffic according to the Kubernetes network policy. While the Kubernetes design is useful for simple deployments, the Layer 3 topology restricts customization of primary network segment configurations, especially for modern multi-tenant deployments. @@ -22,7 +20,7 @@ image::527-OpenShift-UDN-isolation-012025.png[The namespace isolation concept in Support for the Localnet topology on both primary and secondary networks will be added in a future version of {product-title}. ==== -Unlike a network attachment definition (NAD), which is only namespaced scope, a cluster administrator can use a UDN to create and define additional networks that span multiple namespaces at the cluster level by leveraging the `ClusterUserDefinedNetwork` custom resource (CR). Additionally, a cluster administrator or a cluster user can use a UDN to define additional networks at the namespace level with the `UserDefinedNetwork` CR. 
+A cluster administrator can use a user-defined network to create and define additional networks that span multiple namespaces at the cluster level by leveraging the `ClusterUserDefinedNetwork` custom resource (CR). Additionally, a cluster administrator or a cluster user can use a user-defined network to define additional networks at the namespace level with the `UserDefinedNetwork` CR. The following diagram shows tenant isolation that a cluster administrator created by defining a `ClusterUserDefinedNetwork` (CR) for each tenant. This network configuration allows a network to span across many namespaces. In the diagram, the `udn-1` disconnected network selects `namespace-1` and `namespace-2`, while the `udn-2` disconnected network selects `namespace-3` and `namespace-4`. A tenant acts as a disconnected network that is isolated from other tenants' networks. Pods from a namespace can communicate with pods in another namespace only if those namespaces exist in the same tenant network. @@ -30,17 +28,6 @@ image::528-OpenShift-multitenant-0225.png[The tenant isolation concept in a user The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a `ClusterUserDefinedNetwork` or `UserDefinedNetwork` custom resource, how to create the custom resource, and additional configuration details that might be relevant to your deployment. -// Looks like this may be out for 4.17, but in for 4.18 as of 8/19/24 -//. Ingress and egress support -//+ -//* **Support for ingress and egress traffic**: Cluster ingress and egress traffic is supported for both primary and secondary networks. -//* **Support for ingress and egress features**: User-defined networks support support the following ingress and egress features: -//+ -//** EgressQoS -//** EgressService -//** EgressIP -//** Load balancer and NodePort services, and services with external IPs. - //benefits of UDN include::modules/nw-udn-benefits.adoc[leveloffset=+1] From b0068bf3fdbb33a3e6fefeb6bf1b4ae37295c7b6 Mon Sep 17 00:00:00 2001 From: Jaromir Hradilek Date: Thu, 6 Feb 2025 21:29:53 +0100 Subject: [PATCH 285/669] CNV-42747: Added a note about OCP 4.16 This is an edited version of Oren's original content from PR #86933 and extended by me based on information kindly provided by him. --- ...virt-about-control-plane-only-updates.adoc | 2 ++ ...ates-during-control-plane-only-update.adoc | 23 ++++++++++++++++++- 2 files changed, 24 insertions(+), 1 deletion(-) diff --git a/modules/virt-about-control-plane-only-updates.adoc b/modules/virt-about-control-plane-only-updates.adoc index 62f423a6902b..3c8347805b00 100644 --- a/modules/virt-about-control-plane-only-updates.adoc +++ b/modules/virt-about-control-plane-only-updates.adoc @@ -12,6 +12,8 @@ After you update from the source EUS version to the next odd-numbered minor vers When the {product-title} update succeeds, the corresponding update for {VirtProductName} becomes available. You can now update {VirtProductName} to the target EUS version. +For more information about EUS versions, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat OpenShift Container Platform Life Cycle Policy]. 
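For example, you can confirm the current cluster version and the installed {VirtProductName} Operator version before planning the update path; both commands are generic checks:

[source,terminal]
----
$ oc get clusterversion
$ oc get csv -n openshift-cnv
----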
+ [id="preparing-to-update_{context}"] == Preparing to update diff --git a/modules/virt-preventing-workload-updates-during-control-plane-only-update.adoc b/modules/virt-preventing-workload-updates-during-control-plane-only-update.adoc index 63ff7b5d1189..74d7089b8145 100644 --- a/modules/virt-preventing-workload-updates-during-control-plane-only-update.adoc +++ b/modules/virt-preventing-workload-updates-during-control-plane-only-update.adoc @@ -8,6 +8,27 @@ When you update from one Extended Update Support (EUS) version to the next, you must manually disable automatic workload updates to prevent {VirtProductName} from migrating or evicting workloads during the update process. +[IMPORTANT] +==== +In {product-title} 4.16, the underlying {op-system-first} upgraded to version 9.4 of {op-system-base-full}. To operate correctly, all `virt-launcher` pods in the cluster need to use the same version of {op-system-base}. + +After upgrading to {product-title} 4.16 from an earlier version, re-enable workload updates in {VirtProductName} to allow `virt-launcher` pods to update. Before upgrading to the next {product-title} version, verify that all VMIs use up-to-date workloads: + +[source,terminal] +---- +$ oc get kv kubevirt-kubevirt-hyperconverged -o json -n openshift-cnv | jq .status.outdatedVirtualMachineInstanceWorkloads +---- + +If the previous command returns a value larger than `0`, list all VMIs with outdated `virt-launcher` pods and start live migration to update them to a new version: + +[source,terminal] +---- +$ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces +---- + +For the list of supported {product-title} releases and the {op-system-base} versions they use, see link:https://access.redhat.com/articles/6907891[{op-system-base} Versions Utilized by {op-system} and {product-title}]. +==== + .Prerequisites * You are running an EUS version of {product-title} and want to update to the next EUS version. You have not yet updated to the odd-numbered version in between. @@ -206,4 +227,4 @@ $ oc get vmim -A .Next steps -* You can now unpause the worker nodes' machine config pools. \ No newline at end of file +* Unpause the machine config pools for each compute node. From 71d4fdef5a779debf01e3a0d509c72a42bae7425 Mon Sep 17 00:00:00 2001 From: JoeAldinger Date: Tue, 18 Feb 2025 11:01:18 -0500 Subject: [PATCH 286/669] OCPBUGS-50855:adds warning for VRF and UDN --- .../k8s-nmstate-updating-node-network-config.adoc | 5 +++++ networking/metallb/metallb-frr-k8s.adoc | 5 ++++- 2 files changed, 9 insertions(+), 1 deletion(-) diff --git a/networking/k8s_nmstate/k8s-nmstate-updating-node-network-config.adoc b/networking/k8s_nmstate/k8s-nmstate-updating-node-network-config.adoc index daf4515db460..b4eabe6358d6 100644 --- a/networking/k8s_nmstate/k8s-nmstate-updating-node-network-config.adoc +++ b/networking/k8s_nmstate/k8s-nmstate-updating-node-network-config.adoc @@ -11,6 +11,11 @@ After you install the Kubernetes NMState Operator, you can use the Operator to o For more information about how to install the NMState Operator, see xref:../../networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator#k8s-nmstate-about-the-k8s-nmstate-operator[Kubernetes NMState Operator]. +[WARNING] +==== +When configuring Virtual Route Forwarding (VRF) users must change their VRFs to a table ID lower than 1000 as higher than 1000 is reserved for {product-title}. 
+==== + // Viewing the network state of a node by using the CLI include::modules/virt-viewing-network-state-of-node.adoc[leveloffset=+1] diff --git a/networking/metallb/metallb-frr-k8s.adoc b/networking/metallb/metallb-frr-k8s.adoc index 55ace527f794..a78390c1f44d 100644 --- a/networking/metallb/metallb-frr-k8s.adoc +++ b/networking/metallb/metallb-frr-k8s.adoc @@ -13,7 +13,10 @@ As a cluster administrator, you can use the `FRRConfiguration` custom resource ( image::695_OpenShift_MetalLB_FRRK8s_integration_0624.png[MetalLB integration with FRR] - +[WARNING] +==== +When configuring Virtual Route Forwarding (VRF) users must change their VRFs to a table ID lower than 1000 as higher than 1000 is reserved for {product-title}. +==== // FRR configurations include::modules/nw-metallb-frr-configurations.adoc[leveloffset=+1] From 9a4c258f905c26e4ec5b393e84a4d353af46c199 Mon Sep 17 00:00:00 2001 From: danielclowers Date: Thu, 13 Feb 2025 12:23:06 -0500 Subject: [PATCH 287/669] CNV#45704 4.18: Configuring a storage class for custom boot source updates --- ...efault-and-virt-default-storage-class.adoc | 63 +++++++++++++++++++ ...uring-storage-class-bootsource-update.adoc | 43 ++++--------- .../virt-automatic-bootsource-updates.adoc | 2 + 3 files changed, 78 insertions(+), 30 deletions(-) create mode 100644 modules/virt-configuring-default-and-virt-default-storage-class.adoc diff --git a/modules/virt-configuring-default-and-virt-default-storage-class.adoc b/modules/virt-configuring-default-and-virt-default-storage-class.adoc new file mode 100644 index 000000000000..fd86c44bc176 --- /dev/null +++ b/modules/virt-configuring-default-and-virt-default-storage-class.adoc @@ -0,0 +1,63 @@ +// Module included in the following assembly: +// +// * virt/storage/virt-automatic-bootsource-updates.adoc +// + +:_mod-docs-content-type: PROCEDURE +[id="virt-configuring-default-and-virt-default-storage-class_{context}"] += Configuring the default and virt-default storage classes + +A storage class determines how persistent storage is provisioned for workloads. In {VirtProductName}, the virt-default storage class takes precedence over the cluster default storage class and is used specifically for virtualization workloads. Only one storage class should be set as virt-default or cluster default at a time. If multiple storage classes are marked as default, the virt-default storage class overrides the cluster default. To ensure consistent behavior, configure only one storage class as the default for virtualization workloads. + +[IMPORTANT] +==== +Boot sources are created using the default storage class. When the default storage class changes, old boot sources are automatically updated using the new default storage class. If your cluster does not have a default storage class, you must define one. + +If boot source images were stored as volume snapshots and both the cluster default and virt-default storage class have been unset, the volume snapshots are cleaned up and new data volumes will be created. However the newly created data volumes will not start importing until a default storage class is set. +==== + +.Procedure + +. Patch the current virt-default or a cluster default storage class to false: +.. Identify all storage classes currently marked as virt-default by running the following command: ++ +[source,terminal] +---- +$ oc get sc -o json| jq '.items[].metadata|select(.annotations."storageclass.kubevirt.io/is-default-virt-class"=="true")|.name' +---- ++ +.. 
For each storage class returned, remove the virt-default annotation by running the following command: ++ +[source,terminal] +---- +$ oc patch storageclass -p '{"metadata": {"annotations": {"storageclass.kubevirt.io/is-default-virt-class": "false"}}}' +---- ++ +.. Identify all storage classes currently marked as cluster default by running the following command: ++ +[source,terminal] +---- +$ oc get sc -o json| jq '.items[].metadata|select(.annotations."storageclass.kubernetes.io/is-default-class"=="true")|.name' +---- ++ +.. For each storage class returned, remove the cluster default annotation by running the following command: ++ +[source,terminal] +---- +$ oc patch storageclass -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' +---- + +. Set a new default storage class: +.. Assign the virt-default role to a storage class by running the following command: ++ +[source,terminal] +---- +$ oc patch storageclass -p '{"metadata": {"annotations": {"storageclass.kubevirt.io/is-default-virt-class": "true"}}}' +---- ++ +.. Alternatively, assign the cluster default role to a storage class by running the following command: ++ +[source,terminal] +---- +$ oc patch storageclass -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' +---- \ No newline at end of file diff --git a/modules/virt-configuring-storage-class-bootsource-update.adoc b/modules/virt-configuring-storage-class-bootsource-update.adoc index 68538de80283..149f8cb39a88 100644 --- a/modules/virt-configuring-storage-class-bootsource-update.adoc +++ b/modules/virt-configuring-storage-class-bootsource-update.adoc @@ -5,13 +5,13 @@ :_mod-docs-content-type: PROCEDURE [id="virt-configuring-storage-class-bootsource-update_{context}"] -= Configuring a storage class for custom boot source updates += Configuring a storage class for boot source images -You can override the default storage class by editing the `HyperConverged` custom resource (CR). +You can configure a specific storage class in the `HyperConverged` resource. [IMPORTANT] ==== -Boot sources are created from storage using the default storage class. If your cluster does not have a default storage class, you must define one before configuring automatic updates for custom boot sources. +To ensure stable behavior and avoid unnecessary re-importing, you can specify the `storageClassName` in the `dataImportCronTemplates` section of the `HyperConverged` resource. ==== .Procedure @@ -23,7 +23,7 @@ Boot sources are created from storage using the default storage class. If your c $ oc edit hyperconverged kubevirt-hyperconverged -n {CNVNamespace} ---- -. Define a new storage class by entering a value in the `storageClassName` field: +. Add the `dataImportCronTemplate` to the spec section of the `HyperConverged` resource and set the `storageClassName`: + [source,yaml] ---- @@ -34,11 +34,12 @@ metadata: spec: dataImportCronTemplates: - metadata: - name: rhel8-image-cron + name: rhel9-image-cron spec: template: spec: - storageClassName: <1> + storage: + storageClassName: <1> schedule: "0 */12 * * *" <2> managedDataSource: <3> # ... @@ -54,36 +55,18 @@ For the custom image to be detected as an available boot source, the value of th ---- -- -. Remove the `storageclass.kubernetes.io/is-default-class` annotation from the current default storage class. -.. 
Retrieve the name of the current default storage class by running the following command: -+ -[source,terminal] ----- -$ oc get storageclass ----- -+ -.Example output -[source,text] ----- -NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE -csi-manila-ceph manila.csi.openstack.org Delete Immediate false 11d -hostpath-csi-basic (default) kubevirt.io.hostpath-provisioner Delete WaitForFirstConsumer false 11d <1> ----- -+ -<1> In this example, the current default storage class is named `hostpath-csi-basic`. +. Wait for the HyperConverged Operator (HCO) and Scheduling, Scale, and Performance (SSP) resources to complete reconciliation. -.. Remove the annotation from the current default storage class by running the following command: +. Delete any outdated `DataVolume` and `VolumeSnapshot` objects from the `openshift-virtualization-os-images` namespace by running the following command. + [source,terminal] ---- -$ oc patch storageclass -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' <1> +$ oc delete DataVolume,VolumeSnapshot -n openshift-virtualization-os-images --selector=cdi.kubevirt.io/dataImportCron ---- -<1> Replace `` with the `storageClassName` value of the default storage class. -. Set the new storage class as the default by running the following command: +. Wait for all `DataSource` objects to reach a "Ready - True" status. Data sources can reference either a PersistentVolumeClaim (PVC) or a VolumeSnapshot. To check the expected source format, run the following command: + [source,terminal] ---- -$ oc patch storageclass -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' <1> ----- -<1> Replace `` with the `storageClassName` value that you added to the `HyperConverged` CR. +$ oc get storageprofile -o json | jq .status.dataImportCronSourceFormat +---- \ No newline at end of file diff --git a/virt/storage/virt-automatic-bootsource-updates.adoc b/virt/storage/virt-automatic-bootsource-updates.adoc index 9ffba454afd5..d97a9cd2d9ae 100644 --- a/virt/storage/virt-automatic-bootsource-updates.adoc +++ b/virt/storage/virt-automatic-bootsource-updates.adoc @@ -41,6 +41,8 @@ You must configure a storage profile. 
Otherwise, the cluster cannot receive auto ==== endif::openshift-rosa,openshift-dedicated[] +include::modules/virt-configuring-default-and-virt-default-storage-class.adoc[leveloffset=+2] + include::modules/virt-configuring-storage-class-bootsource-update.adoc[leveloffset=+2] include::modules/virt-autoupdate-custom-bootsource.adoc[leveloffset=+2] From 18454baa0a5431cb9e5207a52c202b45b86920ab Mon Sep 17 00:00:00 2001 From: JoeAldinger Date: Tue, 3 Sep 2024 17:13:00 -0400 Subject: [PATCH 288/669] OSDOCS-11461:support matrix module for UDN --- ...-udn-support-matrix-primary-secondary.adoc | 148 ++++++++++++++++++ .../understanding-multiple-networks.adoc | 8 +- 2 files changed, 155 insertions(+), 1 deletion(-) create mode 100644 modules/nw-udn-support-matrix-primary-secondary.adoc diff --git a/modules/nw-udn-support-matrix-primary-secondary.adoc b/modules/nw-udn-support-matrix-primary-secondary.adoc new file mode 100644 index 000000000000..5cf74499ed40 --- /dev/null +++ b/modules/nw-udn-support-matrix-primary-secondary.adoc @@ -0,0 +1,148 @@ +//module included in the following assembly: +// +// *networkking/multiple_networks/understanding-user-defined-networks.adoc + +:_mod-docs-content-type: CONCEPT +[id="support-matrix-for-udn-nad_{context}"] += UserDefinedNetwork and NetworkAttachmentDefinition support matrix + +The `UserDefinedNetwork` and `NetworkAttachmentDefinition` custom resources (CRs) provide cluster administrators and users the ability to create customizable network configurations and define their own network topologies, ensure network isolation, manage IP addressing for workloads, and configure advanced network features. A third CR, `ClusterUserDefinedNetwork`, is also available, which allows administrators the ability to create and define additional networks spanning multiple namespaces at the cluster level. + +User-defined networks and network attachment definitions can serve as both the primary and secondary network interface, and each support `layer2` and `layer3` topologies; a third network topology, Localnet, is also supported with network attachment definitions with secondary networks. + +[NOTE] +==== +As of {product-title} 4.18, the Localnet topology is unavailable for use with the `UserDefinedNetwork` and `ClusterUserDefinedNetwork` CRs. It is only available for `NetworkAttachmentDefinition` CRs that leverage secondary networks. +==== + +The following section highlights the supported features of the `UserDefinedNetwork` and `NetworkAttachmentDefinition` CRs when they are used as either the primary or secondary network. A separate table for the `ClusterUserDefinedNetwork` CR is also included. + +.Primary network support matrix for `UserDefinedNetwork` and `NetworkAttachmentDefinition` CRs +[cols="1a,1a,1a, options="header"] +|=== +^| Network feature ^| Layer2 topology ^|Layer3 topology + +^| east-west traffic +^| ✓ +^| ✓ + +^| north-south traffic +^| ✓ +^| ✓ + +^| Persistent IPs +^| ✓ +^| X + +^| Services +^| ✓ +^| ✓ + +^| `EgressIP` resource +^| ✓ +^| ✓ + +^| Multicast ^[1]^ +^| X +^| ✓ + +^| `NetworkPolicy` resource ^[2]^ +^| ✓ +^| ✓ + +^| `MultinetworkPolicy` resource +^| X +^| X + +|=== +1. Multicast must be enabled in the namespace, and it is only available between OVN-Kubernetes network pods. For more information about multicast, see "Enabling multicast for a project". +2. When creating a `UserDefinedNetwork` CR with a primary network type, network policies must be created _after_ the `UserDefinedNetwork` CR. 
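As a sketch of the namespace-scoped case summarized in the preceding matrix, the following `UserDefinedNetwork` object defines a primary Layer2 network; the name, namespace, and subnet are hypothetical:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-1
  namespace: namespace-1
spec:
  topology: Layer2
  layer2:
    role: Primary # serves as the primary network for pods in this namespace
    subnets:
      - "10.0.0.0/24"
----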
+
+.Secondary network support matrix for `UserDefinedNetwork` and `NetworkAttachmentDefinition` CRs
+[cols="1a,1a,1a,1a", options="header"]
+|===
+^| Network feature ^| Layer2 topology ^| Layer3 topology ^| Localnet topology ^[1]^
+
+^| east-west traffic
+^| ✓
+^| ✓
+^| ✓ (`NetworkAttachmentDefinition` CR only)
+
+^| north-south traffic
+^| X
+^| X
+^| ✓
+
+^| Persistent IPs
+^| ✓
+^| X
+^| ✓ (`NetworkAttachmentDefinition` CR only)
+
+^| Services
+^| X
+^| X
+^| X
+
+^| `EgressIP` resource
+^| X
+^| X
+^| X
+
+^| Multicast
+^| X
+^| X
+^| X
+
+^| `NetworkPolicy` resource
+^| X
+^| X
+^| X
+
+^| `MultinetworkPolicy` resource
+^| ✓
+^| ✓
+^| ✓ (`NetworkAttachmentDefinition` CR only)
+
+|===
+1. The Localnet topology is unavailable for use with the `UserDefinedNetwork` CR. It is only supported on secondary networks for `NetworkAttachmentDefinition` CRs.
+
+.Support matrix for `ClusterUserDefinedNetwork` CRs
+[cols="1a,1a,1a", options="header"]
+|===
+^| Network feature ^| Layer2 topology ^| Layer3 topology
+
+^| east-west traffic
+^| ✓
+^| ✓
+
+^| north-south traffic
+^| ✓
+^| ✓
+
+^| Persistent IPs
+^| ✓
+^| X
+
+^| Services
+^| ✓
+^| ✓
+
+^| `EgressIP` resource
+^| ✓
+^| ✓
+
+^| Multicast ^[1]^
+^| X
+^| ✓
+
+^| `MultinetworkPolicy` resource
+^| X
+^| X
+
+^| `NetworkPolicy` resource ^[2]^
+^| ✓
+^| ✓
+
+|===
+1. Multicast must be enabled in the namespace, and it is only available between OVN-Kubernetes network pods. For more information, see "About multicast".
+2. When creating a `ClusterUserDefinedNetwork` CR with a primary network type, network policies must be created _after_ the `ClusterUserDefinedNetwork` CR.
\ No newline at end of file
diff --git a/networking/multiple_networks/understanding-multiple-networks.adoc b/networking/multiple_networks/understanding-multiple-networks.adoc
index cb0d3e0d2bbe..1e69699871dd 100644
--- a/networking/multiple_networks/understanding-multiple-networks.adoc
+++ b/networking/multiple_networks/understanding-multiple-networks.adoc
@@ -67,4 +67,10 @@ networks in your cluster:
 
 * *TAP*: xref:../../networking/multiple_networks/secondary_networks/creating-secondary-nwt-other-cni.adoc#nw-multus-tap-object_configuring-additional-network-cni[Configure a TAP-based additional network] to create a tap device inside the container namespace. A TAP device enables user space programs to send and receive network packets.
 
 * *SR-IOV*: xref:../../networking/hardware_networks/about-sriov.adoc#about-sriov[Configure an SR-IOV based additional network] to allow pods to attach to a virtual function (VF) interface on SR-IOV capable hardware on the host system.
+ +include::modules/nw-udn-support-matrix-primary-secondary.adoc[leveloffset=+1] + +.Additional resources + +* xref:../../networking/ovn_kubernetes_network_provider/enabling-multicast.adoc#nw-ovn-kubernetes-enabling-multicast[Enabling multicast for a project] From 74457a346f20ebcc03fb09af839cfa3b290c91ec Mon Sep 17 00:00:00 2001 From: Jaromir Hradilek Date: Thu, 13 Feb 2025 18:54:58 +0100 Subject: [PATCH 289/669] CNV-47151: Added information about vTPM device snapshots --- modules/virt-about-vtpm-devices.adoc | 7 +++---- .../creating_vms_advanced_web/virt-cloning-vms.adoc | 7 ++++++- virt/managing_vms/virt-using-vtpm-devices.adoc | 6 ++++++ 3 files changed, 15 insertions(+), 5 deletions(-) diff --git a/modules/virt-about-vtpm-devices.adoc b/modules/virt-about-vtpm-devices.adoc index 3e95fcf7aa14..46d926fd1b0d 100644 --- a/modules/virt-about-vtpm-devices.adoc +++ b/modules/virt-about-vtpm-devices.adoc @@ -8,14 +8,10 @@ A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip. - You can use a vTPM device with any operating system, but Windows 11 requires the presence of a TPM chip to install or boot. A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip. -If you do not enable vTPM, then the VM does not recognize a TPM device, even if -the node has one. - A vTPM device also protects virtual machines by storing secrets without physical hardware. {VirtProductName} supports persisting vTPM device state by using Persistent Volume Claims (PVCs) for VMs. You must specify the storage class to be used by the PVC by setting the `vmStateStorageClass` attribute in the `HyperConverged` custom resource (CR): [source,yaml] @@ -29,3 +25,6 @@ spec: # ... ---- +If you do not enable vTPM, then the VM does not recognize a TPM device, even if +the node has one. + diff --git a/virt/creating_vms_advanced/creating_vms_advanced_web/virt-cloning-vms.adoc b/virt/creating_vms_advanced/creating_vms_advanced_web/virt-cloning-vms.adoc index 095458da75f7..6afd2f6f0c71 100644 --- a/virt/creating_vms_advanced/creating_vms_advanced_web/virt-cloning-vms.adoc +++ b/virt/creating_vms_advanced/creating_vms_advanced_web/virt-cloning-vms.adoc @@ -8,6 +8,11 @@ toc::[] You can clone virtual machines (VMs) or create new VMs from snapshots. +[IMPORTANT] +==== +Cloning a VM with a vTPM device attached to it or creating a new VM from its snapshot is not supported. +==== + include::modules/virt-cloning-vm-web.adoc[leveloffset=+1] include::modules/virt-creating-vm-from-snapshot-web.adoc[leveloffset=+1] @@ -16,4 +21,4 @@ include::modules/virt-creating-vm-from-snapshot-web.adoc[leveloffset=+1] [id="additional-resources_{context}"] == Additional resources -* xref:../../../virt/creating_vms_advanced/creating_vms_cli/virt-creating-vms-by-cloning-pvcs.adoc#virt-creating-vms-by-cloning-pvcs[Creating VMs by cloning PVCs] \ No newline at end of file +* xref:../../../virt/creating_vms_advanced/creating_vms_cli/virt-creating-vms-by-cloning-pvcs.adoc#virt-creating-vms-by-cloning-pvcs[Creating VMs by cloning PVCs] diff --git a/virt/managing_vms/virt-using-vtpm-devices.adoc b/virt/managing_vms/virt-using-vtpm-devices.adoc index b2692c0ddfba..ff6903043ee6 100644 --- a/virt/managing_vms/virt-using-vtpm-devices.adoc +++ b/virt/managing_vms/virt-using-vtpm-devices.adoc @@ -10,5 +10,11 @@ Add a virtual Trusted Platform Module (vTPM) device to a new or existing virtual machine by editing the `VirtualMachine` (VM) or `VirtualMachineInstance` (VMI) manifest. 
+[IMPORTANT] +==== +With {VirtProductName} 4.18 and newer, you can xref:../../virt/managing_vms/virt-exporting-vms.adoc#virt-exporting-vms[export virtual machines] (VMs) with attached vTPM devices, xref:../../virt/backup_restore/virt-backup-restore-snapshots.adoc#creating-snapshots_virt-backup-restore-snapshots[create snapshots of these VMs], and xref:../../virt/backup_restore/virt-backup-restore-snapshots.adoc#restoring-vms-from-snapshots_virt-backup-restore-snapshots[restore VMs from these snapshots]. However, cloning a VM with a vTPM device attached to it or creating a new VM from its snapshot is not supported. +==== + include::modules/virt-about-vtpm-devices.adoc[leveloffset=+1] + include::modules/virt-adding-vtpm-to-vm.adoc[leveloffset=+1] From 92d9370377a0b26579fc1648be847d7db1848da7 Mon Sep 17 00:00:00 2001 From: Alex Dellapenta Date: Tue, 18 Feb 2025 21:33:42 -0700 Subject: [PATCH 290/669] OSDK 1.38.0 migration docs for OCP 4.18 GA --- _attributes/common-attributes.adoc | 4 +- ...ible-inside-operator-logs-full-result.adoc | 2 +- modules/osdk-updating-1361-to-138.adoc | 419 ++++++++++++++++++ .../osdk-ansible-updating-projects.adoc | 8 +- .../golang/osdk-golang-updating-projects.adoc | 4 +- .../helm/osdk-helm-updating-projects.adoc | 8 +- 6 files changed, 429 insertions(+), 16 deletions(-) create mode 100644 modules/osdk-updating-1361-to-138.adoc diff --git a/_attributes/common-attributes.adoc b/_attributes/common-attributes.adoc index 116bfd051645..828ede7e0d2d 100644 --- a/_attributes/common-attributes.adoc +++ b/_attributes/common-attributes.adoc @@ -223,9 +223,9 @@ endif::[] :lvms-first: Logical Volume Manager (LVM) Storage :lvms: LVM Storage //Operator SDK version -:osdk_ver: 1.36.1 +:osdk_ver: 1.38.0 //Operator SDK version that shipped with the previous OCP 4.x release -:osdk_ver_n1: 1.31.0 +:osdk_ver_n1: 1.36.1 //Version-agnostic OLM :olm-first: Operator Lifecycle Manager (OLM) :olm: OLM diff --git a/modules/osdk-ansible-inside-operator-logs-full-result.adoc b/modules/osdk-ansible-inside-operator-logs-full-result.adoc index de8806af831a..d346d28e1e6f 100644 --- a/modules/osdk-ansible-inside-operator-logs-full-result.adoc +++ b/modules/osdk-ansible-inside-operator-logs-full-result.adoc @@ -10,7 +10,7 @@ You can set the environment variable `ANSIBLE_DEBUG_LOGS` to `True` to enable ch .Procedure -* Edit the `config/manager/manager.yaml` and `config/default/manager_auth_proxy_patch.yaml` files to include the following configuration: +* Edit the `config/manager/manager.yaml` and `config/default/manager_metrics_patch.yaml` files to include the following configuration: + [source,terminal] ---- diff --git a/modules/osdk-updating-1361-to-138.adoc b/modules/osdk-updating-1361-to-138.adoc new file mode 100644 index 000000000000..ea31057a34af --- /dev/null +++ b/modules/osdk-updating-1361-to-138.adoc @@ -0,0 +1,419 @@ +// Module included in the following assemblies: +// +// * operators/operator_sdk/golang/osdk-golang-updating-projects.adoc +// * operators/operator_sdk/ansible/osdk-ansible-updating-projects.adoc +// * operators/operator_sdk/helm/osdk-helm-updating-projects.adoc + +ifeval::["{context}" == "osdk-golang-updating-projects"] +:golang: +:type: Go +endif::[] +ifeval::["{context}" == "osdk-ansible-updating-projects"] +:ansible: +:type: Ansible +endif::[] +ifeval::["{context}" == "osdk-helm-updating-projects"] +:helm: +:type: Helm +endif::[] + +:_mod-docs-content-type: PROCEDURE +[id="osdk-upgrading-projects_{context}"] += Updating {type}-based Operator projects for Operator SDK 
{osdk_ver}
+
+The following procedure updates an existing {type}-based Operator project for compatibility with {osdk_ver}.
+
+.Prerequisites
+
+* Operator SDK {osdk_ver} installed
+* An Operator project created or maintained with Operator SDK {osdk_ver_n1}
+
+.Procedure
+
+// The following few steps should be retained/updated for each new migration procedure, as they're just bumping the OSDK version for each language type.
+
+. Edit the Makefile of your Operator project to update the Operator SDK version to {osdk_ver}, as shown in the following example:
++
+.Example Makefile
+[source,make,subs="attributes+"]
+----
+# Set the Operator SDK version to use. By default, what is installed on the system is used.
+# This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
+OPERATOR_SDK_VERSION ?= v{osdk_ver} <1>
+----
+<1> Change the version from `{osdk_ver_n1}` to `{osdk_ver}`.
+ifdef::helm[]
+. Edit the Dockerfile of your Operator project to update the `ose-helm-rhel9-operator` image tag to `{product-version}`, as shown in the following example:
++
+.Example Dockerfile
+[source,docker,subs="attributes+"]
+----
+FROM registry.redhat.io/openshift4/ose-helm-rhel9-operator:v{product-version}
+----
+endif::helm[]
+
+ifdef::ansible[]
+. Edit the Dockerfile of your Operator project to update the `ose-ansible-operator` image tag to `{product-version}`, as shown in the following example:
++
+.Example Dockerfile
+[source,docker,subs="attributes+"]
+----
+FROM registry.redhat.io/openshift4/ose-ansible-operator:v{product-version}
+----
+endif::ansible[]
+
+. You must upgrade the Kubernetes versions in your Operator project to use 1.30 and Kubebuilder v4.
++
+[TIP]
+====
+This update includes complex scaffolding changes due to the removal of link:https://github.com/brancz/kube-rbac-proxy[kube-rbac-proxy]. If these migrations become difficult to follow, scaffold a new sample project for comparison.
+====
+
+ifdef::helm,ansible[]
+.. Update the Kustomize version in your Makefile by making the following changes:
++
+[source,diff]
+----
+- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_$(OS)_$(ARCH).tar.gz | \
++ curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.4.2/kustomize_v5.4.2_$(OS)_$(ARCH).tar.gz | \
+----
+endif::helm,ansible[]
+
+ifdef::golang[]
+.. Update your `go.mod` file with the following changes to upgrade your dependencies:
++
+[source,go]
+----
+go 1.22.0
+
+github.com/onsi/ginkgo/v2 v2.17.1
+github.com/onsi/gomega v1.32.0
+k8s.io/api v0.30.1
+k8s.io/apimachinery v0.30.1
+k8s.io/client-go v0.30.1
+sigs.k8s.io/controller-runtime v0.18.4
+----
+
+.. Download the upgraded dependencies by running the following command:
++
+[source,terminal]
+----
+$ go mod tidy
+----
+
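+.. Optionally, verify that the project compiles with the upgraded dependencies before continuing. This verification step is an editorial sketch and is not part of the original migration procedure:
++
+[source,terminal]
+----
+$ go build ./...
+----
+
+.. 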
Update your Makefile with the following changes:
++
+[source,diff]
+----
+- ENVTEST_K8S_VERSION = 1.29.0
++ ENVTEST_K8S_VERSION = 1.30.0
+----
++
+[source,diff]
+----
+- KUSTOMIZE ?= $(LOCALBIN)/kustomize-$(KUSTOMIZE_VERSION)
+- CONTROLLER_GEN ?= $(LOCALBIN)/controller-gen-$(CONTROLLER_TOOLS_VERSION)
+- ENVTEST ?= $(LOCALBIN)/setup-envtest-$(ENVTEST_VERSION)
+- GOLANGCI_LINT = $(LOCALBIN)/golangci-lint-$(GOLANGCI_LINT_VERSION)
++ KUSTOMIZE ?= $(LOCALBIN)/kustomize
++ CONTROLLER_GEN ?= $(LOCALBIN)/controller-gen
++ ENVTEST ?= $(LOCALBIN)/setup-envtest
++ GOLANGCI_LINT = $(LOCALBIN)/golangci-lint
+----
++
+[source,diff]
+----
+- KUSTOMIZE_VERSION ?= v5.3.0
+- CONTROLLER_TOOLS_VERSION ?= v0.14.0
+- ENVTEST_VERSION ?= release-0.17
+- GOLANGCI_LINT_VERSION ?= v1.57.2
++ KUSTOMIZE_VERSION ?= v5.4.2
++ CONTROLLER_TOOLS_VERSION ?= v0.15.0
++ ENVTEST_VERSION ?= release-0.18
++ GOLANGCI_LINT_VERSION ?= v1.59.1
+----
++
+[source,diff]
+----
+- $(call go-install-tool,$(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,${GOLANGCI_LINT_VERSION})
++ $(call go-install-tool,$(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,$(GOLANGCI_LINT_VERSION))
+----
++
+[source,diff]
+----
+- @[ -f $(1) ] || { \
++ @[ -f "$(1)-$(3)" ] || { \
+  echo "Downloading $${package}" ;\
++ rm -f $(1) || true ;\
+- mv "$$(echo "$(1)" | sed "s/-$(3)$$//")" $(1) ;\
+- }
++ mv $(1) $(1)-$(3) ;\
++ } ;\
++ ln -sf $(1)-$(3) $(1)
+----
+
+.. Update your `.golangci.yml` file with the following changes:
++
+[source,diff]
+----
+-  - exportloopref
++  - ginkgolinter
+   - prealloc
++  - revive
++
++linters-settings:
++  revive:
++    rules:
++      - name: comment-spacings
+----
+
+.. Update your Dockerfile with the following changes:
++
+[source,diff]
+----
+- FROM golang:1.21 AS builder
++ FROM golang:1.22 AS builder
+----
+
+.. Update your `main.go` file with the following changes:
++
+[source,diff]
+----
+  "sigs.k8s.io/controller-runtime/pkg/log/zap"
++ "sigs.k8s.io/controller-runtime/pkg/metrics/filters"
+
+  var enableHTTP2 bool
+- flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
++ var tlsOpts []func(*tls.Config)
++ flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
++   "Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
+  flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
+  flag.BoolVar(&enableLeaderElection, "leader-elect", false,
+    "Enable leader election for controller manager. "+
+    "Enabling this will ensure there is only one active controller manager.")
+- flag.BoolVar(&secureMetrics, "metrics-secure", false,
+-   "If set the metrics endpoint is served securely")
++ flag.BoolVar(&secureMetrics, "metrics-secure", true,
++   "If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.")
+
+- tlsOpts := []func(*tls.Config){}
+
++ // Metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server.
++ // More info: ++ // - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.18.4/pkg/metrics/server ++ // - https://book.kubebuilder.io/reference/metrics.html ++ metricsServerOptions := metricsserver.Options{ ++ BindAddress: metricsAddr, ++ SecureServing: secureMetrics, ++ // TODO(user): TLSOpts is used to allow configuring the TLS config used for the server. If certificates are ++ // not provided, self-signed certificates will be generated by default. This option is not recommended for ++ // production environments as self-signed certificates do not offer the same level of trust and security ++ // as certificates issued by a trusted Certificate Authority (CA). The primary risk is potentially allowing ++ // unauthorized access to sensitive metrics data. Consider replacing with CertDir, CertName, and KeyName ++ // to provide certificates, ensuring the server communicates using trusted and secure certificates. ++ TLSOpts: tlsOpts, ++ } ++ ++ if secureMetrics { ++ // FilterProvider is used to protect the metrics endpoint with authn/authz. ++ // These configurations ensure that only authorized users and service accounts ++ // can access the metrics endpoint. The RBAC are configured in 'config/rbac/kustomization.yaml'. More info: ++ // https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.18.4/pkg/metrics/filters#WithAuthenticationAndAuthorization ++ metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization ++ } ++ + mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{ +- Scheme: scheme, +- Metrics: metricsserver.Options{ +- BindAddress: metricsAddr, +- SecureServing: secureMetrics, +- TLSOpts: tlsOpts, +- }, ++ Scheme: scheme, ++ Metrics: metricsServerOptions, +---- +endif::golang[] + +.. Update your `config/default/kustomization.yaml` file with the following changes: ++ +[source,diff] +---- + # [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. + #- ../prometheus ++ # [METRICS] Expose the controller manager metrics service. ++ - metrics_service.yaml + ++ # Uncomment the patches line if you enable Metrics, and/or are using webhooks and cert-manager + patches: +- # Protect the /metrics endpoint by putting it behind auth. +- # If you want your controller-manager to expose the /metrics +- # endpoint w/o any authn/z, please comment the following line. +- - path: manager_auth_proxy_patch.yaml ++ # [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443. ++ # More info: https://book.kubebuilder.io/reference/metrics ++ - path: manager_metrics_patch.yaml ++ target: ++ kind: Deployment +---- + +.. Remove the `config/default/manager_auth_proxy_patch.yaml` and `config/default/manager_config_patch.yaml` files. + +.. Create a `config/default/manager_metrics_patch.yaml` file with the following content: ++ +[source,text,subs="attributes+"] +---- +# This patch adds the args to allow exposing the metrics endpoint using HTTPS +- op: add + path: /spec/template/spec/containers/0/args/0 + value: --metrics-bind-address=:8443 +ifdef::helm,ansible[] +# This patch adds the args to allow securing the metrics endpoint +- op: add + path: /spec/template/spec/containers/0/args/0 + value: --metrics-secure +# This patch adds the args to allow RBAC-based authn/authz the metrics endpoint +- op: add + path: /spec/template/spec/containers/0/args/0 + value: --metrics-require-rbac +endif::helm,ansible[] +---- + +.. 
Create a `config/default/metrics_service.yaml` file with the following content: ++ +[source,yaml] +---- +apiVersion: v1 +kind: Service +metadata: + labels: + control-plane: controller-manager + app.kubernetes.io/name: + app.kubernetes.io/managed-by: kustomize + name: controller-manager-metrics-service + namespace: system +spec: + ports: + - name: https + port: 8443 + protocol: TCP + targetPort: 8443 + selector: + control-plane: controller-manager +---- + +.. Update your `config/manager/manager.yaml` file with the following changes: ++ +[source,diff,subs="attributes+"] +---- + - --leader-elect +ifdef::golang,helm[] ++ - --health-probe-bind-address=:8081 +endif::[] +ifdef::ansible[] ++ - --health-probe-bind-address=:6789 +endif::[] +---- + +.. Update your `config/prometheus/monitor.yaml` file with the following changes: ++ +[source,diff] +---- + - path: /metrics +- port: https ++ port: https # Ensure this is the name of the port that exposes HTTPS metrics + tlsConfig: ++ # TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables ++ # certificate verification. This poses a significant security risk by making the system vulnerable to ++ # man-in-the-middle attacks, where an attacker could intercept and manipulate the communication between ++ # Prometheus and the monitored services. This could lead to unauthorized access to sensitive metrics data, ++ # compromising the integrity and confidentiality of the information. ++ # Please use the following options for secure configurations: ++ # caFile: /etc/metrics-certs/ca.crt ++ # certFile: /etc/metrics-certs/tls.crt ++ # keyFile: /etc/metrics-certs/tls.key + insecureSkipVerify: true +---- + +.. Remove the following files from the `config/rbac/` directory: ++ +-- +* `auth_proxy_client_clusterrole.yaml` +* `auth_proxy_role.yaml` +* `auth_proxy_role_binding.yaml` +* `auth_proxy_service.yaml` +-- + +.. Update your `config/rbac/kustomization.yaml` file with the following changes: ++ +[source,diff] +---- + - leader_election_role_binding.yaml +- # Comment the following 4 lines if you want to disable +- # the auth proxy (https://github.com/brancz/kube-rbac-proxy) +- # which protects your /metrics endpoint. +- - auth_proxy_service.yaml +- - auth_proxy_role.yaml +- - auth_proxy_role_binding.yaml +- - auth_proxy_client_clusterrole.yaml ++ # The following RBAC configurations are used to protect ++ # the metrics endpoint with authn/authz. These configurations ++ # ensure that only authorized users and service accounts ++ # can access the metrics endpoint. Comment the following ++ # permissions if you want to disable this protection. ++ # More info: https://book.kubebuilder.io/reference/metrics.html ++ - metrics_auth_role.yaml ++ - metrics_auth_role_binding.yaml ++ - metrics_reader_role.yaml +---- + +.. Create a `config/rbac/metrics_auth_role_binding.yaml` file with the following content: ++ +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: metrics-auth-rolebinding +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: metrics-auth-role +subjects: + - kind: ServiceAccount + name: controller-manager + namespace: system +---- + +.. 
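Create a `config/rbac/metrics_auth_role.yaml` file with the following content. This step does not appear in the original patch; the manifest is a sketch based on the standard Kubebuilder v4 scaffold, which the `metrics-auth-rolebinding` and the updated `config/rbac/kustomization.yaml` both reference, so verify it against your generated project:
++
+[source,yaml]
+----
+# Sketch: standard Kubebuilder v4 scaffold content; verify against your project.
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: metrics-auth-role
+rules:
+- apiGroups:
+  - authentication.k8s.io
+  resources:
+  - tokenreviews
+  verbs:
+  - create
+- apiGroups:
+  - authorization.k8s.io
+  resources:
+  - subjectaccessreviews
+  verbs:
+  - create
+----
+
+.. 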
Create a `config/rbac/metrics_reader_role.yaml` file with the following content: ++ +[source,yaml] +---- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: metrics-reader +rules: +- nonResourceURLs: + - "/metrics" + verbs: + - get +---- + +ifeval::["{context}" == "osdk-golang-updating-projects"] +:!golang: +:!type: +endif::[] +ifeval::["{context}" == "osdk-ansible-updating-projects"] +:!ansible: +:!type: +endif::[] +ifeval::["{context}" == "osdk-helm-updating-projects"] +:!helm: +:!type: +endif::[] \ No newline at end of file diff --git a/operators/operator_sdk/ansible/osdk-ansible-updating-projects.adoc b/operators/operator_sdk/ansible/osdk-ansible-updating-projects.adoc index ddb935b12d00..f0e5f17a51e3 100644 --- a/operators/operator_sdk/ansible/osdk-ansible-updating-projects.adoc +++ b/operators/operator_sdk/ansible/osdk-ansible-updating-projects.adoc @@ -12,15 +12,11 @@ include::snippets/osdk-deprecation.adoc[] However, to ensure your existing Operator projects maintain compatibility with Operator SDK {osdk_ver}, update steps are required for the associated breaking changes introduced since {osdk_ver_n1}. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with {osdk_ver_n1}. -include::modules/osdk-updating-131-to-1361.adoc[leveloffset=+1] +include::modules/osdk-updating-1361-to-138.adoc[leveloffset=+1] [id="additional-resources_osdk-ansible-upgrading-projects"] [role="_additional-resources"] == Additional resources -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/operators/index#osdk-upgrading-projects_osdk-ansible-updating-projects[Upgrading projects for Operator SDK v1.25.4] -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.11/html-single/operators/index#osdk-upgrading-projects_osdk-ansible-updating-projects[Upgrading projects for Operator SDK v1.22.0] -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.10/html-single/operators/index#osdk-upgrading-v1101-to-v1160_osdk-upgrading-projects[Upgrading projects for Operator SDK v1.16.0] -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/operators/developing-operators#osdk-upgrading-v180-to-v1101_osdk-upgrading-projects[Upgrading projects for Operator SDK v1.10.1] -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/operators/developing-operators#osdk-upgrading-v130-to-v180_osdk-upgrading-projects[Upgrading projects for Operator SDK v1.8.0] +* link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/operators/index#osdk-upgrading-projects_osdk-ansible-updating-projects[Updating Ansible-based Operator projects for Operator SDK 1.36.1] ({product-title} 4.17) * xref:../../../operators/operator_sdk/osdk-pkgman-to-bundle.adoc#osdk-pkgman-to-bundle[Migrating package manifest projects to bundle format] diff --git a/operators/operator_sdk/golang/osdk-golang-updating-projects.adoc b/operators/operator_sdk/golang/osdk-golang-updating-projects.adoc index 80b6643145de..3f0b2e7d10bf 100644 --- a/operators/operator_sdk/golang/osdk-golang-updating-projects.adoc +++ b/operators/operator_sdk/golang/osdk-golang-updating-projects.adoc @@ -12,11 +12,11 @@ include::snippets/osdk-deprecation.adoc[] However, to ensure your existing Operator projects maintain compatibility with Operator SDK {osdk_ver}, update steps are required for the 
associated breaking changes introduced since {osdk_ver_n1}. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with {osdk_ver_n1}. -include::modules/osdk-updating-131-to-1361.adoc[leveloffset=+1] +include::modules/osdk-updating-1361-to-138.adoc[leveloffset=+1] [id="additional-resources_osdk-upgrading-projects-golang"] [role="_additional-resources"] == Additional resources -* link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/operators/index#osdk-upgrading-projects_osdk-golang-updating-projects[Upgrading projects for Operator SDK 1.31.0] ({product-title} 4.16) +* link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/operators/index#osdk-upgrading-projects_osdk-golang-updating-projects[Updating Go-based projects for Operator SDK 1.36.1] ({product-title} 4.17) * xref:../../../operators/operator_sdk/osdk-pkgman-to-bundle.adoc#osdk-pkgman-to-bundle[Migrating package manifest projects to bundle format] \ No newline at end of file diff --git a/operators/operator_sdk/helm/osdk-helm-updating-projects.adoc b/operators/operator_sdk/helm/osdk-helm-updating-projects.adoc index 0b441a0293c2..ebe3067e472c 100644 --- a/operators/operator_sdk/helm/osdk-helm-updating-projects.adoc +++ b/operators/operator_sdk/helm/osdk-helm-updating-projects.adoc @@ -12,13 +12,11 @@ include::snippets/osdk-deprecation.adoc[] However, to ensure your existing Operator projects maintain compatibility with Operator SDK {osdk_ver}, update steps are required for the associated breaking changes introduced since {osdk_ver_n1}. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with {osdk_ver_n1}. 
-include::modules/osdk-updating-131-to-1361.adoc[leveloffset=+1] +include::modules/osdk-updating-1361-to-138.adoc[leveloffset=+1] [id="additional-resources_osdk-helm-upgrading-projects"] [role="_additional-resources"] == Additional resources -* xref:../../../operators/operator_sdk/osdk-pkgman-to-bundle.adoc#osdk-pkgman-to-bundle[Migrating package manifest projects to bundle format] -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.10/html-single/operators/index#osdk-upgrading-v1101-to-v1160_osdk-upgrading-projects[Upgrading projects for Operator SDK 1.16.0] -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.9/html/operators/developing-operators#osdk-upgrading-v180-to-v1101_osdk-upgrading-projects[Upgrading projects for Operator SDK v1.10.1] -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/operators/developing-operators#osdk-upgrading-v130-to-v180_osdk-upgrading-projects[Upgrading projects for Operator SDK v1.8.0] +* link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/operators/index#osdk-upgrading-projects_osdk-helm-updating-projects[Updating Helm-based Operator projects for Operator SDK 1.36.1] ({product-title} 4.17) +* xref:../../../operators/operator_sdk/osdk-pkgman-to-bundle.adoc#osdk-pkgman-to-bundle[Migrating package manifest projects to bundle format] \ No newline at end of file From b2b48e8de4eaa47a4f228e8a16db9c10450b61d9 Mon Sep 17 00:00:00 2001 From: Michael Ryan Peter Date: Mon, 10 Feb 2025 10:35:06 -0500 Subject: [PATCH 291/669] OSDOCS#12905: Resolving content from multiple catalogs --- _topic_maps/_topic_map.yml | 2 + .../catalogs/catalog-content-resolution.adoc | 25 +++++ ...og-exclusion-by-labels-or-expressions.adoc | 91 +++++++++++++++++++ ...og-selection-by-labels-or-expressions.adoc | 80 ++++++++++++++++ modules/olmv1-catalog-selection-by-name.adoc | 44 +++++++++ .../olmv1-catalog-selection-by-priority.adoc | 46 ++++++++++ modules/olmv1-red-hat-catalogs.adoc | 52 +++++++---- ...ubleshooting-catalog-selection-errors.adoc | 15 +++ 8 files changed, 339 insertions(+), 16 deletions(-) create mode 100644 extensions/catalogs/catalog-content-resolution.adoc create mode 100644 modules/olmv1-catalog-exclusion-by-labels-or-expressions.adoc create mode 100644 modules/olmv1-catalog-selection-by-labels-or-expressions.adoc create mode 100644 modules/olmv1-catalog-selection-by-name.adoc create mode 100644 modules/olmv1-catalog-selection-by-priority.adoc create mode 100644 modules/olmv1-troubleshooting-catalog-selection-errors.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 0364ba392413..7bd9bc651841 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2082,6 +2082,8 @@ Topics: File: rh-catalogs - Name: Managing catalogs File: managing-catalogs + - Name: Catalog content resolution + File: catalog-content-resolution - Name: Creating catalogs File: creating-catalogs - Name: Disconnected environment support in OLM v1 diff --git a/extensions/catalogs/catalog-content-resolution.adoc b/extensions/catalogs/catalog-content-resolution.adoc new file mode 100644 index 000000000000..becd134a6dd7 --- /dev/null +++ b/extensions/catalogs/catalog-content-resolution.adoc @@ -0,0 +1,25 @@ +:_mod-docs-content-type: ASSEMBLY +[id="catalog-content-resolution"] += Catalog content resolution +include::_attributes/common-attributes.adoc[] +:context: catalog-content-resolution + +toc::[] + +When you specify 
the cluster extension you want to install in a custom resource (CR), {olmv1-first} uses catalog selection to resolve what content is installed.
+
+You can perform the following actions to control the selection of catalog content:
+
+* Specify labels to select the catalog.
+* Use match expressions to perform complex filtering across catalogs.
+* Set catalog priority.
+
+If you do not specify any catalog selection criteria, {olmv1-first} selects an extension from any available catalog on the cluster that provides the requested package.
+
+During resolution, bundles that are not deprecated are preferred over deprecated bundles by default.
+
+include::modules/olmv1-catalog-selection-by-name.adoc[leveloffset=+1]
+include::modules/olmv1-catalog-selection-by-labels-or-expressions.adoc[leveloffset=+1]
+include::modules/olmv1-catalog-exclusion-by-labels-or-expressions.adoc[leveloffset=+1]
+include::modules/olmv1-catalog-selection-by-priority.adoc[leveloffset=+1]
+include::modules/olmv1-troubleshooting-catalog-selection-errors.adoc[leveloffset=+1]
diff --git a/modules/olmv1-catalog-exclusion-by-labels-or-expressions.adoc b/modules/olmv1-catalog-exclusion-by-labels-or-expressions.adoc
new file mode 100644
index 000000000000..f2c88a93cc1c
--- /dev/null
+++ b/modules/olmv1-catalog-exclusion-by-labels-or-expressions.adoc
@@ -0,0 +1,91 @@
+// Module included in the following assemblies:
+// * extensions/catalogs/catalog-content-resolution.adoc
+
+:_mod-docs-content-type: REFERENCE
+
+[id="olmv1-catalog-exclusion-by-labels-or-expressions_{context}"]
+= Catalog exclusion by labels or expressions
+
+You can exclude catalogs by using match expressions on metadata with the `NotIn` or `DoesNotExist` operators.
+
+The following CRs add an `example.com/testing` label to the `unwanted-catalog-1` and `unwanted-catalog-2` cluster catalogs:
+
+.Example cluster catalog CR
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterCatalog
+metadata:
+  name: unwanted-catalog-1
+  labels:
+    example.com/testing: "true"
+spec:
+  source:
+    type: Image
+    image:
+      ref: quay.io/example/content-management-a:latest
+----
+
+.Example cluster catalog CR
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterCatalog
+metadata:
+  name: unwanted-catalog-2
+  labels:
+    example.com/testing: "true"
+spec:
+  source:
+    type: Image
+    image:
+      ref: quay.io/example/content-management-b:latest
+----
+
+The following cluster extension CR excludes selection from the `unwanted-catalog-1` catalog:
+
+.Example cluster extension CR that excludes a specific catalog
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: <operator_name>
+spec:
+  namespace: <namespace>
+  serviceAccount:
+    name: <operator_name>-installer
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: <operator_name>-operator
+      selector:
+        matchExpressions:
+          - key: olm.operatorframework.io/metadata.name
+            operator: NotIn
+            values:
+              - unwanted-catalog-1
+----
+
+The following cluster extension CR selects from catalogs that do not have the `example.com/testing` label. As a result, both `unwanted-catalog-1` and `unwanted-catalog-2` are excluded from catalog selection.
+
+.Example cluster extension CR that excludes catalogs with a specific label
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: <operator_name>
+spec:
+  namespace: <namespace>
+  serviceAccount:
+    name: <operator_name>-installer
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: <operator_name>-operator
+      selector:
+        matchExpressions:
+          - key: example.com/testing
+            operator: DoesNotExist
+----
diff --git a/modules/olmv1-catalog-selection-by-labels-or-expressions.adoc b/modules/olmv1-catalog-selection-by-labels-or-expressions.adoc
new file mode 100644
index 000000000000..7717ce958a7f
--- /dev/null
+++ b/modules/olmv1-catalog-selection-by-labels-or-expressions.adoc
@@ -0,0 +1,80 @@
+// Module included in the following assemblies:
+// * extensions/catalogs/catalog-content-resolution.adoc
+
+:_mod-docs-content-type: REFERENCE
+
+[id="olmv1-catalog-selection-by-labels-or-exp_{context}"]
+= Catalog selection by labels or expressions
+
+You can add metadata to a catalog by using labels in the custom resource (CR) of a cluster catalog. You can then filter catalog selection by specifying the assigned labels or by using expressions in the CR of the cluster extension.
+
+The following cluster catalog CR adds the `example.com/support` label with the value of `true` to the `catalog-a` cluster catalog:
+
+.Example cluster catalog CR with labels
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterCatalog
+metadata:
+  name: catalog-a
+  labels:
+    example.com/support: "true"
+spec:
+  source:
+    type: Image
+    image:
+      ref: quay.io/example/content-management-a:latest
+----
+
+The following cluster extension CR uses the `matchLabels` selector to select catalogs with the `example.com/support` label and the value of `true`:
+
+.Example cluster extension CR with `matchLabels` selector
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: <operator_name>
+spec:
+  namespace: <namespace>
+  serviceAccount:
+    name: <operator_name>-installer
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: <operator_name>-operator
+      selector:
+        matchLabels:
+          example.com/support: "true"
+----
+
+You can use the `matchExpressions` field to perform more complex filtering for labels. The following cluster extension CR selects catalogs with the `example.com/support` label and a value of `production` or `supported`:
+
+.Example cluster extension CR with `matchExpressions` selector
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: <operator_name>
+spec:
+  namespace: <namespace>
+  serviceAccount:
+    name: <operator_name>-installer
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: <operator_name>-operator
+      selector:
+        matchExpressions:
+          - key: example.com/support
+            operator: In
+            values:
+              - "production"
+              - "supported"
+----
+
+[NOTE]
+====
+If you use both the `matchLabels` and `matchExpressions` fields, the selected catalog must satisfy all specified criteria.
+====
diff --git a/modules/olmv1-catalog-selection-by-name.adoc b/modules/olmv1-catalog-selection-by-name.adoc
new file mode 100644
index 000000000000..b3ecfe7e6f06
--- /dev/null
+++ b/modules/olmv1-catalog-selection-by-name.adoc
@@ -0,0 +1,44 @@
+// Module included in the following assemblies:
+// * extensions/catalogs/catalog-content-resolution.adoc
+
+:_mod-docs-content-type: REFERENCE
+
+[id="olmv1-catalog-selection-by-name_{context}"]
+= Catalog selection by name
+
+When a catalog is added to a cluster, a label is created by using the value of the `metadata.name` field of the catalog custom resource (CR).
 In the CR of an extension, you can specify the catalog name by using the `spec.source.catalog.selector.matchLabels` field. The value of the `matchLabels` field uses the following format:
+
+.Example label derived from the `metadata.name` field
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterCatalog
+metadata:
+  name: <catalog_name>
+  labels:
+    olm.operatorframework.io/metadata.name: <catalog_name> <1>
+...
+----
+<1> A label derived from the `metadata.name` field and automatically added when the catalog is applied.
+
+The following example resolves the `<operator_name>-operator` package from the catalog that carries the `olm.operatorframework.io/metadata.name: openshift-redhat-operators` label:
+
+.Example extension CR
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: <operator_name>
+spec:
+  namespace: <namespace>
+  serviceAccount:
+    name: <operator_name>-installer
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: <operator_name>-operator
+      selector:
+        matchLabels:
+          olm.operatorframework.io/metadata.name: openshift-redhat-operators
+----
diff --git a/modules/olmv1-catalog-selection-by-priority.adoc b/modules/olmv1-catalog-selection-by-priority.adoc
new file mode 100644
index 000000000000..b0e4ae42f480
--- /dev/null
+++ b/modules/olmv1-catalog-selection-by-priority.adoc
@@ -0,0 +1,46 @@
+// Module included in the following assemblies:
+// * extensions/catalogs/catalog-content-resolution.adoc
+
+:_mod-docs-content-type: REFERENCE
+
+[id="olmv1-catalog-selection-by-priority_{context}"]
+= Catalog selection by priority
+
+When multiple catalogs provide the same package, you can resolve ambiguities by specifying the priority in the custom resource (CR) of each catalog. If unspecified, catalogs have a default priority value of `0`. The priority can be any positive or negative 32-bit integer.
+
+[NOTE]
+====
+* During bundle resolution, catalogs with higher priority values are selected over catalogs with lower priority values.
+* Bundles that are not deprecated are prioritized over bundles that are deprecated.
+* If multiple bundles exist in catalogs with the same priority and the catalog selection is ambiguous, an error is printed.
+====
+
+.Example cluster catalog CR with a higher priority
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterCatalog
+metadata:
+  name: high-priority-catalog
+spec:
+  priority: 1000
+  source:
+    type: Image
+    image:
+      ref: quay.io/example/higher-priority-catalog:latest
+----
+
+.Example cluster catalog CR with a lower priority
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterCatalog
+metadata:
+  name: lower-priority-catalog
+spec:
+  priority: 10
+  source:
+    type: Image
+    image:
+      ref: quay.io/example/lower-priority-catalog:latest
+----
diff --git a/modules/olmv1-red-hat-catalogs.adoc b/modules/olmv1-red-hat-catalogs.adoc
index c3735fddd79e..ef498c1e8fd2 100644
--- a/modules/olmv1-red-hat-catalogs.adoc
+++ b/modules/olmv1-red-hat-catalogs.adoc
@@ -11,50 +11,70 @@
 
 {olmv1-first} includes the following Red Hat-provided Operator catalogs on the cluster by default. If you want to add an additional catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following custom resource (CR) examples show the default catalogs installed on the cluster.
-.Example Red Hat Operators catalog
+.Red{nbsp}Hat Operators catalog
 [source,yaml,subs="attributes+"]
 ----
-apiVersion: catalogd.operatorframework.io/v1alpha1
+apiVersion: olm.operatorframework.io/v1
 kind: ClusterCatalog
 metadata:
-  name: redhat-operators
+  name: openshift-redhat-operators
 spec:
+  priority: -100
   source:
-    type: image
     image:
+      pollIntervalMinutes: <poll_interval_minutes> <1>
       ref: registry.redhat.io/redhat/redhat-operator-index:v{product-version}
-    pollInterval: <poll_interval_duration> <1>
+    type: Image
 ----
-<1> Specify the interval for polling the remote registry for newer image digests. The default value is `24h`. Valid units include seconds (`s`), minutes (`m`), and hours (`h`). To disable polling, set a zero value, such as `0s`.
+<1> Specify the interval in minutes for polling the remote registry for newer image digests. To disable polling, do not set the field.
 
-.Example Certified Operators catalog
+.Certified Operators catalog
 [source,yaml,subs="attributes+"]
 ----
-apiVersion: catalogd.operatorframework.io/v1alpha1
+apiVersion: olm.operatorframework.io/v1
 kind: ClusterCatalog
 metadata:
-  name: certified-operators
+  name: openshift-certified-operators
 spec:
+  priority: -200
   source:
-    type: image
     image:
+      pollIntervalMinutes: 10
      ref: registry.redhat.io/redhat/certified-operator-index:v{product-version}
-    pollInterval: 24h
+    type: Image
 ----
 
+.Red{nbsp}Hat Marketplace catalog
+[source,yaml,subs="attributes+"]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterCatalog
+metadata:
+  name: openshift-redhat-marketplace
+spec:
+  priority: -300
+  source:
+    image:
+      pollIntervalMinutes: 10
+      ref: registry.redhat.io/redhat/redhat-marketplace-index:v{product-version}
+    type: Image
+----
+
-.Example Community Operators catalog
+.Community Operators catalog
 [source,yaml,subs="attributes+"]
 ----
-apiVersion: catalogd.operatorframework.io/v1alpha1
+apiVersion: olm.operatorframework.io/v1
 kind: ClusterCatalog
 metadata:
-  name: community-operators
+  name: openshift-community-operators
 spec:
+  priority: -400
  source:
-    type: image
     image:
+      pollIntervalMinutes: 10
       ref: registry.redhat.io/redhat/community-operator-index:v{product-version}
-    pollInterval: 24h
+    type: Image
 ----
 
 The following command adds a catalog to your cluster:
 
 [source,terminal]
 ----
 $ oc apply -f <catalog_name>.yaml <1>
 ----
-<1> Specifies the catalog CR, such as `redhat-operators.yaml`.
+<1> Specifies the catalog CR, such as `my-catalog.yaml`.
diff --git a/modules/olmv1-troubleshooting-catalog-selection-errors.adoc b/modules/olmv1-troubleshooting-catalog-selection-errors.adoc
new file mode 100644
index 000000000000..18f55d84684c
--- /dev/null
+++ b/modules/olmv1-troubleshooting-catalog-selection-errors.adoc
@@ -0,0 +1,15 @@
+// Module included in the following assemblies:
+// * extensions/catalogs/catalog-content-resolution.adoc
+
+:_mod-docs-content-type: REFERENCE
+
+[id="olmv1-troubleshooting-catalog-selection-errors_{context}"]
+= Troubleshooting catalog selection errors
+
+If bundle resolution fails because of ambiguity or because no catalog is selected, an error message is printed in the `status.conditions` field of the cluster extension.
+
+Perform the following actions to troubleshoot catalog selection errors:
+
+* Refine your selection criteria using labels or expressions.
+* Adjust your catalog priorities.
+* Ensure that only one bundle matches your package name and version requirements.
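+
+For example, you can inspect the conditions reported for a cluster extension. The following command is a sketch: `<clusterextension_name>` is a placeholder, and the exact condition types in the output can vary by release:
+
+[source,terminal]
+----
+$ oc get clusterextension <clusterextension_name> -o jsonpath='{.status.conditions}'
+----
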
From 9a0433e6a096d6b02f39c34201c7db4350354a15 Mon Sep 17 00:00:00 2001 From: Pan Ousley Date: Wed, 19 Feb 2025 01:28:59 -0500 Subject: [PATCH 292/669] CNV#46347: live update a VM's instance type --- modules/virt-change-vm-instance-type.adoc | 39 +++++++++++++++++++ ...virt-creating-vms-from-instance-types.adoc | 1 + virt/managing_vms/virt-edit-vms.adoc | 6 +++ 3 files changed, 46 insertions(+) create mode 100644 modules/virt-change-vm-instance-type.adoc diff --git a/modules/virt-change-vm-instance-type.adoc b/modules/virt-change-vm-instance-type.adoc new file mode 100644 index 000000000000..7f0cf6c4f856 --- /dev/null +++ b/modules/virt-change-vm-instance-type.adoc @@ -0,0 +1,39 @@ +// Module included in the following assemblies: +// +// * virt/managing_vms/virt-edit-vms.adoc + +:_mod-docs-content-type: PROCEDURE +[id="virt-change-vm-instance-type_{context}"] + += Changing the instance type of a VM + +You can change the instance type associated with a running virtual machine (VM) by using the web console. The change takes effect immediately. + +.Prerequisites + +* You created the VM by using an instance type. + +.Procedure + +. In the {product-title} web console, click *Virtualization* -> *VirtualMachines*. + +. Select a VM to open the *VirtualMachine details* page. + +. Click the *Configuration* tab. + +. On the *Details* tab, click the instance type text to open the *Edit Instancetype* dialog. For example, click *1 CPU | 2 GiB Memory*. + +. Edit the instance type by using the *Series* and *Size* lists. +.. Select an item from the *Series* list to show the relevant sizes for that series. For example, select *General Purpose*. +.. Select the VM's new instance type from the *Size* list. For example, select *medium: 1 CPUs, 4Gi Memory*, which is available in the *General Purpose* series. + +. Click *Save*. + +.Verification + +. Click the *YAML* tab. + +. Click *Reload*. + +. Review the VM YAML to confirm that the instance type changed. + diff --git a/virt/creating_vm/virt-creating-vms-from-instance-types.adoc b/virt/creating_vm/virt-creating-vms-from-instance-types.adoc index 3f60c74fb7a7..51436383e80d 100644 --- a/virt/creating_vm/virt-creating-vms-from-instance-types.adoc +++ b/virt/creating_vm/virt-creating-vms-from-instance-types.adoc @@ -33,3 +33,4 @@ include::modules/virt-inferfromvolume-labels.adoc[leveloffset=+2] include::modules/virt-creating-vm-instancetype.adoc[leveloffset=+1] +include::modules/virt-change-vm-instance-type.adoc[leveloffset=+1] diff --git a/virt/managing_vms/virt-edit-vms.adoc b/virt/managing_vms/virt-edit-vms.adoc index c9bb066d74f0..0fb9128bdc28 100644 --- a/virt/managing_vms/virt-edit-vms.adoc +++ b/virt/managing_vms/virt-edit-vms.adoc @@ -14,14 +14,20 @@ ifndef::openshift-rosa,openshift-dedicated[] To edit a VM to configure disk sharing by using virtual disks or LUN, see xref:../../virt/managing_vms/virtual_disks/virt-configuring-shared-volumes-for-vms.adoc#virt-configuring-shared-volumes-for-vms[Configuring shared volumes for virtual machines]. 
endif::openshift-rosa,openshift-dedicated[] +include::modules/virt-change-vm-instance-type.adoc[leveloffset=+1] + include::modules/virt-hot-plugging-memory.adoc[leveloffset=+1] + include::modules/virt-hot-plugging-cpu.adoc[leveloffset=+1] + include::modules/virt-editing-vm-cli.adoc[leveloffset=+1] include::modules/virt-add-disk-to-vm.adoc[leveloffset=+1] include::modules/virt-storage-wizard-fields-web.adoc[leveloffset=+2] + include::modules/virt-mounting-windows-driver-disk-on-vm.adoc[leveloffset=+1] + include::modules/virt-adding-secret-configmap-service-account-to-vm.adoc[leveloffset=+1] [discrete] From c43cc330a20a0a906c659ffe1519f5fa4b97452d Mon Sep 17 00:00:00 2001 From: Brendan Daly Date: Fri, 14 Feb 2025 09:12:06 +0000 Subject: [PATCH 293/669] OSDOCS-12321_install:adding GCP filestore WIF --- modules/cco-ccoctl-creating-at-once.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/cco-ccoctl-creating-at-once.adoc b/modules/cco-ccoctl-creating-at-once.adoc index 294fbb8b7924..519313421583 100644 --- a/modules/cco-ccoctl-creating-at-once.adoc +++ b/modules/cco-ccoctl-creating-at-once.adoc @@ -204,7 +204,7 @@ $ ccoctl gcp create-all \ --project= \// <3> --credentials-requests-dir= <4> ---- -<1> Specify the user-defined name for all created GCP resources used for tracking. +<1> Specify the user-defined name for all created {gcp-short} resources used for tracking. If you plan to install the {gcp-short} Filestore Container Storage Interface (CSI) Driver Operator, retain this value. <2> Specify the GCP region in which cloud resources will be created. <3> Specify the GCP project ID in which cloud resources will be created. <4> Specify the directory containing the files of `CredentialsRequest` manifests to create GCP service accounts. From 7a77564f65d27a933ca6f23ba3bd9287920afec4 Mon Sep 17 00:00:00 2001 From: Kathryn Alexander Date: Wed, 12 Feb 2025 14:49:59 -0500 Subject: [PATCH 294/669] redirect notice --- _templates/_page_openshift.html.erb | 12 ++++++++---- index-commercial.html | 2 +- 2 files changed, 9 insertions(+), 5 deletions(-) diff --git a/_templates/_page_openshift.html.erb b/_templates/_page_openshift.html.erb index 6601f79064c6..8d93f202df3b 100644 --- a/_templates/_page_openshift.html.erb +++ b/_templates/_page_openshift.html.erb @@ -74,7 +74,8 @@ %> <% end %> @@ -106,7 +107,8 @@ <% if (distro_key == "openshift-acs") %>
    - OpenShift docs are moving and will soon only be available at docs.redhat.com, the home of all Red Hat product documentation. Explore the new docs experience today. + Starting on March 12, 2025, OpenShift docs will only be available at docs.redhat.com. From that time on, docs.openshift.com links will automatically redirect to their locations on docs.redhat.com. +
    <% end %> @@ -114,7 +116,8 @@ <% if (distro_key == "openshift-dedicated") %>
    - OpenShift docs are moving and will soon only be available at docs.redhat.com, the home of all Red Hat product documentation. Explore the new docs experience today. + Starting on March 12, 2025, OpenShift docs will only be available at docs.redhat.com. From that time on, docs.openshift.com links will automatically redirect to their locations on docs.redhat.com. +
    <% end %> @@ -122,7 +125,8 @@ <% if (distro_key == "openshift-rosa") %>
    - OpenShift docs are moving and will soon only be available at docs.redhat.com, the home of all Red Hat product documentation. Explore the new docs experience today. + Starting on March 12, 2025, OpenShift docs will only be available at docs.redhat.com. From that time on, docs.openshift.com links will automatically redirect to their locations on docs.redhat.com. +
    <% end %> diff --git a/index-commercial.html b/index-commercial.html index fb38be5ab1ff..acc1b04b460b 100644 --- a/index-commercial.html +++ b/index-commercial.html @@ -138,7 +138,7 @@

    Technology Topics

    From d96f9e62eebd8956ea97af631e235c6fbcbfb503 Mon Sep 17 00:00:00 2001 From: Eliska Romanova Date: Thu, 20 Feb 2025 10:25:20 +0100 Subject: [PATCH 295/669] OBSDOCS-1720: remove the additional spaces to see if it causes the d.r.c breaks --- ...specifying-limits-and-requests-for-monitoring-components.adoc | 1 - ...onitoring-assigning-tolerations-to-monitoring-components.adoc | 1 - ...taching-additional-labels-to-your-time-series-and-alerts.adoc | 1 - modules/monitoring-configurable-monitoring-components.adoc | 1 - modules/monitoring-configuring-a-persistent-volume-claim.adoc | 1 - modules/monitoring-configuring-external-alertmanagers.adoc | 1 - .../monitoring-configuring-pod-topology-spread-constraints.adoc | 1 - modules/monitoring-configuring-remote-write-storage.adoc | 1 - modules/monitoring-creating-cluster-id-labels-for-metrics.adoc | 1 - .../monitoring-example-remote-write-authentication-settings.adoc | 1 - modules/monitoring-example-remote-write-queue-configuration.adoc | 1 - ...ying-retention-time-and-size-for-prometheus-metrics-data.adoc | 1 - ...nitoring-moving-monitoring-components-to-different-nodes.adoc | 1 - modules/monitoring-resizing-a-persistent-volume.adoc | 1 - ...ring-retention-time-and-size-for-prometheus-metrics-data.adoc | 1 - .../monitoring-setting-log-levels-for-monitoring-components.adoc | 1 - modules/monitoring-setting-query-log-file-for-prometheus.adoc | 1 - ...specifying-limits-and-requests-for-monitoring-components.adoc | 1 - 18 files changed, 18 deletions(-) diff --git a/modules/monitoring-about-specifying-limits-and-requests-for-monitoring-components.adoc b/modules/monitoring-about-specifying-limits-and-requests-for-monitoring-components.adoc index cb1d3ff54ba7..f19154561933 100644 --- a/modules/monitoring-about-specifying-limits-and-requests-for-monitoring-components.adoc +++ b/modules/monitoring-about-specifying-limits-and-requests-for-monitoring-components.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: CONCEPT - [id="about-specifying-limits-and-requests-for-monitoring-components_{context}"] = About specifying limits and requests for monitoring components diff --git a/modules/monitoring-assigning-tolerations-to-monitoring-components.adoc b/modules/monitoring-assigning-tolerations-to-monitoring-components.adoc index f93507178b93..fdc40d052c6b 100644 --- a/modules/monitoring-assigning-tolerations-to-monitoring-components.adoc +++ b/modules/monitoring-assigning-tolerations-to-monitoring-components.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE - [id="assigning-tolerations-to-monitoring-components_{context}"] = Assigning tolerations to monitoring components diff --git a/modules/monitoring-attaching-additional-labels-to-your-time-series-and-alerts.adoc b/modules/monitoring-attaching-additional-labels-to-your-time-series-and-alerts.adoc index 0e52543f4a4b..4a898913d870 100644 --- a/modules/monitoring-attaching-additional-labels-to-your-time-series-and-alerts.adoc +++ b/modules/monitoring-attaching-additional-labels-to-your-time-series-and-alerts.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE - [id="attaching-additional-labels-to-your-time-series-and-alerts_{context}"] = Attaching additional labels to your time series and alerts diff --git a/modules/monitoring-configurable-monitoring-components.adoc 
b/modules/monitoring-configurable-monitoring-components.adoc index 7c94c92dc026..05c44693cbe5 100644 --- a/modules/monitoring-configurable-monitoring-components.adoc +++ b/modules/monitoring-configurable-monitoring-components.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: REFERENCE - [id="configurable-monitoring-components_{context}"] = Configurable monitoring components diff --git a/modules/monitoring-configuring-a-persistent-volume-claim.adoc b/modules/monitoring-configuring-a-persistent-volume-claim.adoc index b420f282b635..0a382019fc98 100644 --- a/modules/monitoring-configuring-a-persistent-volume-claim.adoc +++ b/modules/monitoring-configuring-a-persistent-volume-claim.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE - [id="configuring-a-persistent-volume-claim_{context}"] = Configuring a persistent volume claim diff --git a/modules/monitoring-configuring-external-alertmanagers.adoc b/modules/monitoring-configuring-external-alertmanagers.adoc index 4b53594e8743..5619c62b23a3 100644 --- a/modules/monitoring-configuring-external-alertmanagers.adoc +++ b/modules/monitoring-configuring-external-alertmanagers.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE - [id="monitoring-configuring-external-alertmanagers_{context}"] = Configuring external Alertmanager instances diff --git a/modules/monitoring-configuring-pod-topology-spread-constraints.adoc b/modules/monitoring-configuring-pod-topology-spread-constraints.adoc index 71ec3ea8cc3f..c39fb11e5d63 100644 --- a/modules/monitoring-configuring-pod-topology-spread-constraints.adoc +++ b/modules/monitoring-configuring-pod-topology-spread-constraints.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE - [id="configuring-pod-topology-spread-constraints_{context}"] = Configuring pod topology spread constraints diff --git a/modules/monitoring-configuring-remote-write-storage.adoc b/modules/monitoring-configuring-remote-write-storage.adoc index 4284c66e38dc..583d11f44d2f 100644 --- a/modules/monitoring-configuring-remote-write-storage.adoc +++ b/modules/monitoring-configuring-remote-write-storage.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE - [id="configuring-remote-write-storage_{context}"] = Configuring remote write storage diff --git a/modules/monitoring-creating-cluster-id-labels-for-metrics.adoc b/modules/monitoring-creating-cluster-id-labels-for-metrics.adoc index 1a180301b40a..1df3b1ec7a61 100644 --- a/modules/monitoring-creating-cluster-id-labels-for-metrics.adoc +++ b/modules/monitoring-creating-cluster-id-labels-for-metrics.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE - [id="creating-cluster-id-labels-for-metrics_{context}"] = Creating cluster ID labels for metrics diff --git a/modules/monitoring-example-remote-write-authentication-settings.adoc b/modules/monitoring-example-remote-write-authentication-settings.adoc index baceb4e8b858..80c59a2a07a3 100644 --- a/modules/monitoring-example-remote-write-authentication-settings.adoc +++ b/modules/monitoring-example-remote-write-authentication-settings.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc 
:_mod-docs-content-type: REFERENCE - [id="example-remote-write-authentication-settings_{context}"] = Example remote write authentication settings diff --git a/modules/monitoring-example-remote-write-queue-configuration.adoc b/modules/monitoring-example-remote-write-queue-configuration.adoc index dfa60e9c34aa..d616eaf6ab6f 100644 --- a/modules/monitoring-example-remote-write-queue-configuration.adoc +++ b/modules/monitoring-example-remote-write-queue-configuration.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: REFERENCE - [id="example-remote-write-queue-configuration_{context}"] = Example remote write queue configuration diff --git a/modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc b/modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc index 028744396c55..5f72844cbd72 100644 --- a/modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc +++ b/modules/monitoring-modifying-retention-time-and-size-for-prometheus-metrics-data.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE - [id="modifying-retention-time-and-size-for-prometheus-metrics-data_{context}"] = Modifying retention time and size for Prometheus metrics data diff --git a/modules/monitoring-moving-monitoring-components-to-different-nodes.adoc b/modules/monitoring-moving-monitoring-components-to-different-nodes.adoc index d35c95cea39d..70e79fdce004 100644 --- a/modules/monitoring-moving-monitoring-components-to-different-nodes.adoc +++ b/modules/monitoring-moving-monitoring-components-to-different-nodes.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE - [id="moving-monitoring-components-to-different-nodes_{context}"] = Moving monitoring components to different nodes diff --git a/modules/monitoring-resizing-a-persistent-volume.adoc b/modules/monitoring-resizing-a-persistent-volume.adoc index 6e2d60be549e..a32e62407e9f 100644 --- a/modules/monitoring-resizing-a-persistent-volume.adoc +++ b/modules/monitoring-resizing-a-persistent-volume.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE - [id="resizing-a-persistent-volume_{context}"] = Resizing a persistent volume diff --git a/modules/monitoring-retention-time-and-size-for-prometheus-metrics-data.adoc b/modules/monitoring-retention-time-and-size-for-prometheus-metrics-data.adoc index 0bb05c180673..386193442c38 100644 --- a/modules/monitoring-retention-time-and-size-for-prometheus-metrics-data.adoc +++ b/modules/monitoring-retention-time-and-size-for-prometheus-metrics-data.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: CONCEPT - [id="retention-time-and-size-for-prometheus-metrics-data_{context}"] = Retention time and size for Prometheus metrics diff --git a/modules/monitoring-setting-log-levels-for-monitoring-components.adoc b/modules/monitoring-setting-log-levels-for-monitoring-components.adoc index d4c873db7848..0224ab2fe665 100644 --- a/modules/monitoring-setting-log-levels-for-monitoring-components.adoc +++ b/modules/monitoring-setting-log-levels-for-monitoring-components.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE - 
[id="setting-log-levels-for-monitoring-components_{context}"] = Setting log levels for monitoring components diff --git a/modules/monitoring-setting-query-log-file-for-prometheus.adoc b/modules/monitoring-setting-query-log-file-for-prometheus.adoc index 5171d28100d9..6457cd5462f6 100644 --- a/modules/monitoring-setting-query-log-file-for-prometheus.adoc +++ b/modules/monitoring-setting-query-log-file-for-prometheus.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE - [id="setting-query-log-file-for-prometheus_{context}"] = Enabling the query log file for Prometheus diff --git a/modules/monitoring-specifying-limits-and-requests-for-monitoring-components.adoc b/modules/monitoring-specifying-limits-and-requests-for-monitoring-components.adoc index 89f9a0da4c81..8387edadd7b5 100644 --- a/modules/monitoring-specifying-limits-and-requests-for-monitoring-components.adoc +++ b/modules/monitoring-specifying-limits-and-requests-for-monitoring-components.adoc @@ -3,7 +3,6 @@ // * observability/monitoring/configuring-the-monitoring-stack.adoc :_mod-docs-content-type: PROCEDURE - [id="specifying-limits-and-resource-requests-for-monitoring-components_{context}"] = Specifying limits and requests From 28ec1f1ecb1eb7ff5a54c7210ace44e6016827b1 Mon Sep 17 00:00:00 2001 From: Michael Burke Date: Thu, 13 Feb 2025 18:20:40 -0500 Subject: [PATCH 296/669] Make debugLogging non-default --- modules/wmco-configure-debug-logging.adoc | 40 +++++++++++++++++++ .../enabling-windows-container-workloads.adoc | 2 + 2 files changed, 42 insertions(+) create mode 100644 modules/wmco-configure-debug-logging.adoc diff --git a/modules/wmco-configure-debug-logging.adoc b/modules/wmco-configure-debug-logging.adoc new file mode 100644 index 000000000000..030c718907f7 --- /dev/null +++ b/modules/wmco-configure-debug-logging.adoc @@ -0,0 +1,40 @@ +// Module included in the following assemblies: +// +// windows_containers/enabling-windows-container-workloads.adoc + +:_mod-docs-content-type: CONCEPT +[id="wmco-configure-debug-logging_{context}"] += Configuring debug-level logging for the Windows Machine Config Operator + +By default, the Windows Machine Config Operator (WMCO) is configured to use the `info` log level. You can change the log level to `debug` by editing the WMCO `Subscription` object. + +.Procedure + +. Edit the `windows-machine-config-operator` subscription in the `windows-machine-config-operator` namespace by using the following command: ++ +[source,terminal] +---- +$ oc edit subscription windows-machine-config-operator -n openshift-windows-machine-config-operator +---- + +. Add the follwing parameters to the `.spec.config.env` stanza: ++ +[source,yaml] +---- +apiVersion: operators.coreos.com/v1alpha1 +kind: Subscription +# ... + name: windows-machine-config-operator + namespace: openshift-windows-machine-config-operator +# ... +spec: +# ... + config: + env: + - name: ARGS <1> + value: --debugLogging <2> +---- +<1> Defines a list of environment variables that must exist in all containers in the pod. +<2> Specifies the `debug` level of verbosity for log messages. + +You can revert to the default `info` log level by removing the `name` and `value` parameters that you added. 
diff --git a/windows_containers/enabling-windows-container-workloads.adoc b/windows_containers/enabling-windows-container-workloads.adoc index 8c235a406be6..e80d40ae9cfc 100644 --- a/windows_containers/enabling-windows-container-workloads.adoc +++ b/windows_containers/enabling-windows-container-workloads.adoc @@ -51,6 +51,8 @@ include::modules/installing-wmco-using-cli.adoc[leveloffset=+2] include::modules/configuring-secret-for-wmco.adoc[leveloffset=+1] +include::modules/wmco-configure-debug-logging.adoc[leveloffset=+1] + include::modules/wmco-cluster-wide-proxy.adoc[leveloffset=+1] .Additional resources From 9871db5e68d317342ff78fb4fbf08a1cd5dc3d93 Mon Sep 17 00:00:00 2001 From: Lisa Pettyjohn Date: Fri, 18 Oct 2024 15:36:37 -0400 Subject: [PATCH 297/669] OSDOCS-12321# GCP Filestore WIP support --- ...rsistent-storage-csi-gcp-file-install.adoc | 28 ++-- ...sistent-storage-csi-gcp-filestore-wif.adoc | 126 ++++++++++++++++++ ...sistent-storage-csi-google-cloud-file.adoc | 24 +++- 3 files changed, 164 insertions(+), 14 deletions(-) create mode 100644 modules/persistent-storage-csi-gcp-filestore-wif.adoc diff --git a/modules/persistent-storage-csi-gcp-file-install.adoc b/modules/persistent-storage-csi-gcp-file-install.adoc index 17252eb3da9e..f8f4352a55fd 100644 --- a/modules/persistent-storage-csi-gcp-file-install.adoc +++ b/modules/persistent-storage-csi-gcp-file-install.adoc @@ -4,16 +4,17 @@ :_mod-docs-content-type: PROCEDURE [id="persistent-storage-csi-olm-operator-install_{context}"] -= Installing the GCP Filestore CSI Driver Operator += Installing the {gcp-short} Filestore CSI Driver Operator -The Google Compute Platform (GCP) Filestore Container Storage Interface (CSI) Driver Operator is not installed in {product-title} by default. -Use the following procedure to install the GCP Filestore CSI Driver Operator in your cluster. +The Google Compute Platform ({gcp-short}) Filestore Container Storage Interface (CSI) Driver Operator is not installed in {product-title} by default. +Use the following procedure to install the {gcp-short} Filestore CSI Driver Operator in your cluster. .Prerequisites * Access to the {product-title} web console. +* If using {gcp-wid-short}, certain {gcp-wid-short} parameters are needed. See the preceding Section _Preparing to install the {gcp-short} Filestore CSI Driver Operator with Workload Identity_. .Procedure -To install the GCP Filestore CSI Driver Operator from the web console: +To install the {gcp-short} Filestore CSI Driver Operator from the web console: ifdef::openshift-dedicated[] @@ -40,26 +41,33 @@ $ gcloud services enable file.googleapis.com --project <1> + You can also do this using Google Cloud web console. -. Install the GCP Filestore CSI Operator: +. Install the {gcp-short} Filestore CSI Operator: .. Click *Operators* -> *OperatorHub*. -.. Locate the GCP Filestore CSI Operator by typing *GCP Filestore* in the filter box. +.. Locate the {gcp-short} Filestore CSI Operator by typing *{gcp-short} Filestore* in the filter box. -.. Click the *GCP Filestore CSI Driver Operator* button. +.. Click the *{gcp-short} Filestore CSI Driver Operator* button. -.. On the *GCP Filestore CSI Driver Operator* page, click *Install*. +.. On the *{gcp-short} Filestore CSI Driver Operator* page, click *Install*. .. On the *Install Operator* page, ensure that: + * *All namespaces on the cluster (default)* is selected. * *Installed Namespace* is set to *openshift-cluster-csi-drivers*. 
++ If using {gcp-wid-short}, enter values for the following fields obtained from the procedure in Section _Preparing to install the {gcp-short} Filestore CSI Driver Operator with Workload Identity_: ++ +* *{gcp-short} Project Number* +* *{gcp-short} Pool ID* +* *{gcp-short} Provider ID* +* *{gcp-short} Service Account Email* .. Click *Install*. + -After the installation finishes, the GCP Filestore CSI Operator is listed in the *Installed Operators* section of the web console. +After the installation finishes, the {gcp-short} Filestore CSI Operator is listed in the *Installed Operators* section of the web console. -. Install the GCP Filestore CSI Driver: +. Install the {gcp-short} Filestore CSI Driver: .. Click *administration* → *CustomResourceDefinitions* → *ClusterCSIDriver*. diff --git a/modules/persistent-storage-csi-gcp-filestore-wif.adoc b/modules/persistent-storage-csi-gcp-filestore-wif.adoc new file mode 100644 index 000000000000..b5da08cac839 --- /dev/null +++ b/modules/persistent-storage-csi-gcp-filestore-wif.adoc @@ -0,0 +1,126 @@ +// Module included in the following assemblies: +// +// * storage/container_storage_csi-google_cloud_file.adoc + +:_mod-docs-content-type: PROCEDURE +[id="persistent-storage-csi-gcp-filestore-wif_{context}"] += Preparing to install the {gcp-short} Filestore CSI Driver Operator with Workload Identity + +If you are planning to use {gcp-wid-short} with Google Compute Platform Filestore, you must obtain certain parameters that you will use during the installation of the {gcp-short} Filestore Container Storage Interface (CSI) Driver Operator. + +.Prerequisites +* Access to the cluster as a user with the `cluster-admin` role. + +// Put note in install area of docs to remind users to take note of the identity pool ID and the provider ID + +.Procedure + +To prepare to install the {gcp-short} Filestore CSI Driver Operator with Workload Identity: + +. Obtain the project number: + +.. Obtain the project ID by running the following command: ++ +[source, terminal] +---- +$ export PROJECT_ID=$(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.gcp.projectID}') +---- + +.. Obtain the project number, using the project ID, by running the following command: ++ +[source, terminal] +---- +$ gcloud projects describe $PROJECT_ID --format="value(projectNumber)" +---- + +. Find the identity pool ID and the provider ID: ++ +During cluster installation, the names of these resources are provided to the Cloud Credential Operator utility (`ccoctl`) with the `--name` parameter. See "Creating {gcp-short} resources with the Cloud Credential Operator utility". + +. Create Workload Identity resources for the {gcp-short} Filestore Operator: + +..
Create a `CredentialsRequest` file using the following example file: ++ +.Example Credentials Request YAML file +[source, YAML] +---- +apiVersion: cloudcredential.openshift.io/v1 +kind: CredentialsRequest +metadata: + name: openshift-gcp-filestore-csi-driver-operator + namespace: openshift-cloud-credential-operator + annotations: + include.release.openshift.io/self-managed-high-availability: "true" + include.release.openshift.io/single-node-developer: "true" +spec: + serviceAccountNames: + - gcp-filestore-csi-driver-operator + - gcp-filestore-csi-driver-controller-sa + secretRef: + name: gcp-filestore-cloud-credentials + namespace: openshift-cluster-csi-drivers + providerSpec: + apiVersion: cloudcredential.openshift.io/v1 + kind: GCPProviderSpec + predefinedRoles: + - roles/file.editor + - roles/resourcemanager.tagUser + skipServiceCheck: true +---- + +.. Use the `CredentialsRequest` file to create a {gcp-short} service account by running the following command: ++ +[source, terminal] +---- +$ ./ccoctl gcp create-service-accounts --name=<name> \// <1> + --workload-identity-pool=<pool_id> \// <2> + --workload-identity-provider=<provider_id> \// <3> + --project=<project_id> \// <4> + --credentials-requests-dir=/tmp/credreq <5> +---- +<1> `<name>` is a user-chosen name. +<2> `<pool_id>` comes from Step 2 above. +<3> `<provider_id>` comes from Step 2 above. +<4> `<project_id>` comes from Step 1.a above. +<5> The name of the directory where the `CredentialsRequest` file resides. ++ +.Example output +[source, terminal] +---- +2025/02/10 17:47:39 Credentials loaded from gcloud CLI defaults +2025/02/10 17:47:42 IAM service account filestore-service-account-openshift-gcp-filestore-csi-driver-operator created +2025/02/10 17:47:44 Unable to add predefined roles to IAM service account, retrying... +2025/02/10 17:47:59 Updated policy bindings for IAM service account filestore-service-account-openshift-gcp-filestore-csi-driver-operator +2025/02/10 17:47:59 Saved credentials configuration to: /tmp/install-dir/ <1> +openshift-cluster-csi-drivers-gcp-filestore-cloud-credentials-credentials.yaml +---- +<1> The current directory. + +.. Find the service account email of the newly created service account by running the following command: ++ +[source, terminal] +---- +$ cat /tmp/install-dir/manifests/openshift-cluster-csi-drivers-gcp-filestore-cloud-credentials-credentials.yaml | yq '.data["service_account.json"]' | base64 -d | jq '.service_account_impersonation_url' +---- ++ +.Example output +[source, terminal] +---- +https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/filestore-se-openshift-g-ch8cm@openshift-gce-devel.iam.gserviceaccount.com:generateAccessToken +---- ++ +In this example output, the service account email is `filestore-se-openshift-g-ch8cm@openshift-gce-devel.iam.gserviceaccount.com`.
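+
+.. Optional: If you script this step, you can capture just the email portion in a shell variable instead of copying it manually. The following is a sketch; it assumes the same credentials file path as the previous step, and the `SA_EMAIL` variable name is arbitrary:
++
+[source, terminal]
+----
+$ export SA_EMAIL=$(cat /tmp/install-dir/manifests/openshift-cluster-csi-drivers-gcp-filestore-cloud-credentials-credentials.yaml | yq '.data["service_account.json"]' | base64 -d | jq -r '.service_account_impersonation_url' | sed 's|.*/serviceAccounts/||; s|:generateAccessToken.*||')
+----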
+ +.Results + +You now have the following parameters that you need to install the {gcp-short} Filestore CSI Driver Operator: + +* Project number - from Step 1.b + +* Pool ID - from Step 2 + +* Provider ID - from Step 2 + +* Service account email - from Step 3.c + + diff --git a/storage/container_storage_interface/persistent-storage-csi-google-cloud-file.adoc b/storage/container_storage_interface/persistent-storage-csi-google-cloud-file.adoc index a8c2a88363fa..ae0db18f6661 100644 --- a/storage/container_storage_interface/persistent-storage-csi-google-cloud-file.adoc +++ b/storage/container_storage_interface/persistent-storage-csi-google-cloud-file.adoc @@ -14,15 +14,27 @@ toc::[] Familiarity with xref:../../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[persistent storage] and xref:../../storage/container_storage_interface/persistent-storage-csi.adoc#persistent-storage-csi[configuring CSI volumes] is recommended when working with a CSI Operator and driver. -To create CSI-provisioned PVs that mount to GCP Filestore Storage assets, you install the GCP Filestore CSI Driver Operator and the GCP Filestore CSI driver in the `openshift-cluster-csi-drivers` namespace. +To create CSI-provisioned PVs that mount to {gcp-short} Filestore Storage assets, you install the {gcp-short} Filestore CSI Driver Operator and the {gcp-short} Filestore CSI driver in the `openshift-cluster-csi-drivers` namespace. -* The _GCP Filestore CSI Driver Operator_ does not provide a storage class by default, but xref:../../storage/container_storage_interface/persistent-storage-csi-google-cloud-file.adoc#persistent-storage-csi-google-cloud-file-create-sc_persistent-storage-csi-google-cloud-file[you can create one if needed]. The GCP Filestore CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage. +* The _{gcp-short} Filestore CSI Driver Operator_ does not provide a storage class by default, but xref:../../storage/container_storage_interface/persistent-storage-csi-google-cloud-file.adoc#persistent-storage-csi-google-cloud-file-create-sc_persistent-storage-csi-google-cloud-file[you can create one if needed]. The {gcp-short} Filestore CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage. -* The _GCP Filestore CSI driver_ enables you to create and mount GCP Filestore PVs. +* The _{gcp-short} Filestore CSI driver_ enables you to create and mount {gcp-short} Filestore PVs. + +{product-title} {gcp-short} Filestore supports Workload Identity. This allows users to access Google Cloud resources using federated identities instead of a service account key. {gcp-wid-short} must be enabled globally during installation, and then configured for the {gcp-short} Filestore CSI Driver Operator. For more information, see xref:../../storage/container_storage_interface/persistent-storage-csi-google-cloud-file.adoc#installing-the-gcp-filestore-csi-driver-operator[Installing the {gcp-short} Filestore CSI Driver Operator]. 
include::modules/persistent-storage-csi-about.adoc[leveloffset=+1] -include::modules/persistent-storage-csi-gcp-file-install.adoc[leveloffset=+1] +== Installing the {gcp-short} Filestore CSI Driver Operator + +include::modules/persistent-storage-csi-gcp-filestore-wif.adoc[leveloffset=+2] + +ifndef::openshift-dedicated[] +[role="_additional-resources"] +.Additional resources +* xref:../../installing/installing_gcp/installing-gcp-customizations.adoc#cco-ccoctl-creating-at-once_installing-gcp-customizations[Creating {gcp-short} resources with the Cloud Credential Operator utility] +endif::[] + +include::modules/persistent-storage-csi-gcp-file-install.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources @@ -36,3 +48,7 @@ include::modules/persistent-storage-csi-google-cloud-file-delete-instances.adoc[ [role="_additional-resources"] == Additional resources * xref:../../storage/container_storage_interface/persistent-storage-csi.adoc#persistent-storage-csi[Configuring CSI volumes] +ifndef::openshift-dedicated[] +[id="osdk-cco-gcp_{context}"] +* xref:../../operators/operator_sdk/token_auth/osdk-cco-gcp.adoc[CCO-based workflow for OLM-managed Operators with {gcp-short} Workload Identity]. +endif::openshift-dedicated[] From 97ff21ca61446429947512f1b442f46f7b811708 Mon Sep 17 00:00:00 2001 From: Michael Burke Date: Tue, 28 Jan 2025 13:53:12 -0500 Subject: [PATCH 298/669] MCO-1316 On-Cluster Layering GA - 4.18 upgrades and integrations --- .../mco-coreos-layering.adoc | 19 ++- ...os-layering-configuring-on-extensions.adoc | 117 +++++++++++++ ...eos-layering-configuring-on-modifying.adoc | 139 ++++++++++++++++ ...coreos-layering-configuring-on-revert.adoc | 105 ++++++++++++ modules/coreos-layering-configuring-on.adoc | 157 ++++++++++++------ modules/coreos-layering-removing.adoc | 16 +- modules/rhcos-add-extensions.adoc | 17 +- .../coreos-layering-configuring-on-pause.adoc | 22 +++ 8 files changed, 524 insertions(+), 68 deletions(-) create mode 100644 modules/coreos-layering-configuring-on-extensions.adoc create mode 100644 modules/coreos-layering-configuring-on-modifying.adoc create mode 100644 modules/coreos-layering-configuring-on-revert.adoc create mode 100644 snippets/coreos-layering-configuring-on-pause.adoc diff --git a/machine_configuration/mco-coreos-layering.adoc b/machine_configuration/mco-coreos-layering.adoc index ad4626f21fd0..1fdaf2ffbcf3 100644 --- a/machine_configuration/mco-coreos-layering.adoc +++ b/machine_configuration/mco-coreos-layering.adoc @@ -184,13 +184,30 @@ include::modules/coreos-layering-configuring-on.adoc[leveloffset=+1] .Additional resources * xref:../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling[Enabling features using feature gates] +* xref:../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-pause_update-using-custom-machine-config-pools[Pausing the machine config pools] + +include::modules/coreos-layering-configuring-on-modifying.adoc[leveloffset=+2] + +.Additional resources +* xref:../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-pause_update-using-custom-machine-config-pools[Pausing the machine config pools] + +include::modules/coreos-layering-configuring-on-extensions.adoc[leveloffset=+2] + +.Additional resources +* xref:../machine_configuration/machine-configs-configure.adoc#rhcos-add-extensions_machine-configs-configure[Adding extensions to RHCOS] +*
xref:../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-pause_update-using-custom-machine-config-pools[Pausing the machine config pools] + +// Not in 4.18; maybe 4.19 +// include::modules/coreos-layering-configuring-on-rebuild.adoc[leveloffset=+2] + +include::modules/coreos-layering-configuring-on-revert.adoc[leveloffset=+2] include::modules/coreos-layering-configuring.adoc[leveloffset=+1] .Additional resources xref:../machine_configuration/mco-coreos-layering.adoc#coreos-layering-updating_mco-coreos-layering[Updating with a {op-system} custom layered image] -include::modules/coreos-layering-removing.adoc[leveloffset=+1] +include::modules/coreos-layering-removing.adoc[leveloffset=+2] include::modules/coreos-layering-updating.adoc[leveloffset=+1] //// diff --git a/modules/coreos-layering-configuring-on-extensions.adoc b/modules/coreos-layering-configuring-on-extensions.adoc new file mode 100644 index 000000000000..42d2be638af9 --- /dev/null +++ b/modules/coreos-layering-configuring-on-extensions.adoc @@ -0,0 +1,117 @@ +// Module included in the following assemblies: +// +// * machine_configuration/coreos-layering.adoc + +:_mod-docs-content-type: PROCEDURE +[id="coreos-layering-configuring-on-extensions_{context}"] += Installing extensions into an on-cluster custom layered image + +You can install {op-system-first} extensions into your on-cluster custom layered image by creating a machine config that lists the extensions that you want to install. The Machine Config Operator (MCO) installs the extensions onto the nodes associated with a specific machine config pool (MCP). + +For a list of the currently supported extensions, see "Adding extensions to RHCOS." + +After you make the change, the MCO reboots the nodes associated with the specified machine config pool. + +[NOTE] +==== +include::snippets/coreos-layering-configuring-on-pause.adoc[] +==== + +.Prerequisites + +* You have opted in to on-cluster layering by creating a `MachineOSConfig` object. + +.Procedure + +. Create a YAML file for the machine config similar to the following example: ++ +[source,yaml] +---- +apiVersion: machineconfiguration.openshift.io/v1 <1> +kind: MachineConfig +metadata: + labels: + machineconfiguration.openshift.io/role: layered <2> + name: 80-worker-extensions +spec: + config: + ignition: + version: 3.2.0 + extensions: <3> + - usbguard + - kerberos +---- +<1> Specifies the `machineconfiguration.openshift.io/v1` API that is required for `MachineConfig` CRs. +<2> Specifies the machine config pool to apply the `MachineConfig` object to. +<3> Lists the {op-system-first} extensions that you want to install. + +. Create the `MachineConfig` object: ++ +[source,terminal] +---- +$ oc create -f <file_name>.yaml +---- + +.Verification + +. You can watch the build progress by using the following command: ++ +[source,terminal] +---- +$ oc get machineosbuilds +---- ++ +.Example output +[source,terminal] +---- +NAME PREPARED BUILDING SUCCEEDED INTERRUPTED FAILED +layered-f8ab2d03a2f87a2acd449177ceda805d False True False False False <1> +---- +<1> The value `True` in the `BUILDING` column indicates that the `MachineOSBuild` object is building. When the `SUCCEEDED` column reports `True`, the build is complete. + +.
You can watch as the new machine config is rolled out to the nodes by using the following command: ++ +[source,terminal] +---- +$ oc get machineconfigpools +---- ++ +.Example output +[source,terminal] +---- +NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE +layered rendered-layered-221507009cbcdec0eec8ab3ccd789d18 False True False 1 0 0 0 167m <1> +master rendered-master-a0b404d061a6183cc36d302363422aba True False False 3 3 3 0 3h38m +worker rendered-worker-221507009cbcdec0eec8ab3ccd789d18 True False False 2 2 2 0 3h38m +---- +<1> The value `False` in the `UPDATED` column indicates that the new custom layered image is rolling out to the nodes. When the `UPDATED` column reports `True`, the new custom layered image has rolled out to the nodes. + +. When the associated machine config pool is updated, check that the extensions were installed: + +.. Open an `oc debug` session to the node by running the following command: ++ +[source,terminal] +---- +$ oc debug node/<node_name> +---- + +.. Set `/host` as the root directory within the debug shell by running the following command: ++ +[source,terminal] +---- +sh-5.1# chroot /host +---- + +.. Use an appropriate command to verify that the extensions were installed. The following example shows that the `usbguard` extension was installed: ++ +[source,terminal] +---- +sh-5.1# rpm -qa | grep usbguard +---- ++ +.Example output +[source,terminal] +---- +usbguard-selinux-1.0.0-15.el9.noarch +usbguard-1.0.0-15.el9.x86_64 +---- diff --git a/modules/coreos-layering-configuring-on-modifying.adoc b/modules/coreos-layering-configuring-on-modifying.adoc new file mode 100644 index 000000000000..2bba8718dfd9 --- /dev/null +++ b/modules/coreos-layering-configuring-on-modifying.adoc @@ -0,0 +1,139 @@ +// Module included in the following assemblies: +// +// * machine_configuration/coreos-layering.adoc + +:_mod-docs-content-type: PROCEDURE +[id="coreos-layering-configuring-on-modifying_{context}"] += Modifying a custom layered image + +You can modify an on-cluster custom layered image, as needed. This allows you to install additional packages, remove existing packages, change the pull or push repositories, update secrets, or make other similar changes. You can edit the `MachineOSConfig` object, apply changes to the YAML file that created the `MachineOSConfig` object, or create a new YAML file for that purpose. + +If you modify and apply the `MachineOSConfig` object YAML or create a new YAML file, the YAML overwrites any changes you made directly to the `MachineOSConfig` object itself. + +include::snippets/coreos-layering-configuring-on-pause.adoc[] + +.Prerequisites + +* You have opted in to on-cluster layering by creating a `MachineOSConfig` object. + +.Procedure + +* Modify the `MachineOSConfig` object to update the associated custom layered image: + +.. Edit the `MachineOSConfig` object to modify the custom layered image. The following example adds the `rngd` daemon to nodes that already have the `tree` package that was installed using a custom layered image.
+ [source,yaml] ---- apiVersion: machineconfiguration.openshift.io/v1alpha1 kind: MachineOSConfig metadata: name: layered-alpha1 spec: machineConfigPool: name: layered buildInputs: containerFile: - containerfileArch: noarch content: |- <1> FROM configs AS final RUN rpm-ostree install rng-tools && \ systemctl enable rngd && \ rpm-ostree cleanup -m && \ ostree container commit RUN rpm-ostree install tree && \ ostree container commit imageBuilder: imageBuilderType: PodImageBuilder baseImagePullSecret: name: global-pull-secret-copy <2> renderedImagePushspec: image-registry.openshift-image-registry.svc:5000/openshift-machine-config-operator/os-images:latest <3> renderedImagePushSecret: <4> name: new-secret-name buildOutputs: currentImagePullSecret: name: new-secret-name <5> ---- <1> Optional: Modify the Containerfile, for example to add or remove packages. <2> Optional: Update the secret needed to pull the base operating system image from the registry. <3> Optional: Modify the image registry to push the newly-built custom layered image to. <4> Optional: Update the secret needed to push the newly-built custom layered image to the registry. <5> Optional: Update the secret needed to pull the newly-built custom layered image from the registry. ++ +When you save the changes, the MCO drains, cordons, and reboots the nodes. After the reboot, the node uses the new custom layered image. If your changes modify a secret only, no new build is triggered and no reboot is performed. + +.Verification + +. Verify that the new `MachineOSBuild` object was created by using the following command: ++ +[source,terminal] +---- +$ oc get machineosbuild +---- ++ +.Example output +[source,terminal] +---- +NAME PREPARED BUILDING SUCCEEDED INTERRUPTED FAILED +layered-a5457b883f5239cdcb71b57e1a30b6ef False False True False False +layered-f91f0f5593dd337d89bf4d38c877590b False True False False False <1> +---- +<1> The value `True` in the `BUILDING` column indicates that the `MachineOSBuild` object is building. When the `SUCCEEDED` column reports `True`, the build is complete. + +. You can watch as the new machine config is rolled out to the nodes by using the following command: ++ +[source,terminal] +---- +$ oc get machineconfigpools +---- ++ +.Example output +[source,terminal] +---- +NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE +layered rendered-layered-221507009cbcdec0eec8ab3ccd789d18 False True False 1 0 0 0 167m <1> +master rendered-master-a0b404d061a6183cc36d302363422aba True False False 3 3 3 0 3h38m +worker rendered-worker-221507009cbcdec0eec8ab3ccd789d18 True False False 2 2 2 0 3h38m +---- +<1> The value `False` in the `UPDATED` column indicates that the new custom layered image is rolling out to the nodes. When the `UPDATED` column reports `True`, the new custom layered image has rolled out to the nodes. + +. When the node is back in the `Ready` state, check that the changes were applied: + +.. Open an `oc debug` session to the node by running the following command: ++ +[source,terminal] +---- +$ oc debug node/<node_name> +---- + +.. Set `/host` as the root directory within the debug shell by running the following command: ++ +[source,terminal] +---- +sh-5.1# chroot /host +---- + +.. Use an appropriate command to verify that the change was applied.
The following example shows that the `rngd` daemon was installed: ++ +[source,terminal] +---- +sh-5.1# rpm -qa | grep rng-tools +---- ++ +.Example output +[source,terminal] +---- +rng-tools-6.17-3.fc41.x86_64 +---- ++ +[source,terminal] +---- +sh-5.1# rngd -v +---- ++ +.Example output +[source,terminal] +---- +rngd 6.16 +---- diff --git a/modules/coreos-layering-configuring-on-revert.adoc b/modules/coreos-layering-configuring-on-revert.adoc new file mode 100644 index 000000000000..855b9c7fe48f --- /dev/null +++ b/modules/coreos-layering-configuring-on-revert.adoc @@ -0,0 +1,105 @@ +// Module included in the following assemblies: +// +// * machine_configuration/coreos-layering.adoc + +:_mod-docs-content-type: PROCEDURE +[id="coreos-layering-configuring-on-revert_{context}"] += Reverting an on-cluster custom layered image + +You can revert an on-cluster custom layered image from nodes by removing the label for the machine config pool (MCP) that you specified in the `MachineOSConfig` object. After you remove the label, the Machine Config Operator (MCO) reboots the nodes in that MCP with the cluster base {op-system-first} image, along with any previously-made machine config changes, overriding the custom layered image. + +[IMPORTANT] +==== +If the node where the custom layered image is deployed uses a custom machine config pool, before you remove the label, make sure the node is associated with a second MCP. +==== + +You can reapply the custom layered image to the node by using the `oc label node/<node_name> 'node-role.kubernetes.io/<mcp_name>='` command. + +.Prerequisites + +* You have opted in to on-cluster layering by creating a `MachineOSConfig` object. + +.Procedure + +* Remove the label from the node by using the following command: ++ +[source,terminal] +---- +$ oc label node/<node_name> node-role.kubernetes.io/<mcp_name>- +---- ++ +After you remove the label, the MCO drains, cordons, and reboots the nodes. After the reboot, the node uses the cluster base {op-system-first} image. + +.Verification + +You can verify that the custom layered image is removed by performing the following checks: + +. Check that the worker machine config pool is updating with the previous machine config: ++ +[source,terminal] +---- +$ oc get mcp +---- ++ +.Example output +[source,terminal] +---- +NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE +layered rendered-layered-bde4e4206442c0a48b1a1fb35ba56e85 True False False 0 0 0 0 4h46m +master rendered-master-8332482204e0b76002f15ecad15b6c2d True False False 3 3 3 0 5h26m +worker rendered-worker-bde4e4206442c0a48b1a1fb35ba56e85 False True False 3 2 2 0 5h26m <1> +---- +<1> The value `False` in the `UPDATED` column indicates that the base image is rolling out to the nodes. When the `UPDATED` column reports `True`, the base image has rolled out to the nodes. + +. Check the nodes to see that scheduling on the nodes is disabled.
This indicates that the change is being applied: ++ +[source,terminal] +---- +$ oc get nodes +---- ++ +.Example output +[source,terminal] +---- +NAME STATUS ROLES AGE VERSION +ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.31.3 +ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.31.3 +ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.31.3 +ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.31.3 +ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.31.3 +ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.31.3 +---- + +. When the node is back in the `Ready` state, check that the node is using the base image: + +.. Open an `oc debug` session to the node. For example: ++ +[source,terminal] +---- +$ oc debug node/<node_name> +---- + +.. Set `/host` as the root directory within the debug shell: ++ +[source,terminal] +---- +sh-5.1# chroot /host +---- + +.. Run an `rpm-ostree status` command to view that the base image is in use: ++ +[source,terminal] +---- +sh-5.1# rpm-ostree status +---- ++ +.Example output ++ +[source,terminal] +---- +State: idle +Deployments: +* ostree-unverified-image:containers-storage:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76721c875a2b79688be46b1dca654c2c6619a6be28b29a2822cd86c3f9d8e3c1 + Digest: sha256:76721c875a2b79688be46b1dca654c2c6619a6be28b29a2822cd86c3f9d8e3c1 + Version: 418.94.202501300706-0 (2025-01-30T07:10:58Z) +---- diff --git a/modules/coreos-layering-configuring-on.adoc b/modules/coreos-layering-configuring-on.adoc index 298964aaa78a..77d129bb2585 100644 --- a/modules/coreos-layering-configuring-on.adoc +++ b/modules/coreos-layering-configuring-on.adoc @@ -13,7 +13,21 @@ To apply a custom layered image to your cluster by using the on-cluster build pr * where the final image should be pushed and pulled from * the push and pull secrets to use -When you create the object, the Machine Config Operator (MCO) creates a `MachineOSBuild` object and a `machine-os-builder` pod. The build process also creates transient objects, such as config maps, which are cleaned up after the build is complete. +When you create the object, the Machine Config Operator (MCO) creates a `MachineOSBuild` object and a builder pod. The build process also creates transient objects, such as config maps, which are cleaned up after the build is complete. The `MachineOSBuild` object and the associated `builder-*` pod use the same naming scheme, `<MachineOSConfig_name>-<hash>`, for example: + +.Example `MachineOSBuild` object +[source,terminal] +---- +NAME PREPARED BUILDING SUCCEEDED INTERRUPTED FAILED +layered-c8765e26ebc87e1e17a7d6e0a78e8bae False False True False False +---- + +.Example builder pod +[source,terminal] +---- +NAME READY STATUS RESTARTS AGE +build-layered-c8765e26ebc87e1e17a7d6e0a78e8bae 2/2 Running 0 11m +---- When the build is complete, the MCO pushes the new custom layered image to your repository for use when deploying new nodes. You can see the digested image pull spec for the new custom layered image in the `MachineOSBuild` object and `machine-os-builder` pod. You need a separate `MachineOSConfig` CR for each machine config pool where you :FeatureName: On-cluster image layering include::snippets/technology-preview.adoc[] +include::snippets/coreos-layering-configuring-on-pause.adoc[] + +In the case of a build failure, for example due to network issues or an invalid secret, the MCO retries the build three additional times before the job fails.
The MCO creates a different build pod for each build attempt. You can use the build pod logs to troubleshoot any build failures. However, the MCO automatically removes these build pods after a short period of time. + +.Example failed `MachineOSBuild` object +[source,terminal] +---- +NAME PREPARED BUILDING SUCCEEDED INTERRUPTED FAILED +layered-c8765e26ebc87e1e17a7d6e0a78e8bae False False False False True +---- + +// Not in 4.18; maybe 4.19 +// You can manually rebuild your custom layered image by either modifying your `MachineOSConfig` object or applying an annotation to the `MachineOSConfig` object as discussed in the following section. + +[discrete] +[id="coreos-layering-configuring-on-limitations_{context}"] +== On-cluster layering Technology Preview known limitations + +Note the following limitations when working with the on-cluster layering feature: + +* On-cluster layering is supported only for {product-title} clusters on the AMD64 architecture. +* On-cluster layering is not supported on multi-architecture compute machines, {sno}, or disconnected clusters. +* If you scale up a machine set that uses a custom layered image, the nodes reboot two times: first, when the node is initially created with the base image, and a second time, when the custom layered image is applied. +* Node disruption policies are not supported on nodes with a custom layered image. As a result, the following configuration changes cause a node reboot: +** Modifying the configuration files in the `/var` or `/etc` directory +** Adding or modifying a systemd service +** Changing SSH keys +** Removing mirroring rules from `ICSP`, `ITMS`, and `IDMS` objects +** Changing the trusted CA, by updating the `user-ca-bundle` configmap in the `openshift-config` namespace +* The images used in creating custom layered images take up space in your push registry. Always be aware of the free space in your registry and prune the images as needed. .Prerequisites -* You have enabled the `TechPreviewNoUpgrade` feature set by using the feature gates. For more information, see "Enabling features using feature gates". +* You have enabled the `TechPreviewNoUpgrade` feature set by using the feature gates. For more information, see "Enabling features using feature gates". -* You have a copy of the global pull secret in the `openshift-machine-config-operator` namespace that the MCO needs in order to pull the base operating system image. +* You have a copy of the pull secret in the `openshift-machine-config-operator` namespace that the MCO needs to pull the base operating system image. -* You have a copy of the `etc-pki-entitlement` secret in the `openshift-machine-api` namespace. // Not in 4.18; maybe in 4.19 // If you are using the global pull secret, the MCO automatically creates a copy when you first create a `MachineOSConfig` object. -* You have the push secret that the MCO needs in order to push the new custom layered image to your registry. +* You have the push secret of the registry that the MCO needs to push the new custom layered image to. * You have a pull secret that your nodes need to pull the new custom layered image from your registry. This should be a different secret than the one used to push the image to the repository. @@ -42,47 +87,48 @@ include::snippets/technology-preview.adoc[] .Procedure -. Create a `machineOSconfig` object: +. Create a `MachineOSConfig` object: ..
Create a YAML file similar to the following: + -[source,terminal] +[source,yaml] ---- -apiVersion: machineconfiguration.openshift.io/v1alpha1 +apiVersion: machineconfiguration.openshift.io/v1alpha1 <1> kind: MachineOSConfig metadata: - name: layered + name: layered <2> spec: machineConfigPool: - name: <mcp_name> <1> + name: <mcp_name> <3> buildInputs: - containerFile: # <2> - - containerfileArch: noarch <3> + containerFile: <4> + - containerfileArch: noarch content: |- - FROM configs AS final <4> - RUN dnf install -y cowsay && \ - dnf clean all && \ - ostree container commit - imageBuilder: # <5> + FROM configs AS final + RUN rpm-ostree install tree && \ + ostree container commit + imageBuilder: <5> imageBuilderType: PodImageBuilder - baseImagePullSecret: # <6> + baseImagePullSecret: <6> name: global-pull-secret-copy - renderedImagePushspec: image-registry.openshift-image-registry.svc:5000/openshift/os-image:latest # <7> + renderedImagePushspec: image-registry.openshift-image-registry.svc:5000/openshift/os-image:latest <7> - renderedImagePushSecret: # <8> + renderedImagePushSecret: <8> name: builder-dockercfg-7lzwl - buildOutputs: # <9> + buildOutputs: <9> currentImagePullSecret: name: builder-dockercfg-7lzwl ---- -<1> Specifies the machine config pool to deploy the custom layered image. -<2> Specifies the Containerfile to configure the custom layered image. You can specify multiple build stages in the Containerfile. -<3> Specifies the architecture of the image to be built. You must set this parameter to `noarch`. -<4> Specifies the build stage as final. This field is required and applies to the last image in the build. -<5> Specifies the name of the image builder to use. You must set this parameter to `PodImageBuilder`. +<1> Specifies the `machineconfiguration.openshift.io/v1alpha1` API that is required for `MachineOSConfig` CRs. +<2> Specifies a name for the `MachineOSConfig` object. This name is used with other on-cluster layering resources. The examples in this documentation use the name `layered`. +<3> Specifies the name of the machine config pool associated with the nodes where you want to deploy the custom layered image. +<4> Specifies the Containerfile to configure the custom layered image. +<5> Specifies the name of the image builder to use. This must be `PodImageBuilder`. <6> Specifies the name of the pull secret that the MCO needs in order to pull the base operating system image from the registry. <7> Specifies the image registry to push the newly-built custom layered image to. This can be any registry that your cluster has access to. This example uses the internal {product-title} registry. <8> Specifies the name of the push secret that the MCO needs in order to push the newly-built custom layered image to that registry. <9> Specifies the secret required by the image registry that the nodes need in order to pull the newly-built custom layered image. This should be a different secret than the one used to push the image to your repository. // + // https://github.com/openshift/openshift-docs/pull/87486/files has the v1 api versions ..
Create the `MachineOSConfig` object: + @@ -103,8 +149,8 @@ $ oc get machineosbuild .Example output showing that the `MachineOSBuild` object is ready [source,terminal] ---- -NAME PREPARED BUILDING SUCCEEDED INTERRUPTED FAILED -layered-rendered-layered-ad5a3cad36303c363cf458ab0524e7c0-builder False False True False False +NAME PREPARED BUILDING SUCCEEDED INTERRUPTED FAILED +layered-ad5a3cad36303c363cf458ab0524e7c0-builder False False True False False ---- .. Edit the nodes where you want to deploy the custom layered image by adding a label for the machine config pool you specified in the `MachineOSConfig` object: @@ -135,14 +181,14 @@ $ oc get pods -n openshift-machine-config-operator [source,terminal] ---- NAME READY STATUS RESTARTS AGE -build-rendered-layered-ad5a3cad36303c363cf458ab0524e7c0 2/2 Running 0 2m40s # <1> +build-layered-ad5a3cad36303c363cf458ab0524e7c0-hxrws 2/2 Running 0 2m40s # <1> # ... machine-os-builder-6fb66cfb99-zcpvq 1/1 Running 0 2m42s # <2> ---- -<1> This is the build pod where the custom layered image is building. +<1> This is the build pod where the custom layered image is building, named in the `build-<MachineOSConfig_name>-<hash>` format. <2> This pod can be used for troubleshooting. -. Verify the current stage of your layered build by running the following command: +. Verify the status of the `MachineOSBuild` object by running the following command: + [source,terminal] ---- $ oc get machineosbuilds ---- + .Example output [source,terminal] ---- -NAME PREPARED BUILDING SUCCEEDED INTERRUPTED FAILED -layered-rendered-layered-ef6460613affe503b530047a11b28710-builder False True False False False +NAME PREPARED BUILDING SUCCEEDED INTERRUPTED FAILED +layered-ad5a3cad36303c363cf458ab0524e7c0 False True False False False <1> ---- +<1> The `MachineOSBuild` is named in the `<MachineOSConfig_name>-<hash>` format. . Verify that the `MachineOSBuild` object contains a reference to the new custom layered image by running the following command: + [source,terminal] ---- $ oc describe machineosbuild + .Example output [source,yaml] ---- -apiVersion: machineconfiguration.openshift.io/v1alpha1 -kind: MachineOSBuild -metadata: - name: layered-rendered-layered-ad5a3cad36303c363cf458ab0524e7c0-builder -spec: - desiredConfig: - name: rendered-layered-ad5a3cad36303c363cf458ab0524e7c0 - machineOSConfig: - name: layered - renderedImagePushspec: image-registry.openshift-image-registry.svc:5000/openshift-machine-config-operator/os-image:latest +Name: layered-ad5a3cad36303c363cf458ab0524e7c0 +# ... +API Version: machineconfiguration.openshift.io/v1alpha1 +Kind: MachineOSBuild +# ... +Spec: + Config Generation: 1 + Desired Config: + Name: rendered-layered-ad5a3cad36303c363cf458ab0524e7c0 + Machine OS Config: + Name: layered-alpha1 + Rendered Image Pushspec: image-registry.openshift-image-registry.svc:5000/openshift-machine-config-operator/os-images:layered-ad5a3cad36303c363cf458ab0524e7c0 # ...
-status: - conditions: - - lastTransitionTime: "2024-05-21T20:25:06Z" - message: Build Ready - reason: Ready - status: "True" - type: Succeeded - finalImagePullspec: image-registry.openshift-image-registry.svc:5000/openshift-machine-config-operator/os-image@sha256:f636fa5b504e92e6faa22ecd71a60b089dab72200f3d130c68dfec07148d11cd # <1> + Last Transition Time: 2025-02-12T19:21:28Z + Message: Build Ready + Reason: Ready + Status: True + Type: Succeeded + Final Image Pullspec: image-registry.openshift-image-registry.svc:5000/openshift-machine-config-operator/os-images@sha256:312e48825e074b01a913deedd6de68abd44894ede50b2d14f99d722f13cda04b <1> ---- <1> Digested image pull spec for the new custom layered image. @@ -216,8 +263,8 @@ sh-5.1# rpm-ostree status ---- # ... Deployments: -* ostree-unverified-registry:quay.io/openshift-release-dev/os-image@sha256:f636fa5b504e92e6faa22ecd71a60b089dab72200f3d130c68dfec07148d11cd # <1> - Digest: sha256:bcea2546295b2a55e0a9bf6dd4789433a9867e378661093b6fdee0031ed1e8a4 - Version: 416.94.202405141654-0 (2024-05-14T16:58:43Z) +* ostree-unverified-registry:image-registry.openshift-image-registry.svc:5000/openshift-machine-config-operator/os-images@sha256:312e48825e074b01a913deedd6de68abd44894ede50b2d14f99d722f13cda04b + Digest: sha256:312e48825e074b01a913deedd6de68abd44894ede50b2d14f99d722f13cda04b <1> + Version: 418.94.202502100215-0 (2025-02-12T19:20:44Z) ---- <1> Digested image pull spec for the new custom layered image. diff --git a/modules/coreos-layering-removing.adoc b/modules/coreos-layering-removing.adoc index 79c4bae49d88..3006f40cee13 100644 --- a/modules/coreos-layering-removing.adoc +++ b/modules/coreos-layering-removing.adoc @@ -4,15 +4,15 @@ :_mod-docs-content-type: PROCEDURE [id="coreos-layering-removing_{context}"] -= Removing a {op-system} custom layered image += Reverting an out-of-cluster node -You can easily revert {op-system-first} image layering from the nodes in specific machine config pools. The Machine Config Operator (MCO) reboots those nodes with the cluster base {op-system-first} image, overriding the custom layered image. +You can revert an out-of-cluster custom layered image from the nodes in specific machine config pools. The Machine Config Operator (MCO) reboots those nodes with the cluster base {op-system-first} image, overriding the custom layered image. To remove a {op-system-first} custom layered image from your cluster, you need to delete the machine config that applied the image. .Procedure -. Delete the machine config that applied the custom layered image. +* Delete the machine config that applied the custom layered image. + [source,terminal] ---- @@ -62,25 +62,25 @@ ip-10-0-218-151.us-west-1.compute.internal Ready worker . When the node is back in the `Ready` state, check that the node is using the base image: -.. Open an `oc debug` session to the node. For example: +.. Open an `oc debug` session to the node by running the following command: + [source,terminal] ---- -$ oc debug node/ip-10-0-155-125.us-west-1.compute.internal +$ oc debug node/<node_name> ---- -.. Set `/host` as the root directory within the debug shell: +.. Set `/host` as the root directory within the debug shell by running the following command: + [source,terminal] ---- -sh-4.4# chroot /host +sh-5.1# chroot /host ----
.. Run the `rpm-ostree status` command to view that the custom layered image is in use: + [source,terminal] ---- -sh-4.4# sudo rpm-ostree status +sh-5.1# sudo rpm-ostree status ---- + .Example output diff --git a/modules/rhcos-add-extensions.adoc b/modules/rhcos-add-extensions.adoc index eb38fb072109..f9f537e10bfd 100644 --- a/modules/rhcos-add-extensions.adoc +++ b/modules/rhcos-add-extensions.adoc @@ -6,15 +6,24 @@ [id="rhcos-add-extensions_{context}"] = Adding extensions to {op-system} -{op-system} is a minimal container-oriented RHEL operating system, designed to provide a common set of capabilities to {product-title} clusters across all platforms. While adding software packages to {op-system} systems is generally discouraged, the MCO provides an `extensions` feature you can use to add a minimal set of features to {op-system} nodes. + +{op-system} is a minimal container-oriented RHEL operating system, designed to provide a common set of capabilities to {product-title} clusters across all platforms. Although adding software packages to {op-system} systems is generally discouraged, the MCO provides an `extensions` feature you can use to add a minimal set of features to {op-system} nodes. Currently, the following extensions are available: -* **usbguard**: Adding the `usbguard` extension protects {op-system} systems from attacks from intrusive USB devices. See link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/security_hardening/index#usbguard_protecting-systems-against-intrusive-usb-devices[USBGuard] for details. +* **usbguard**: The `usbguard` extension protects {op-system} systems from attacks by intrusive USB devices. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/security_hardening/index#usbguard_protecting-systems-against-intrusive-usb-devices[USBGuard]. + +* **kerberos**: The `kerberos` extension provides a mechanism that allows both users and machines to identify themselves to the network to receive defined, limited access to the areas and services that an administrator has configured. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system-level_authentication_guide/using_kerberos[Using Kerberos], including how to set up a Kerberos client and mount a Kerberized NFS share. + +* **sandboxed-containers**: The `sandboxed-containers` extension contains RPMs for Kata, QEMU, and its dependencies. For more information, see https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/latest[OpenShift Sandboxed Containers]. + +* **ipsec**: The `ipsec` extension contains RPMs for libreswan and NetworkManager-libreswan. + +* **wasm**: The `wasm` extension enables Developer Preview functionality in {product-title} for users who want to use WASM-supported workloads. -* **kerberos**: Adding the `kerberos` extension provides a mechanism that allows both users and machines to identify themselves to the network to receive defined, limited access to the areas and services that an administrator has configured. See link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system-level_authentication_guide/using_kerberos[Using Kerberos] for details, including how to set up a Kerberos client and mount a Kerberized NFS share.
+* **sysstat**: Adding the `sysstat` extension provides additional performance monitoring for {product-title} nodes, including the system activity reporter (`sar`) command for collecting and reporting information. -* **sysstat**: Adding the `sysstat` extension provides additional performance monitoring for {product-title} nodes, including the system activity reporter (`sar`) command for collecting and reporting information. See link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/monitoring_and_managing_system_status_and_performance/index#proc_providing-feedback-on-red-hat-documentation_monitoring-and-managing-system-status-and-performance[Monitoring and managing system status and performance] for details. +* **kernel-devel**: The `kernel-devel` extension provides kernel headers and makefiles sufficient to build modules against the kernel package. The following procedure describes how to use a machine config to add one or more extensions to your {op-system} nodes. diff --git a/snippets/coreos-layering-configuring-on-pause.adoc b/snippets/coreos-layering-configuring-on-pause.adoc new file mode 100644 index 000000000000..8892638b0d6a --- /dev/null +++ b/snippets/coreos-layering-configuring-on-pause.adoc @@ -0,0 +1,22 @@ +// Text snippet included in the following modules: +// +// * modules/coreos-layering-configuring-on.adoc +// * modules/coreos-layering-configuring-on-modifying.adoc + +:_mod-docs-content-type: SNIPPET + +Making certain changes to a `MachineOSConfig` object triggers an automatic rebuild of the associated custom layered image. You can mitigate the effects of the rebuild by pausing the machine config pool where the custom layered image is applied as described in "Pausing the machine config pools." For example, if you want to remove and replace a `MachineOSConfig` object, pausing the machine config pools before making the change prevents the MCO from reverting the associated nodes to the base image, reducing the number of reboots needed. + +When a machine config pool is paused, the `oc get machineconfigpools` command reports the following status: + +.Example output +[source,terminal] +---- +NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE +layered rendered-layered-221507009cbcdec0eec8ab3ccd789d18 False False False 1 0 0 0 3h23m <1> +master rendered-master-a0b404d061a6183cc36d302363422aba True False False 3 3 3 0 4h14m +worker rendered-worker-221507009cbcdec0eec8ab3ccd789d18 True False False 2 2 2 0 4h14m +---- +<1> The `layered` machine config pool is paused, as indicated by the three `False` statuses and the `READYMACHINECOUNT` at `0`. + +After the changes have been rolled out, you can unpause the machine config pool.
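+
+For reference, a minimal sketch of pausing and unpausing a pool from the command line, assuming the `layered` pool name used in these examples:
+
+[source,terminal]
+----
+$ oc patch mcp/layered --type merge --patch '{"spec":{"paused":true}}'
+----
+
+[source,terminal]
+----
+$ oc patch mcp/layered --type merge --patch '{"spec":{"paused":false}}'
+----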
From 7b9d4278b62ccf1b05b0c58a980921ce138255a3 Mon Sep 17 00:00:00 2001 From: sbeskin Date: Thu, 20 Feb 2025 12:08:23 +0200 Subject: [PATCH 299/669] CNV-49587_fix --- modules/virt-viewing-network-state-of-node-console.adoc | 3 --- 1 file changed, 3 deletions(-) diff --git a/modules/virt-viewing-network-state-of-node-console.adoc b/modules/virt-viewing-network-state-of-node-console.adoc index 155e004df95a..6c005d7e24a4 100644 --- a/modules/virt-viewing-network-state-of-node-console.adoc +++ b/modules/virt-viewing-network-state-of-node-console.adoc @@ -20,9 +20,6 @@ In the *NodeNetworkState* page, you can view the list of `NodeNetworkState` reso [id="virt-viewing-graphical-representation-of-nns-topology_{context}"] == Viewing a graphical representation of the NNS topology -:FeatureName: NNS topology view -include::snippets/technology-preview.adoc[] - To make the configuration of the node network in the cluster easier to understand, you can view it in the form of a diagram. The NNS topology diagram displays all node components (network interface controllers, bridges, bonds, and VLANs), their properties and configurations, and connections between the nodes. To open the topology view of the cluster, do the following: From 0e10c513563987321d5eecda524ebcb34d333af4 Mon Sep 17 00:00:00 2001 From: libander Date: Mon, 30 Sep 2024 16:20:31 -0500 Subject: [PATCH 300/669] Removing xrefs --- _topic_maps/_topic_map.yml | 243 +++++++++--------- _topic_maps/_topic_map_osd.yml | 218 ++++++++-------- adding_service_cluster/adding-service.adoc | 6 - .../rosa-available-services.adoc | 1 - architecture/index.adoc | 1 - cicd/gitops/viewing-argo-cd-logs.adoc | 4 +- ...-using-the-openshift-logging-operator.adoc | 8 +- ...experts-deploying-application-logging.adoc | 12 +- .../creating-infrastructure-machinesets.adoc | 2 - migrating_from_ocp_3_to_4/index.adoc | 2 +- .../planning-migration-3-4.adoc | 6 - .../distr-tracing-jaeger-updating.adoc | 3 +- observability/logging/cluster-logging.adoc | 14 +- .../log-forwarding.adoc | 2 +- .../installing-operators.adoc | 6 +- .../otel/otel-forwarding-telemetry-data.adoc | 2 + observability/overview/index.adoc | 2 - .../cluster-tasks.adoc | 12 +- rosa_architecture/about-hcp.adoc | 2 +- rosa_architecture/index.adoc | 2 - .../learn_more_about_openshift.adoc | 2 - .../optimization/optimizing-storage.adoc | 5 - security/audit-log-view.adoc | 2 - .../security-monitoring.adoc | 2 +- service_mesh/v1x/ossm-custom-resources.adoc | 4 - .../v1x/preparing-ossm-installation.adoc | 4 - service_mesh/v2x/ossm-reference-jaeger.adoc | 2 +- support/index.adoc | 19 +- virt/support/virt-troubleshooting.adoc | 4 +- welcome/about-hcp.adoc | 2 +- welcome/learn_more_about_openshift.adoc | 12 +- 31 files changed, 286 insertions(+), 320 deletions(-) diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 7bd9bc651841..baf9e5731d42 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2994,32 +2994,32 @@ Topics: Dir: logging Distros: openshift-enterprise,openshift-origin Topics: - - Name: Release notes - Dir: logging_release_notes - Topics: - - Name: Logging 5.9 - File: logging-5-9-release-notes - - Name: Logging 5.8 - File: logging-5-8-release-notes - - Name: Logging 5.7 - File: logging-5-7-release-notes +# - Name: Release notes +# Dir: logging_release_notes +# Topics: +# - Name: Logging 5.9 +# File: logging-5-9-release-notes +# - Name: Logging 5.8 +# File: logging-5-8-release-notes +# - Name: Logging 5.7 +# File: logging-5-7-release-notes - Name: Logging 6.0 Dir: 
logging-6.0 Topics: - - Name: Release notes - File: log6x-release-notes - - Name: About logging 6.0 - File: log6x-about - - Name: Upgrading to Logging 6.0 - File: log6x-upgrading-to-6 - - Name: Configuring log forwarding - File: log6x-clf - - Name: Configuring LokiStack storage - File: log6x-loki - - Name: Visualization for logging - File: log6x-visual - # - Name: API reference 6.0 - # File: log6x-api-reference + - Name: Release notes + File: log6x-release-notes + - Name: About logging 6.0 + File: log6x-about + - Name: Upgrading to Logging 6.0 + File: log6x-upgrading-to-6 + - Name: Configuring log forwarding + File: log6x-clf + - Name: Configuring LokiStack storage + File: log6x-loki + - Name: Visualization for logging + File: log6x-visual +# - Name: API reference 6.0 +# File: log6x-api-reference - Name: Logging 6.1 Dir: logging-6.1 Topics: @@ -3037,110 +3037,107 @@ Topics: File: log6x-opentelemetry-data-model-6.1 - Name: Visualization for logging File: log6x-visual-6.1 - - Name: Support - File: cluster-logging-support - - Name: Troubleshooting logging - Dir: troubleshooting - Topics: - - Name: Viewing Logging status - File: cluster-logging-cluster-status - - Name: Troubleshooting log forwarding - File: log-forwarding-troubleshooting - - Name: Troubleshooting logging alerts - File: troubleshooting-logging-alerts - - Name: Viewing the status of the Elasticsearch log store - File: cluster-logging-log-store-status - - Name: About Logging - File: cluster-logging - - Name: Installing Logging - File: cluster-logging-deploying - - Name: Updating Logging - File: cluster-logging-upgrading - Distros: openshift-enterprise,openshift-origin - - Name: Visualizing logs - Dir: log_visualization - Topics: - - Name: About log visualization - File: log-visualization - - Name: Log visualization with the web console - File: log-visualization-ocp-console - - Name: Viewing cluster dashboards - File: cluster-logging-dashboards - - Name: Log visualization with Kibana - File: logging-kibana - - Name: Configuring your Logging deployment - Dir: config - Distros: openshift-enterprise,openshift-origin - Topics: - - Name: Configuring CPU and memory limits for Logging components - File: cluster-logging-memory - - Name: Configuring systemd-journald for Logging - File: cluster-logging-systemd - - Name: Log collection and forwarding - Dir: log_collection_forwarding - Topics: - - Name: About log collection and forwarding - File: log-forwarding - - Name: Log output types - File: logging-output-types - - Name: Enabling JSON log forwarding - File: cluster-logging-enabling-json-logging - - Name: Configuring log forwarding - File: configuring-log-forwarding - - Name: Configuring the logging collector - File: cluster-logging-collector - - Name: Collecting and storing Kubernetes events - File: cluster-logging-eventrouter - - Name: Log storage - Dir: log_storage - Topics: - - Name: About log storage - File: about-log-storage - - Name: Installing log storage - File: installing-log-storage - - Name: Configuring the LokiStack log store - File: cluster-logging-loki - - Name: Configuring the Elasticsearch log store - File: logging-config-es-store - - Name: Logging alerts - Dir: logging_alerts - Topics: - - Name: Default logging alerts - File: default-logging-alerts - - Name: Custom logging alerts - File: custom-logging-alerts - - Name: Performance and reliability tuning - Dir: performance_reliability - Topics: - - Name: Flow control mechanisms - File: logging-flow-control-mechanisms - - Name: Filtering logs by content - File: 
logging-content-filtering - - Name: Filtering logs by metadata - File: logging-input-spec-filtering - - Name: Scheduling resources - Dir: scheduling_resources - Topics: - - Name: Using node selectors to move logging resources - File: logging-node-selectors - - Name: Using tolerations to control logging pod placement - File: logging-taints-tolerations - - Name: Uninstalling Logging - File: cluster-logging-uninstall - - Name: Exported fields - File: cluster-logging-exported-fields - Distros: openshift-enterprise,openshift-origin - - Name: API reference - Dir: api_reference - Topics: +# - Name: Support +# File: cluster-logging-support +# - Name: Troubleshooting logging +# Dir: troubleshooting +# Topics: +# - Name: Viewing Logging status +# File: cluster-logging-cluster-status +# - Name: Troubleshooting log forwarding +# File: log-forwarding-troubleshooting +# - Name: Troubleshooting logging alerts +# File: troubleshooting-logging-alerts +# File: cluster-logging-log-store-status +# - Name: About Logging +# File: cluster-logging +# - Name: Installing Logging +# File: cluster-logging-deploying +# - Name: Updating Logging +# File: cluster-logging-upgrading +# Distros: openshift-enterprise,openshift-origin +# - Name: Visualizing logs +# Topics: +# - Name: About log visualization +# File: log-visualization +# - Name: Log visualization with the web console +# File: log-visualization-ocp-console +# - Name: Viewing cluster dashboards +# File: cluster-logging-dashboards +# - Name: Log visualization with Kibana +# File: logging-kibana +# - Name: Configuring your Logging deployment +# Dir: config +# Distros: openshift-enterprise,openshift-origin +# Topics: +# - Name: Configuring CPU and memory limits for Logging components +# File: cluster-logging-memory +# - Name: Configuring systemd-journald for Logging +# File: cluster-logging-systemd +# - Name: Log collection and forwarding +# Dir: log_collection_forwarding +# Topics: +# - Name: About log collection and forwarding +# File: log-forwarding +# - Name: Log output types +# File: logging-output-types +# - Name: Enabling JSON log forwarding +# File: cluster-logging-enabling-json-logging +# - Name: Configuring log forwarding +# File: configuring-log-forwarding +# - Name: Configuring the logging collector +# File: cluster-logging-collector +# - Name: Collecting and storing Kubernetes events +# File: cluster-logging-eventrouter +# - Name: Log storage +# Dir: log_storage +# Topics: +# - Name: About log storage +# File: about-log-storage +# File: installing-log-storage +# - Name: Configuring the LokiStack log store +# File: cluster-logging-loki +# - Name: Configuring the Elasticsearch log store +# File: logging-config-es-store +# - Name: Logging alerts +# Dir: logging_alerts +# Topics: +# - Name: Default logging alerts +# File: default-logging-alerts +# - Name: Custom logging alerts +# File: custom-logging-alerts +# - Name: Performance and reliability tuning +# Dir: performance_reliability +# Topics: +# - Name: Flow control mechanisms +# File: logging-flow-control-mechanisms +# - Name: Filtering logs by content +# File: logging-content-filtering +# - Name: Filtering logs by metadata +# File: logging-input-spec-filtering +# - Name: Scheduling resources +# Dir: scheduling_resources +# Topics: +# - Name: Using node selectors to move logging resources +# File: logging-node-selectors +# - Name: Using tolerations to control logging pod placement +# File: logging-taints-tolerations +# - Name: Uninstalling Logging +# File: cluster-logging-uninstall +# - Name: Exported 
fields +# File: cluster-logging-exported-fields +# Distros: openshift-enterprise,openshift-origin +# - Name: API reference +# Dir: api_reference +# Topics: # - Name: 5.8 Logging API reference # File: logging-5-8-reference # - Name: 5.7 Logging API reference # File: logging-5-7-reference - - Name: 5.6 Logging API reference - File: logging-5-6-reference - - Name: Glossary - File: logging-common-terms +# - Name: 5.6 Logging API reference +# File: logging-5-6-reference +# - Name: Glossary +# File: logging-common-terms - Name: Distributed tracing Dir: distr_tracing Distros: openshift-enterprise diff --git a/_topic_maps/_topic_map_osd.yml b/_topic_maps/_topic_map_osd.yml index 35eda57f372d..9c0a7da5e60e 100644 --- a/_topic_maps/_topic_map_osd.yml +++ b/_topic_maps/_topic_map_osd.yml @@ -1231,120 +1231,118 @@ Topics: File: troubleshooting-monitoring-issues - Name: Config map reference for the Cluster Monitoring Operator File: config-map-reference-for-the-cluster-monitoring-operator -- Name: Logging - Dir: logging - Distros: openshift-dedicated - Topics: - - Name: Release notes - Dir: logging_release_notes - Topics: - - Name: Logging 5.9 - File: logging-5-9-release-notes - - Name: Logging 5.8 - File: logging-5-8-release-notes - - Name: Logging 5.7 - File: logging-5-7-release-notes - - Name: Support - File: cluster-logging-support - - Name: Troubleshooting logging - Dir: troubleshooting - Topics: - - Name: Viewing Logging status - File: cluster-logging-cluster-status - - Name: Troubleshooting log forwarding - File: log-forwarding-troubleshooting - - Name: Troubleshooting logging alerts - File: troubleshooting-logging-alerts - - Name: Viewing the status of the Elasticsearch log store - File: cluster-logging-log-store-status - - Name: About Logging - File: cluster-logging - - Name: Installing Logging - File: cluster-logging-deploying - - Name: Updating Logging - File: cluster-logging-upgrading - - Name: Visualizing logs - Dir: log_visualization - Topics: - - Name: About log visualization - File: log-visualization - - Name: Log visualization with the web console - File: log-visualization-ocp-console - - Name: Viewing cluster dashboards - File: cluster-logging-dashboards - - Name: Log visualization with Kibana - File: logging-kibana - - Name: Configuring your Logging deployment - Dir: config - Topics: - - Name: Configuring CPU and memory limits for Logging components - File: cluster-logging-memory - #- Name: Configuring systemd-journald and Fluentd - # File: cluster-logging-systemd - - Name: Log collection and forwarding - Dir: log_collection_forwarding - Topics: - - Name: About log collection and forwarding - File: log-forwarding - - Name: Log output types - File: logging-output-types - - Name: Enabling JSON log forwarding - File: cluster-logging-enabling-json-logging - - Name: Configuring log forwarding - File: configuring-log-forwarding - - Name: Configuring the logging collector - File: cluster-logging-collector - - Name: Collecting and storing Kubernetes events - File: cluster-logging-eventrouter - - Name: Log storage - Dir: log_storage - Topics: - - Name: About log storage - File: about-log-storage - - Name: Installing log storage - File: installing-log-storage - - Name: Configuring the LokiStack log store - File: cluster-logging-loki - - Name: Configuring the Elasticsearch log store - File: logging-config-es-store - - Name: Logging alerts - Dir: logging_alerts - Topics: - - Name: Default logging alerts - File: default-logging-alerts - - Name: Custom logging alerts - File: custom-logging-alerts - 
- Name: Performance and reliability tuning - Dir: performance_reliability - Topics: - - Name: Flow control mechanisms - File: logging-flow-control-mechanisms - - Name: Filtering logs by content - File: logging-content-filtering - - Name: Filtering logs by metadata - File: logging-input-spec-filtering - - Name: Scheduling resources - Dir: scheduling_resources - Topics: - - Name: Using node selectors to move logging resources - File: logging-node-selectors - - Name: Using tolerations to control logging pod placement - File: logging-taints-tolerations - - Name: Uninstalling Logging - File: cluster-logging-uninstall - - Name: Exported fields - File: cluster-logging-exported-fields - - Name: API reference - Dir: api_reference - Topics: +#- Name: Logging +# Dir: logging +# Distros: openshift-dedicated +# Topics: +# - Name: Release notes +# Dir: logging_release_notes +# Topics: +# - Name: Logging 5.9 +# File: logging-5-9-release-notes +# - Name: Logging 5.8 +# File: logging-5-8-release-notes +# - Name: Logging 5.7 +# File: logging-5-7-release-notes +# - Name: Support +# File: cluster-logging-support +# - Name: Troubleshooting logging +# Dir: troubleshooting +# Topics: +# - Name: Viewing Logging status +# File: cluster-logging-cluster-status +# - Name: Troubleshooting log forwarding +# File: log-forwarding-troubleshooting +# - Name: Troubleshooting logging alerts +# File: troubleshooting-logging-alerts +# - Name: Viewing the status of the Elasticsearch log store +# File: cluster-logging-log-store-status +# - Name: About Logging +# File: cluster-logging +# - Name: Installing Logging +# File: cluster-logging-deploying +# - Name: Updating Logging +# File: cluster-logging-upgrading +# - Name: Visualizing logs +# Dir: log_visualization +# Topics: +# - Name: About log visualization +# File: log-visualization +# - Name: Log visualization with the web console +# File: log-visualization-ocp-console +# - Name: Viewing cluster dashboards +# File: cluster-logging-dashboards +# - Name: Log visualization with Kibana +# File: logging-kibana +# - Name: Configuring your Logging deployment +# Dir: config +# Topics: +# - Name: Configuring CPU and memory limits for Logging components +# File: cluster-logging-memory +# #- Name: Configuring systemd-journald and Fluentd +# # File: cluster-logging-systemd +# Dir: log_collection_forwarding +# Topics: +# - Name: About log collection and forwarding +# File: log-forwarding +# - Name: Log output types +# - Name: Enabling JSON log forwarding +# File: cluster-logging-enabling-json-logging +# - Name: Configuring log forwarding +# File: configuring-log-forwarding +# - Name: Configuring the logging collector +# File: cluster-logging-collector +# - Name: Collecting and storing Kubernetes events +# File: cluster-logging-eventrouter +# - Name: Log storage +# Dir: log_storage +# Topics: +# - Name: About log storage +# File: about-log-storage +# - Name: Installing log storage +# File: installing-log-storage +# - Name: Configuring the LokiStack log store +# File: cluster-logging-loki +# - Name: Configuring the Elasticsearch log store +# File: logging-config-es-store +# - Name: Logging alerts +# Dir: logging_alerts +# Topics: +# - Name: Default logging alerts +# File: default-logging-alerts +# - Name: Custom logging alerts +# File: custom-logging-alerts +# - Name: Performance and reliability tuning +# Dir: performance_reliability +# Topics: +# - Name: Flow control mechanisms +# File: logging-flow-control-mechanisms +# - Name: Filtering logs by content +# File: logging-content-filtering 
+# - Name: Filtering logs by metadata +# File: logging-input-spec-filtering +# - Name: Scheduling resources +# Dir: scheduling_resources +# Topics: +# - Name: Using node selectors to move logging resources +# File: logging-node-selectors +# - Name: Using tolerations to control logging pod placement +# File: logging-taints-tolerations +# - Name: Uninstalling Logging +# File: cluster-logging-uninstall +# - Name: Exported fields +# File: cluster-logging-exported-fields +# - Name: API reference +# Dir: api_reference +# Topics: # - Name: 5.8 Logging API reference # File: logging-5-8-reference # - Name: 5.7 Logging API reference # File: logging-5-7-reference - - Name: 5.6 Logging API reference - File: logging-5-6-reference - - Name: Glossary - File: logging-common-terms +# - Name: 5.6 Logging API reference +# File: logging-5-6-reference +# - Name: Glossary +# File: logging-common-terms --- Name: Service Mesh Dir: service_mesh diff --git a/adding_service_cluster/adding-service.adoc b/adding_service_cluster/adding-service.adoc index d99a497236f5..363bf2106eda 100644 --- a/adding_service_cluster/adding-service.adoc +++ b/adding_service_cluster/adding-service.adoc @@ -21,9 +21,3 @@ include::modules/adding-service-existing.adoc[leveloffset=+1] include::modules/access-service.adoc[leveloffset=+1] include::modules/deleting-service.adoc[leveloffset=+1] //include::modules/deleting-service-cli.adoc[leveloffset=+1] - -ifdef::openshift-rosa[] -[role="_additional-resources"] -== Additional resources -* xref:../observability/logging/log_collection_forwarding/configuring-log-forwarding.adoc#cluster-logging-collector-log-forward-cloudwatch_configuring-log-forwarding[Forwarding logs to Amazon CloudWatch] -endif::[] diff --git a/adding_service_cluster/rosa-available-services.adoc b/adding_service_cluster/rosa-available-services.adoc index 657744c6115b..b8e1c94471dc 100644 --- a/adding_service_cluster/rosa-available-services.adoc +++ b/adding_service_cluster/rosa-available-services.adoc @@ -16,7 +16,6 @@ include::modules/aws-cloudwatch.adoc[leveloffset=+1] .Additional resources * link:https://aws.amazon.com/cloudwatch/[Amazon CloudWatch product information] -* xref:../observability/logging/log_collection_forwarding/configuring-log-forwarding.adoc#cluster-logging-collector-log-forward-cloudwatch_configuring-log-forwarding[Forwarding logs to Amazon CloudWatch] include::modules/osd-rhoam.adoc[leveloffset=+1] diff --git a/architecture/index.adoc b/architecture/index.adoc index 3e750ad6a9f5..eb419440bbc9 100644 --- a/architecture/index.adoc +++ b/architecture/index.adoc @@ -25,7 +25,6 @@ endif::openshift-dedicated,openshift-rosa[] * For more information on storage, see xref:../storage/index.adoc#index[{product-title} storage]. * For more information on authentication, see xref:../authentication/index.adoc#index[{product-title} authentication]. * For more information on Operator Lifecycle Manager (OLM), see xref:../operators/understanding/olm/olm-understanding-olm.adoc#olm-understanding-olm[OLM]. -* For more information on logging, see xref:../observability/logging/cluster-logging.adoc#cluster-logging[About Logging]. // Topic not included in the OSD/ROSA docs ifndef::openshift-dedicated,openshift-rosa[] * For more information on over-the-air (OTA) updates, see xref:../updating/understanding_updates/intro-to-updates.adoc#understanding-openshift-updates[Introduction to OpenShift updates]. 
diff --git a/cicd/gitops/viewing-argo-cd-logs.adoc b/cicd/gitops/viewing-argo-cd-logs.adoc index 556d0f576a45..9ddbd18dd0e1 100644 --- a/cicd/gitops/viewing-argo-cd-logs.adoc +++ b/cicd/gitops/viewing-argo-cd-logs.adoc @@ -10,7 +10,9 @@ You can view the Argo CD logs with {logging}. {logging-uc} visualizes the logs o include::modules/gitops-storing-and-retrieving-argo-cd-logs.adoc[leveloffset=+1] +//// [role="_additional-resources"] [id="additional-resources_viewing-argo-cd-logs"] == Additional resources -* xref:../../observability/logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[Installing {logging} using the web console] +* xref :../../observability/logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[Installing {logging} using the web console] +//// diff --git a/cicd/pipelines/viewing-pipeline-logs-using-the-openshift-logging-operator.adoc b/cicd/pipelines/viewing-pipeline-logs-using-the-openshift-logging-operator.adoc index 20e9a3daf1a3..a2dd46308370 100644 --- a/cicd/pipelines/viewing-pipeline-logs-using-the-openshift-logging-operator.adoc +++ b/cicd/pipelines/viewing-pipeline-logs-using-the-openshift-logging-operator.adoc @@ -23,9 +23,11 @@ Before trying to view pipeline logs in a Kibana dashboard, ensure the following: include::modules/op-viewing-pipeline-logs-in-kibana.adoc[leveloffset=+1] +//// [role="_additional-resources"] [id="additional-resources_viewing-pipeline-logs-using-the-openshift-logging-operator"] == Additional resources -* xref:../../observability/logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Installing OpenShift Logging] -* xref:../../observability/logging/log_visualization/log-visualization.adoc#log-visualization-resource-logs_log-visualization[Viewing logs for a resource] -* xref:../../observability/logging/log_visualization/logging-kibana.adoc#logging-kibana[Log visualization with Kibana] +* xref :../../observability/logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Installing OpenShift Logging] +* xref :../../observability/logging/log_visualization/log-visualization.adoc#log-visualization-resource-logs_log-visualization[Viewing logs for a resource] +* xref :../../observability/logging/log_visualization/logging-kibana.adoc#logging-kibana[Log visualization with Kibana] +//// diff --git a/cloud_experts_tutorials/cloud-experts-deploying-application/cloud-experts-deploying-application-logging.adoc b/cloud_experts_tutorials/cloud-experts-deploying-application/cloud-experts-deploying-application-logging.adoc index c1c80a1e4705..1b3cf4b7efbc 100644 --- a/cloud_experts_tutorials/cloud-experts-deploying-application/cloud-experts-deploying-application-logging.adoc +++ b/cloud_experts_tutorials/cloud-experts-deploying-application/cloud-experts-deploying-application-logging.adoc @@ -16,15 +16,15 @@ There are various methods to view your logs in {product-rosa} (ROSA). Use the fo ==== ROSA is not preconfigured with a logging solution. ==== - +//// .Prerequisites -* Set up a logging solution before viewing your logs. See the xref:../../observability/logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Installing logging] documentation for more information. +* Set up a logging solution before viewing your logs. See the xref :../../observability/logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Installing logging] documentation for more information. 
[role="_additional-resources"] .Additional resources -* xref:../../observability/logging/cluster-logging.adoc#cluster-logging[Cluster logging] - +* xref :../../observability/logging/cluster-logging.adoc#cluster-logging[Cluster logging] +//// == Forwarding logs to CloudWatch Install the logging add-on service to forward the logs to AWS CloudWatch. @@ -126,7 +126,7 @@ pod/ostoy-frontend-679cb85695-5cn7x <1> pod/ostoy-microservice-86b4c6f559-p594d ---- <1> The pod name is `ostoy-frontend-679cb85695-5cn7x`. -+ ++ . Run the following command to see both the `stdout` and `stderr` messages: + [source,terminal] @@ -174,4 +174,4 @@ image:cloud-experts-deploying-application-logging-messages.png[] [role="_additional-resources"] .Additional resources -* link:https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html[What is Amazon CloudWatch] \ No newline at end of file +* link:https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html[What is Amazon CloudWatch] diff --git a/machine_management/creating-infrastructure-machinesets.adoc b/machine_management/creating-infrastructure-machinesets.adoc index 33e2299eaa1a..1b52d86a2a42 100644 --- a/machine_management/creating-infrastructure-machinesets.adoc +++ b/machine_management/creating-infrastructure-machinesets.adoc @@ -130,5 +130,3 @@ include::modules/nodes-cluster-resource-override-move-infra.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources * xref:../observability/monitoring/configuring-core-platform-monitoring/configuring-performance-and-scalability.adoc#moving-monitoring-components-to-different-nodes-cpm_configuring-performance-and-scalability[Moving monitoring components to different nodes] -* xref:../observability/logging/scheduling_resources/logging-node-selectors.adoc#logging-node-selectors[Using node selectors to move logging resources] -* xref:../observability/logging/scheduling_resources/logging-taints-tolerations.adoc#cluster-logging-logstore-tolerations_logging-taints-tolerations[Using taints and tolerations to control logging pod placement] diff --git a/migrating_from_ocp_3_to_4/index.adoc b/migrating_from_ocp_3_to_4/index.adoc index bdf42af961ac..7537fd22ef17 100644 --- a/migrating_from_ocp_3_to_4/index.adoc +++ b/migrating_from_ocp_3_to_4/index.adoc @@ -14,7 +14,7 @@ Before migrating from {product-title} 3 to 4, you can check xref:../migrating_fr * xref:../architecture/architecture.adoc#architecture[Architecture] * xref:../architecture/architecture-installation.adoc#architecture-installation[Installation and update] -* xref:../storage/index.adoc#index[Storage], xref:../networking/understanding-networking.adoc#understanding-networking[network], xref:../observability/logging/cluster-logging.adoc#cluster-logging[logging], xref:../security/index.adoc#index[security], and xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[monitoring considerations] +* xref:../storage/index.adoc#index[Storage], xref:../networking/understanding-networking.adoc#understanding-networking[network], xref:../security/index.adoc#index[security], and xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[monitoring considerations] [id="mtc-3-to-4-overview-planning-network-considerations-mtc"] == Planning network considerations diff --git a/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc b/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc index b5dd873f20d7..89f20c08386d 100644 --- 
a/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc +++ b/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc @@ -193,22 +193,16 @@ Review the following logging changes to consider when transitioning from {produc {product-title} 4 provides a simple deployment mechanism for OpenShift Logging, by using a Cluster Logging custom resource. -For more information, see xref:../observability/logging/cluster-logging-deploying.adoc#cluster-logging-deploying_cluster-logging-deploying[Installing OpenShift Logging]. - [discrete] ==== Aggregated logging data You cannot transition your aggregate logging data from {product-title} 3.11 into your new {product-title} 4 cluster. -For more information, see xref:../observability/logging/cluster-logging.adoc#cluster-logging-about_cluster-logging[About OpenShift Logging]. - [discrete] ==== Unsupported logging configurations Some logging configurations that were available in {product-title} 3.11 are no longer supported in {product-title} {product-version}. -For more information on the explicitly unsupported logging cases, see the xref:../observability/logging/cluster-logging-support.adoc#cluster-logging-support[logging support documentation]. - [id="migration-preparing-security"] === Security considerations diff --git a/observability/distr_tracing/distr_tracing_jaeger/distr-tracing-jaeger-updating.adoc b/observability/distr_tracing/distr_tracing_jaeger/distr-tracing-jaeger-updating.adoc index 0e24202080f0..238dae151290 100644 --- a/observability/distr_tracing/distr_tracing_jaeger/distr-tracing-jaeger-updating.adoc +++ b/observability/distr_tracing/distr_tracing_jaeger/distr-tracing-jaeger-updating.adoc @@ -16,7 +16,7 @@ During an update, the {DTProductName} Operators upgrade the managed {DTShortName [IMPORTANT] ==== -If you have not already updated your {es-op} as described in xref:../../../observability/logging/cluster-logging-upgrading.adoc#cluster-logging-upgrading[Updating OpenShift Logging], complete that update before updating your {JaegerName} Operator. +If you have not already updated your {es-op}, complete that update before updating your {JaegerName} Operator. ==== [role="_additional-resources"] @@ -25,4 +25,3 @@ If you have not already updated your {es-op} as described in xref:../../../obser * xref:../../../operators/understanding/olm/olm-understanding-olm.adoc#olm-understanding-olm[Operator Lifecycle Manager concepts and resources] * xref:../../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators] -* xref:../../../observability/logging/cluster-logging-upgrading.adoc#cluster-logging-upgrading[Updating OpenShift Logging] diff --git a/observability/logging/cluster-logging.adoc b/observability/logging/cluster-logging.adoc index ea791ff07bd8..9587e169c6db 100644 --- a/observability/logging/cluster-logging.adoc +++ b/observability/logging/cluster-logging.adoc @@ -11,40 +11,40 @@ As a cluster administrator, you can deploy {logging} on an {product-title} clust include::snippets/logging-kibana-dep-snip.adoc[] -{product-title} cluster administrators can deploy {logging} by using Operators. For information, see xref:../../observability/logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Installing {logging}]. +{product-title} cluster administrators can deploy {logging} by using Operators. For information, see xref :../../observability/logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Installing {logging}]. 
The Operators are responsible for deploying, upgrading, and maintaining {logging}. After the Operators are installed, you can create a `ClusterLogging` custom resource (CR) to schedule {logging} pods and other resources necessary to support {logging}. You can also create a `ClusterLogForwarder` CR to specify which logs are collected, how they are transformed, and where they are forwarded to. [NOTE] ==== -Because the internal {product-title} Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in xref:../../observability/logging/log_storage/logging-config-es-store.adoc#cluster-logging-elasticsearch-audit_logging-config-es-store[Forward audit logs to the log store]. +Because the internal {product-title} Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in xref :../../observability/logging/log_storage/logging-config-es-store.adoc#cluster-logging-elasticsearch-audit_logging-config-es-store[Forward audit logs to the log store]. ==== include::modules/logging-architecture-overview.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../observability/logging/log_visualization/log-visualization-ocp-console.adoc#log-visualization-ocp-console[Log visualization with the web console] +* xref :../../observability/logging/log_visualization/log-visualization-ocp-console.adoc#log-visualization-ocp-console[Log visualization with the web console] include::modules/cluster-logging-about.adoc[leveloffset=+1] ifdef::openshift-rosa,openshift-dedicated[] include::modules/cluster-logging-cloudwatch.adoc[leveloffset=+1] -For information, see xref:../../observability/logging/log_collection_forwarding/log-forwarding.adoc#about-log-collection_log-forwarding[About log collection and forwarding]. +For information, see xref :../../observability/logging/log_collection_forwarding/log-forwarding.adoc#about-log-collection_log-forwarding[About log collection and forwarding]. endif::[] include::modules/cluster-logging-json-logging-about.adoc[leveloffset=+2] include::modules/cluster-logging-collecting-storing-kubernetes-events.adoc[leveloffset=+2] -For information, see xref:../../observability/logging/log_collection_forwarding/cluster-logging-eventrouter.adoc#cluster-logging-eventrouter[About collecting and storing Kubernetes events]. +For information, see xref :../../observability/logging/log_collection_forwarding/cluster-logging-eventrouter.adoc#cluster-logging-eventrouter[About collecting and storing Kubernetes events]. include::modules/cluster-logging-troubleshoot-logging.adoc[leveloffset=+2] include::modules/cluster-logging-export-fields.adoc[leveloffset=+2] -For information, see xref:../../observability/logging/cluster-logging-exported-fields.adoc#cluster-logging-exported-fields[About exporting fields]. +For information, see xref :../../observability/logging/cluster-logging-exported-fields.adoc#cluster-logging-exported-fields[About exporting fields]. 
include::modules/cluster-logging-eventrouter-about.adoc[leveloffset=+2]
-For information, see xref:../../observability/logging/log_collection_forwarding/cluster-logging-eventrouter.adoc#cluster-logging-eventrouter[Collecting and storing Kubernetes events].
+For information, see xref :../../observability/logging/log_collection_forwarding/cluster-logging-eventrouter.adoc#cluster-logging-eventrouter[Collecting and storing Kubernetes events].
diff --git a/observability/logging/log_collection_forwarding/log-forwarding.adoc b/observability/logging/log_collection_forwarding/log-forwarding.adoc
index 277bdc6d8418..fa9bfdee0c9a 100644
--- a/observability/logging/log_collection_forwarding/log-forwarding.adoc
+++ b/observability/logging/log_collection_forwarding/log-forwarding.adoc
@@ -35,7 +35,7 @@ To use the multi log forwarder feature, you must create a service account and cl
 
 [IMPORTANT]
 ====
-In order to support multi log forwarding in additional namespaces other than the `openshift-logging` namespace, you must xref:../../../observability/logging/cluster-logging-upgrading.adoc#logging-operator-upgrading-all-ns_cluster-logging-upgrading[update the {clo} to watch all namespaces]. This functionality is supported by default in new {clo} version 5.8 installations.
+In order to support multi log forwarding in namespaces other than the `openshift-logging` namespace, you must update the {clo} to watch all namespaces. This functionality is supported by default in new {clo} version 5.8 installations.
 ====
 
 include::modules/log-collection-rbac-permissions.adoc[leveloffset=+3]
diff --git a/observability/network_observability/installing-operators.adoc b/observability/network_observability/installing-operators.adoc
index 1260807ea099..c0e3b2c8ae76 100644
--- a/observability/network_observability/installing-operators.adoc
+++ b/observability/network_observability/installing-operators.adoc
@@ -11,7 +11,7 @@ The {loki-op} integrates a gateway that implements multi-tenancy and authenticat
 
 [NOTE]
 ====
-The {loki-op} can also be used for xref:../../observability/logging/log_storage/cluster-logging-loki.adoc#cluster-logging-loki[configuring the LokiStack log store]. The Network Observability Operator requires a dedicated LokiStack separate from the {logging}.
+The {loki-op} can also be used for configuring the LokiStack log store. The Network Observability Operator requires a dedicated LokiStack separate from the {logging}.
==== include::modules/network-observability-without-loki.adoc[leveloffset=+1] @@ -24,9 +24,9 @@ include::modules/network-observability-loki-install.adoc[leveloffset=+1] include::modules/network-observability-loki-secret.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources -* xref:../../observability/network_observability/flowcollector-api.adoc#network-observability-flowcollector-api-specifications_network_observability[Flow Collector API Reference] +* xref:../../observability/network_observability/flowcollector-api.adoc#network-observability-flowcollector-api-specifications_network_observability[Flow Collector API Reference] * xref:../../observability/network_observability/configuring-operator.adoc#network-observability-flowcollector-view_network_observability[Flow Collector sample resource] -* xref:../../observability/logging/log_storage/installing-log-storage.adoc#logging-loki-storage_installing-log-storage[Loki object storage] +//* xref :../../observability/logging/log_storage/installing-log-storage.adoc#logging-loki-storage_installing-log-storage[Loki object storage] include::modules/network-observability-lokistack-create.adoc[leveloffset=+2] include::modules/logging-creating-new-group-cluster-admin-user-role.adoc[leveloffset=+2] diff --git a/observability/otel/otel-forwarding-telemetry-data.adoc b/observability/otel/otel-forwarding-telemetry-data.adoc index b8312364ba47..7dfa6219fa13 100644 --- a/observability/otel/otel-forwarding-telemetry-data.adoc +++ b/observability/otel/otel-forwarding-telemetry-data.adoc @@ -11,7 +11,9 @@ You can use the OpenTelemetry Collector to forward your telemetry data. include::modules/otel-forwarding-traces.adoc[leveloffset=+1] include::modules/otel-forwarding-logs-to-tempostack.adoc[leveloffset=+1] +//// [role="_additional-resources"] .Additional resources * xref:../logging/log_storage/installing-log-storage.adoc#installing-log-storage[Installing LokiStack log storage] +//// diff --git a/observability/overview/index.adoc b/observability/overview/index.adoc index 2dd7d0c145b6..68ccfea00e98 100644 --- a/observability/overview/index.adoc +++ b/observability/overview/index.adoc @@ -48,8 +48,6 @@ endif::openshift-dedicated,openshift-rosa[] == Logging Collect, visualize, forward, and store log data to troubleshoot issues, identify performance bottlenecks, and detect security threats. In logging 5.7 and later versions, users can configure the LokiStack deployment to produce customized alerts and recorded metrics. -For more information, see xref:../../observability/logging/cluster-logging.adoc#cluster-logging[About Logging]. 
- ifdef::openshift-enterprise,openshift-origin[] [id="distr-tracing-architecture-index_{context}"] == Distributed tracing diff --git a/post_installation_configuration/cluster-tasks.adoc b/post_installation_configuration/cluster-tasks.adoc index 8703f8bd959c..fd7b89e410c3 100644 --- a/post_installation_configuration/cluster-tasks.adoc +++ b/post_installation_configuration/cluster-tasks.adoc @@ -296,13 +296,11 @@ include::modules/infrastructure-moving-registry.adoc[leveloffset=+2] include::modules/infrastructure-moving-monitoring.adoc[leveloffset=+2] -[id="custer-tasks-moving-logging-resources"] -=== Moving {logging} resources - -For information about moving {logging} resources, see: - -* xref:../observability/logging/scheduling_resources/logging-node-selectors.adoc#logging-node-selectors[Using node selectors to move logging resources] -* xref:../observability/logging/scheduling_resources/logging-taints-tolerations.adoc#cluster-logging-logstore-tolerations_logging-taints-tolerations[Using taints and tolerations to control logging pod placement] +include::modules/cluster-autoscaler-about.adoc[leveloffset=+1] +include::modules/cluster-autoscaler-cr.adoc[leveloffset=+2] +:FeatureName: cluster autoscaler +:FeatureResourceName: ClusterAutoscaler +include::modules/deploying-resource.adoc[leveloffset=+2] [id="custer-tasks-applying-autoscaling"] == Applying autoscaling to your cluster diff --git a/rosa_architecture/about-hcp.adoc b/rosa_architecture/about-hcp.adoc index 3e035a66f4de..d75e17fd4dc6 100644 --- a/rosa_architecture/about-hcp.adoc +++ b/rosa_architecture/about-hcp.adoc @@ -260,4 +260,4 @@ For additional information about ROSA installation, see link:https://www.redhat. * link:https://www.openshift.com/products/amazon-openshift[ROSA product page] * link:https://aws.amazon.com/rosa/[AWS product page] * link:https://access.redhat.com/products/red-hat-openshift-service-aws[Red{nbsp}Hat Customer Portal] -* link:https://learn.openshift.com[Learn about OpenShift] \ No newline at end of file +* link:https://learn.openshift.com[Learn about OpenShift] diff --git a/rosa_architecture/index.adoc b/rosa_architecture/index.adoc index 854603e3d3f5..d46fda7eb52e 100644 --- a/rosa_architecture/index.adoc +++ b/rosa_architecture/index.adoc @@ -272,8 +272,6 @@ Use the Cluster Version Operator (CVO) to upgrade your {product-title} cluster. === Observe a cluster -- **xref:../observability/logging/cluster-logging.adoc#cluster-logging[OpenShift Logging]**: Learn about logging and configure different logging components, such as log storage, log collectors, and the logging web console plugin. - - **xref:../observability/distr_tracing/distr_tracing_arch/distr-tracing-architecture.adoc#distr-tracing-architecture[Red Hat OpenShift distributed tracing platform]**: Store and visualize large volumes of requests passing through distributed systems, across the whole stack of microservices, and under heavy loads. Use the distributed tracing platform for monitoring distributed transactions, gathering insights into your instrumented services, network profiling, performance and latency optimization, root cause analysis, and troubleshooting the interaction between components in modern cloud-native microservices-based applications. 
// xreffing to the installation page until further notice because OTEL content is currently planned for internal restructuring across pages that is likely to result in renamed page files diff --git a/rosa_architecture/learn_more_about_openshift.adoc b/rosa_architecture/learn_more_about_openshift.adoc index c85124de5828..ab9a86b20262 100644 --- a/rosa_architecture/learn_more_about_openshift.adoc +++ b/rosa_architecture/learn_more_about_openshift.adoc @@ -47,7 +47,6 @@ Use the following sections to find content to help you learn about and use {prod | xref:../architecture/architecture.adoc#architecture[Architecture] | xref:../machine_configuration/index.adoc#machine-config-overview[Machine configuration overview] -| xref:../observability/logging/cluster-logging.adoc#cluster-logging[Logging] | link:https://access.redhat.com/articles/4217411[OpenShift Knowledgebase articles] | link:https://learn.openshift.com/?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[OpenShift Interactive Learning Portal] @@ -88,7 +87,6 @@ Use the following sections to find content to help you learn about and use {prod | link:https://access.redhat.com/articles/4217411[OpenShift Knowledgebase articles] | -| xref:../observability/logging/cluster-logging.adoc#cluster-logging[Logging] | link:https://access.redhat.com/support/policy/updates/openshift#ocp4_phases[{product-title} Life Cycle] | diff --git a/scalability_and_performance/optimization/optimizing-storage.adoc b/scalability_and_performance/optimization/optimizing-storage.adoc index 2a17cc92913c..3fe7be661e09 100644 --- a/scalability_and_performance/optimization/optimizing-storage.adoc +++ b/scalability_and_performance/optimization/optimizing-storage.adoc @@ -31,8 +31,3 @@ include::modules/recommended-configurable-storage-technology.adoc[leveloffset=+1 include::modules/data-storage-management.adoc[leveloffset=+1] include::modules/optimizing-storage-azure.adoc[leveloffset=+1] - -[role="_additional-resources"] -[id="admission-plug-ins-additional-resources"] -== Additional resources -* xref:../../observability/logging/log_storage/logging-config-es-store.adoc#logging-config-es-store[Configuring the Elasticsearch log store] diff --git a/security/audit-log-view.adoc b/security/audit-log-view.adoc index db79c924e6db..5d79176fe799 100644 --- a/security/audit-log-view.adoc +++ b/security/audit-log-view.adoc @@ -33,5 +33,3 @@ ifndef::openshift-rosa,openshift-dedicated[] * link:https://github.com/kubernetes/apiserver/blob/master/pkg/apis/audit/v1/types.go#L72[API audit log event structure] * xref:../security/audit-log-policy-config.adoc#audit-log-policy-config[Configuring the audit log policy] endif::[] -* xref:../observability/logging/log_collection_forwarding/log-forwarding.adoc#log-forwarding[About log forwarding] -endif::openshift-rosa-hcp[] diff --git a/security/container_security/security-monitoring.adoc b/security/container_security/security-monitoring.adoc index d0e32f473c2b..019864a17d3b 100644 --- a/security/container_security/security-monitoring.adoc +++ b/security/container_security/security-monitoring.adoc @@ -25,5 +25,5 @@ include::modules/security-monitoring-audit-logging.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources * xref:../../nodes/clusters/nodes-containers-events.adoc#nodes-containers-events[List of system events] -* xref:../../observability/logging/cluster-logging.adoc#cluster-logging[Understanding OpenShift Logging] +//* xref :../../observability/logging/cluster-logging.adoc#cluster-logging[Understanding OpenShift Logging] * 
xref:../../security/audit-log-view.adoc#audit-log-view[Viewing audit logs] diff --git a/service_mesh/v1x/ossm-custom-resources.adoc b/service_mesh/v1x/ossm-custom-resources.adoc index 0f6fb51a9a13..4dc446b1c1f2 100644 --- a/service_mesh/v1x/ossm-custom-resources.adoc +++ b/service_mesh/v1x/ossm-custom-resources.adoc @@ -41,8 +41,4 @@ include::modules/ossm-jaeger-config-elasticsearch-v1x.adoc[leveloffset=+2] include::modules/ossm-jaeger-config-es-cleaner-v1x.adoc[leveloffset=+2] -ifdef::openshift-enterprise[] -For more information about configuring Elasticsearch with {product-title}, see xref:../../observability/logging/log_storage/logging-config-es-store.adoc#logging-config-es-store[Configuring the Elasticsearch log store]. -endif::[] - include::modules/ossm-cr-threescale.adoc[leveloffset=+1] diff --git a/service_mesh/v1x/preparing-ossm-installation.adoc b/service_mesh/v1x/preparing-ossm-installation.adoc index 522edcb112c6..4db22173d887 100644 --- a/service_mesh/v1x/preparing-ossm-installation.adoc +++ b/service_mesh/v1x/preparing-ossm-installation.adoc @@ -40,10 +40,6 @@ include::modules/ossm-supported-configurations-v1x.adoc[leveloffset=+1] include::modules/ossm-installation-activities.adoc[leveloffset=+1] ifdef::openshift-enterprise[] -[WARNING] -==== -See xref:../../observability/logging/log_storage/logging-config-es-store.adoc#logging-config-es-store[Configuring the Elasticsearch log store] for details on configuring the default Jaeger parameters for Elasticsearch in a production environment. -==== == Next steps diff --git a/service_mesh/v2x/ossm-reference-jaeger.adoc b/service_mesh/v2x/ossm-reference-jaeger.adoc index 2a0866f8bf45..7091951a4966 100644 --- a/service_mesh/v2x/ossm-reference-jaeger.adoc +++ b/service_mesh/v2x/ossm-reference-jaeger.adoc @@ -44,7 +44,7 @@ include::modules/distr-tracing-config-sampling.adoc[leveloffset=+2] include::modules/distr-tracing-config-storage.adoc[leveloffset=+2] ifdef::openshift-enterprise[] -For more information about configuring Elasticsearch with {product-title}, see xref:../../observability/logging/log_storage/logging-config-es-store.adoc#logging-config-es-store[Configuring the Elasticsearch log store] or xref:../../observability/distr_tracing/distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc#distr-tracing-jaeger-configuring[Configuring and deploying distributed tracing]. +For more information about configuring Elasticsearch with {product-title}, see xref:../../observability/distr_tracing/distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc#distr-tracing-jaeger-configuring[Configuring and deploying {DTShortName}]. //TO DO For information about connecting to an external Elasticsearch instance, see xref:../../observability/distr_tracing/distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc#jaeger-config-external-es_jaeger-deploying[Connecting to an existing Elasticsearch instance]. endif::[] diff --git a/support/index.adoc b/support/index.adoc index 1b0a662cfd8c..84647eff5433 100644 --- a/support/index.adoc +++ b/support/index.adoc @@ -122,14 +122,15 @@ endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] ** Investigate why user-defined metrics are unavailable. ** Determine why Prometheus is consuming a lot of disk space. -// TODO: Include this in ROSA HCP when the Logging book is migrated. 
-ifndef::openshift-rosa-hcp[] -* xref:../observability/logging/cluster-logging.adoc#cluster-logging[Logging issues]: A cluster administrator can follow the procedures in the "Support" and "Troubleshooting logging" sections to resolve logging issues: - -** xref:../observability/logging/troubleshooting/cluster-logging-cluster-status.adoc#cluster-logging-clo-status_cluster-logging-cluster-status[Viewing the status of the {clo}] -** xref:../observability/logging/troubleshooting/cluster-logging-cluster-status.adoc#cluster-logging-clo-status-comp_cluster-logging-cluster-status[Viewing the status of {logging} components] -** xref:../observability/logging/troubleshooting/troubleshooting-logging-alerts.adoc#troubleshooting-logging-alerts[Troubleshooting logging alerts] -** xref:../observability/logging/cluster-logging-support.adoc#cluster-logging-must-gather-collecting_cluster-logging-support[Collecting information about your logging environment by using the `oc adm must-gather` command] -endif::openshift-rosa-hcp[] +//// +---- +* xref :../observability/logging/cluster-logging.adoc#cluster-logging[Logging issues]: A cluster administrator can follow the procedures in the "Support" and "Troubleshooting logging" sections to resolve logging issues: + +** xref :../observability/logging/troubleshooting/cluster-logging-cluster-status.adoc#cluster-logging-clo-status_cluster-logging-cluster-status[Viewing the status of the {clo}] +** xref :../observability/logging/troubleshooting/cluster-logging-cluster-status.adoc#cluster-logging-clo-status-comp_cluster-logging-cluster-status[Viewing the status of {logging} components] +** xref :../observability/logging/troubleshooting/troubleshooting-logging-alerts.adoc#troubleshooting-logging-alerts[Troubleshooting logging alerts] +** xref :../observability/logging/cluster-logging-support.adoc#cluster-logging-must-gather-collecting_cluster-logging-support[Collecting information about your logging environment by using the `oc adm must-gather` command] +---- +//// * xref:../support/troubleshooting/diagnosing-oc-issues.adoc#diagnosing-oc-issues[{oc-first} issues]: Investigate {oc-first} issues by increasing the log level. 
diff --git a/virt/support/virt-troubleshooting.adoc b/virt/support/virt-troubleshooting.adoc index 78c93c7fe675..655907bad628 100644 --- a/virt/support/virt-troubleshooting.adoc +++ b/virt/support/virt-troubleshooting.adoc @@ -83,8 +83,8 @@ include::modules/virt-loki-log-queries.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources for LokiStack and LogQL -* xref:../../observability/logging/log_storage/about-log-storage.adoc#about-log-storage[About log storage] -* xref:../../observability/logging/log_storage/installing-log-storage.adoc#cluster-logging-loki-deploy_installing-log-storage[Deploying the LokiStack] +//* xref :../../observability/logging/log_storage/about-log-storage.adoc#about-log-storage[About log storage] +//* xref :../../observability/logging/log_storage/installing-log-storage.adoc#cluster-logging-loki-deploy_installing-log-storage[Deploying the LokiStack] * link:https://grafana.com/docs/loki/latest/logql/log_queries/[LogQL log queries] in the Grafana documentation include::modules/virt-common-error-messages.adoc[leveloffset=+1] diff --git a/welcome/about-hcp.adoc b/welcome/about-hcp.adoc index 6a951953e583..4ac306b8f219 100644 --- a/welcome/about-hcp.adoc +++ b/welcome/about-hcp.adoc @@ -53,7 +53,7 @@ Use the following sections to find content to help you learn about and use {hcp- | xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[{hcp-title} architecture] | xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Installing {hcp-title}] -| xref:../observability/logging/cluster-logging.adoc#cluster-logging[Logging] +// | xref :../observability/logging/cluster-logging.adoc#cluster-logging[Logging] | xref:../support/index.adoc#support-overview[Getting Support] | link:https://learn.openshift.com/?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[OpenShift Interactive Learning Portal] diff --git a/welcome/learn_more_about_openshift.adoc b/welcome/learn_more_about_openshift.adoc index ac6edd65c4db..ad96ef77f8be 100644 --- a/welcome/learn_more_about_openshift.adoc +++ b/welcome/learn_more_about_openshift.adoc @@ -211,8 +211,8 @@ a|* xref:../operators/understanding/crds/crd-extending-api-with-crds.adoc#crd-cr |=== |Learn about {product-title} |Optional additional resources -| xref:../observability/logging/cluster-logging.adoc#cluster-logging[OpenShift Logging] -| xref:../observability/cluster_observability_operator/ui_plugins/logging-ui-plugin.adoc#logging-ui-plugin[Logging UI pluigin] +// | //xref :../observability/logging/cluster-logging.adoc#cluster-logging[OpenShift Logging] +// | | xref:../observability/distr_tracing/distr-tracing-rn.adoc#distr-tracing-rn[Release notes for the {DTProductName}] | xref:../observability/distr_tracing/distr_tracing_arch/distr-tracing-architecture.adoc#distr-tracing-architecture[{jaegername}] @@ -256,7 +256,11 @@ a| * xref:../storage/understanding-persistent-storage.adoc#understanding-persist | xref:../operators/understanding/olm-what-operators-are.adoc#olm-what-operators-are[Operators] | xref:../operators/operator-reference.adoc#cluster-operator-reference[Cluster Operator reference] -| xref:../observability/logging/cluster-logging.adoc#cluster-logging[Logging] +| +// | xref :../observability/logging/cluster-logging.adoc#cluster-logging[Logging] +| link:https://access.redhat.com/support/policy/updates/openshift#ocp4_phases[{product-title} Life Cycle] + +| | link:https://www.openshift.com/blog/tag/logging[Blogs about logging] |=== @@ -335,4 +339,4 @@ a| * 
xref:../hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc#hcp-de | xref:../hosted_control_planes/hcp-troubleshooting.adoc#hcp-troubleshooting[Troubleshooting {hcp}] a| xref:../hosted_control_planes/hcp-troubleshooting.adoc#hosted-control-planes-troubleshooting_hcp-troubleshooting[Gathering information to troubleshoot {hcp}] -|=== \ No newline at end of file +|=== From abda36ab04451d2df74e85a50639bf8fbe800871 Mon Sep 17 00:00:00 2001 From: srir Date: Sun, 16 Feb 2025 14:41:53 +0530 Subject: [PATCH 301/669] TELCODOCS#2019: Added a diagram to illustrate holdover in a T-GM clock --- images/holdover_in_t_gm.png | Bin 0 -> 60517 bytes ...w-ptp-holdover-in-a-grandmaster-clock.adoc | 21 ++++++++++++++++-- 2 files changed, 19 insertions(+), 2 deletions(-) create mode 100644 images/holdover_in_t_gm.png diff --git a/images/holdover_in_t_gm.png b/images/holdover_in_t_gm.png new file mode 100644 index 0000000000000000000000000000000000000000..b34c703d8004d49f6eccd4a02e898b67ab9d9e81 GIT binary patch literal 60517 zcmeFZcRpYM1IF9S(t)e8ihITV8g+f^)f9!}V zg|ZAE7yHsI$6v|&xQ+1F(n~V(>NNP{LSu3Z|K4PGRNGF?+RX0CnTw{BvlpzbOa(3( zUokI7a=d4UAYW5e{1^2Vdni{e1-LqGieUFgX zp8aC`_Oq)fvdgQf)3WG~Q7G&b`6Gwa9q#r1bkx3M+A=p#b!L6;k&}!mO25!TeWL+^oK*e!7=l4fd+R=nc?of_aH#2J&IQ_e=E#ciAzErg& zrOCnfySe4BG`Z$p?yJ{0vXeTh3X&q#i&Vr4##?+9jvafRV5#puWAEkVwR7jrw}pR( zyQ-gjKmSCV|7ak?{54`*e=`||iXOhX_KxrSxhdr<-yWX{b*Iix-79i)hYlUG_?i~+ z<;$0v=eEYxw$$&oY_6hsu9KSndtFscZK$`WCOPCub~d*>o#c4zcE!lh4u`(_9Q(1K zSDFU(_D3mpb#?J-Wi)7%r)j2MymYCptIP3?Eq-t(A|iRP?S0e58+-}z@sCs!6mngD z=3VM&ZENEe$+a9C-Lz@byNZgyurRH{av>oh)9mjTx`$>nh0cC_WKo+kS{p`v3X|p? zj?4p}4$~#u_k4An%^TFzx3>P1>OZG5dwa_P)!UneRUgXU9)7)Q_qWW6=IeC5Q&Ib^ zcvEXr1dYocIgNCuXr@Q%J~fZKfAESzu-jzUjvUK6rN+j_!GVF3bs4Aa`y29HvS)h? z=2TNP)5*=Tv9V1|P4%$Njo;jL`h{I}%#}Zl1>KDWFL9=nNMXwq+#M|~?U$=dR%D2c z4V1Ixy89=qC3Saxen>`hb}+2yk!4**aC~j5){m-lHIIUJ>E}O+5IUP8dwcVXo{X1+ z;s$~e5~qLt`ejl7G&w~*d7HJhwXj9avFaq{y0-UWnUQnvY&khOaihD0FEnVoxw&nY zbQxE4H=7@kX6EP1n7KdVgm$eJ>*Q5SeKzz>?bds4*z~8nI|X-TQ5h*b`g`Tdl}_yu zXBjtaNC?w+l0SFu+^8`|Mpaiga{q;f@Us~~oN^Ygt}SyMy)S03fSZ|Y@!dV#l@ybr zaEG_9a_alpnB=on(U_vge~v3F*CC2W+QJOFLzE>?8XCsVd>i;(Za>&6-xZ+YF5x`- zw5UrI4;5@T&~kL@?45W_*130~J4gGZnjX8%PCH6W^=40Eu!c;;F1FrERJxx+j`&@6 z8}sn?Qm=B7a*Sr~g`dMetc%(r*ybNJbW0k=Q|;Z`x1TZUt-pVVblm;EYU8fXypCu! 
zGqXesjBH1wb>W2Uz`($0-v&O-)}3>_8Y%I*ITqh@uk_U?G&eW*w2$}YHq~SrNI6e* z^!I1Hu$pcQznl0cD}<<4 zavtqHS>QCnxN&1rWt0d)t}f#~0|UcT+=Rr?2cgj!K|#UE$)3z1)-l9rLy;CbzGshxkdn$rBEj~Z+autuNOx#^K-V%9Wzg4}-+#~`oxYXz~0ezr|$u$b!tvyu_#4Ky1sC6b=AnwJAlaNX&U;Prt`3TtMdud_jm8a#Kd4#JQ(Sz z`w`&m=yIw`3l@gJBSr@yaOSfBI5%~q6`%boprFNmvPpo|4mr5e|sA1!9z{jI1m?^4p< zy?a}`x^~LHd-v`Ooz%3U)NE1A#`(*O(rHo_PXM$8W@cu(PUY0;XE%L}ii#r9^4);> zq6F)~%TL`UtQ*CxzCS4~oT=3^IQRXDsaN!ja^H{KT=)B-q49MF?x(|c8)!N?y^xWS zv6~*Zof=rnD#oa@cK#BEOyriew6qk%_q-}BSV&G-3e~5t{7iTovLp5M%ga9A-g*6^ zoO1q8rQG)*AKbp^*O$AmtG(j%1UQ?ZsUvo?Ah*`(=S!J7o1E$%s!Oji+xCGBd{|Mfj_t-6qeuPW9zl{CJkGclvbS z6rX0=BT3guo?W|?kjw;vr;)KWPn_`6cOB8tNYnmYSmZJ>*!}Zm?z0O&dEMRTwneZ| zCu_NHI#0jE(tDq}s*~=7hH@eQ*QRLnJS#gUFaH^NRU`4lrrKn+9aT{dzPg&4ch|B> z@<&x}my(jYvr{ecTRTCk6iU0wzYy(|$NxR@;^AU*8;|Q+zW#{P5gFmAiV5g~r40 z)Wb?>rDF}vs-o{FO^3w$Zrp8A=C`SAFZG%9yf=6hTTPDC8DNn3OifKs+kYr6JsKrq z{nd5s=UFc5pccnyFQREkL}F?`H9MbtWey)cT11WA6M(B1-h>&rC1cf2zUXk7n;ENr zeCBYLDHY^a(B)E=4o?+p!B9i;*(!#jW;WX>NW*TIE`}S?|?JU#-*Wu4{UUyeg zAI8jP)gGHAfMFJm`FTqBL{BznxwyEHGUBs=k4a(SDlE1_tP~r&_hGxWEG$y;^Yb$t zm1Jda;GWeHrFR5QjYf9pq-zT3K0U@M>&Nrp>HNtlju0KLgrp(*+W zFZy5f?8?gprnAN{_cFSSy& zGJ&?x>N$;mKUA`jY9{vm4)T0klU0_{ndR z&~R^d6|bnMHlWh(;5-WD==CrU@~F})L&C!!xy?*=z1m?>T zA3F8r)Y&1!Y>Qb>Q4yu~?XgRBUf3n0P9`fx?f-7je)1mrxslIuY{^G`nI>C9lzm*? zD8=d2&fkN9T|hu_W^zRB*s)u?3<}SS`R~$u9?Nm~1`qB>Gqu)k3#HheTA@6yA6HV+ z1iZ~TSG~RZWCjB(Yx-6RCmsQTN;^hMvGBrCOWs#hsHms}UhZ!=C-zRbw4~$_8j!T5 zODQXkQt5=X=YiKIf8As|QT{wG{Zy{Smw5R^V+v*P(8Bi?sMNYIFQ<1gi&&xfPmcbu zRxGDbrs>c*&xe54`wt(~fk5K?H!(7D0e@L;RaQ{=^4e>awX+LS=%-IAxNb$|Z8nib z#V7yn*h_vT8Y^u1wX?)~z1RL|S2Kwo+q5G(5(AR_FWo?_OeN z*=CjTQqk2~FE3jOn}3ob&vB;K>NXz`U^*U$ z^~AX4pS$lVNFKY~+S+RV=EfS7<2M@Rni+b@IMbtxZSO2T#a_jVOh8BRp!gEg=vfM- zq44!g&OEyjTijnTrtj$B!1!J-Y}e^8fS-Lo&^zz`8;(X$9W5=bI3~j8p8y;f9>wG?MaTg_hXHDX^ANt+S|Q`O@)#OfT5gel zFTcd;m)Azdb?f3D$^|evq#>Fre*8G4EHM&F9#At~_j>?JC>eCXPtql?64%0z)Z#je z1C~&D#1|&ZLtI>ZpKZ$#EceV}V{R@k^XB5KSfpxLhZ+S=_N09S7$E?d1QK^PxMGOx-rn=Lf90vsKEBX_*}B+6*BP0awYiL!P%f|) z-JM@FE10%!&5&brt+DfCA=np)gDk4n>r`HFOZd&3H}CT6JZP_sit@jxh$UG4;HYnF zdzqpcrTNyvWncZYEoUXmp(f$@M&y9OcYK-VXzT`uhxY+++N_y&nn$$?9(zsSbry^@ zs;R05V}V(S`R5lDV8r-IFPr!f!SAubgUpSJO#a;LRQ(J4RLoCJzT+iM>mT-4u4u(b zx-zn`9G_T0;a<9sLMaBWBX<%{L~N5M-S5}*Q%T|B%=TShjv{*>m2^iVw1JT^{@XWg zbTh~A@M-NDJg|u3b)5>Ti(mCM7Mz-y8q-oz^1pWN+CKA7?DDGDm#-OabpAs)4&7|C z+xyRdi)C}4KDj?CU8UXL*7kGL?5GIgJi5Om_yRQ1n(9%c2Ocaepv#lP=SI3~kCSc- zMFmfnmiiIpC*{WKO%ISlJJF5S7rMS|X=x!{d|y+M0cbW6VJ-Br8gVkqxVgEL5jB%z zMRVzx#*MF%7E>5^Ad%pYjS4yISpUyEr~p`n=D%KhUA**iHo&&9u#nu%%4 zlTVeR#gj=H6cSQH-x|uLk)ked`YtpmARvJB<_~4pkQNK+_A5~JQ`f1>ge=pRVGnQ`x`nb zbrX|#ekR{W#|~DBzr{;R#fp-WlCE99&Mj|UFme~{rLVmkMf%r2qdr(E(!|5Xb!5_b zLu~Tdzsr^_O9m$)cBTK`HcrmlCKL+e+JzycpBnp9=dox>=>7ZYXc}~|7KQGXR#qy4 zX$U`}S?w6AFd)#=;sX1|2sH*0!gm)R(~NMyDQu>L5(x4&)7WjBX7wzopn( zO7ngyD)2Z1;ElNp1MrSRJ|f-t+JW}dzNHUrf0xMzGws$-Tfajo+63y4>Eq)QjFIiqzKl|+2~KXWkdUzR=y^iq=3b@V*y_l=rapuyLgOfl9&K=F zNQkwIH!dy?OtEf(Q_kq0CTntAObfTg!Kal`1 z3JnQ)h=w>EK>qpBg}W*q?fbETfnl3G?aGxWk&92ry~scrMT~V-D@)a4_D+;9qBJa~ zjxCSbinUu*utrUCbVsI#sw_)xtxM$m`3VpxJ>&FCh2Yn(U;i)1%XU=7XjMf^+zAeT zAh*@=2wJN`5AOB_$R+p|Wgbw%f6IXfMbmA&bpdRU zJ5Q9qI0w2(rK!-BTRvomvL*%;jjBhKG!;LT7Mo@71bjJoxmN{DPma@wg`=}`l)wFZ zL{%I3dDp4YbI7%czmbL(Q}*N4L^V8n19~(6HO_eh-vrk-D@`OJrYb> zyLT2jkz^(pEewJYVBgn+fATr0qr7bS@<@MmcI1>`&^ILBJ3GzEG>RMury;EyZn_VTDD$?li#x7o_ z`k@@%#Kc5bL*6AY;z3=a6ptQiP50Oqc<)}?g`Y3FP(EHDtqNc1*VEx=s?MUl{qp5Y zG~=A|ks{X7!66}va*HW?OBSAeFG`@6zT?rGbZnZ)`#Bfe6wncVwM&z!BcXrp%aQ*M z{ml!vNB{l&{@cugGC!VKse!%5pM(^R)JT5~F0AIV0Jh!ogh60fzn)y{pE5&x(!t@m 
z$@qC!LVvRsA>iH*DQ{0in#!Mh#4rV`zP?cfU$hj7prZUhzijHgD#| zh=Z87oSU5?vpBnA;YqH|&5jjq7PETvU!w6(RR;@m})>4jB)p&BVR$G%5XbRCnOn4Ct4{v4}f7 zMwtN|5yV49A*CcNz}klR-{k?RasgWvpgXKr7LP;uiS%y`Bt!~sNIp`+`5Exd3Ax1{ zs<)|m?!xPY<6(>_;;3gSNT5D`exZ1jF735^JUqvspeRsLzGC^Je}BV%@Zc$|=ISb` zxik>)ous6o!YYQ1zwWT!!)N^aT2<2mqB~WYSbHZHb(Bn3;T!;z7f>2zTP` z?VZTENILllH51kSh1OyLS~EEQ@#8%bBQYsv)a z%d=z0%|F}}=7s6jI;N;t`+j#(8j@xTNO#@mHgOFN4JXQZ2grIA5fQPfe=5Ha?g|^I z12ujai-y$w0_U+5TnbqC4q@r5v+i9(Tkzs?AMeb7_)*{@P>={h4=K+ZH@b(aq{2|f z?}8M@4e+cBp(?DQULo+Q%`X`N{e10RyLRb5HoUrc@!}J5 z7sUwB`H`E+b#AcrEh0{sOy0s0OlSnlo$fyUn=LpVQI!HhI+620dULa(Bf=L#AKH-X zWSn@Z(mR#ZK6ch)#n@>>i zBw&6W9vtGy88OY|=B}<}+^4)M@*p+(DM?fkDYoVYjuzUD+DbsPtRk) z?H})_B3)GnQHswk{6qtXJvDr_7=@t#BnMF~6yH#G#5{y=eanrI6n_y7vRPI2)DJ1#wref!k0ynrK# zPD=nq64m8UoNQ`h@(9UY%(lf7X*b7Z;v(9}aDOkp254RuO@#$$wD%d8ttNxjMU4U* zU9{7$UcEv)tp!|wg2j!jSe+FZKm|Ggwu<JKqy1U zgNdw(h83|9TSBFSni&B;NfQO>=MhW`B-_mcuGdz~qBgPMm5H)@4v7d=SK3={KFh_H zWPd-uSXdUgV#-HAY*qaH zdD@prAQ;J{>ljaLY%Dv_4crj&An4~Wv#>mloB}1ZZQC|3dBR5nDC6j*$nXHHL0n|m z@vJ2IdM`w6$z+xL?87>J=SsI#HktYbC*))}oWSqY#>=lqM`VsJX#VsIhaZ_F5_7J# z0fuB=aIH16*-CPI+3gUnqq4HRf`Xq(F$|&>!%$INj>I-mLkROnfR&!!_(0jF{|fU) zw<#;kN9>Un>tiHDxD}uS5n%~nS>+fm!NIYyy5qYGh$?~l zHUA*gB;WkkZF)eiV6@&6ZWV&JCWpI>#TZu8U@nmMjf{;qtY2?TF2GE^z?S+f#K?1ZsfBsC`)vjYU^aRPHi-vBOMn&?)zo76mT;r!<_JxK_7%8>^3UOC zriZJQp}2-)#)NH}r3=8dByzqwM5c7O^K)>#<52s~0|&m7{Gh!s$gfchYYLpNBqSsd z_LtBkcn%wMnwl8FP~=uvcZhzomWpXSY?G6dNqzvkSPSzIfiM{v8HvWPUXkq1dQdZ% zanq*cl9D5&fN)SBeh)7K4ny;$&~-{3bXB@`mJC3Y2T<`_H^y ziZ1rQlKU2Fa`!MI29J?HO$Go3WDv0G6!C+OKYube8n}Q2Qap3!%=|M@gI*;~DMm6q zTEe*-GjH+p1r}JOsTjk(l5jy|2>Qujp1?Yju6iw%RFlribD4PZ;>w?l{~d5$3;8=T zqCst7w<{8l2^!dDdg_xi`oiI`W~;b1EFpx&0Y{VoVI#tS*$yG25}JP?&U4#`WVrtG zBNQ>hJw``Ilkr84Aom7iRv5?zE_q_rc=F^4&^G1w!e#PcK2RZ<7i>6qjnoxe9g6L^ zM6-Yl-@?E^xRrR_hC>a=B9@bkZ(ciD4{OB;6l;X1HpwH>~WGB zkUk%PV1*5fDNsC8oyYn~I3o{n6Ko4R1kZnx{C|FM1PejmZNl_)zQal6^g6V~n8cke z{rwjV96|Evaqrj>g{51z0PO4t{{n6PzxWrbsGV>Ed4t6Czp36ska}0N8vn(>U||2l z!0>pH&h<9S4tX>l>PAKnFe!#q_R&K|*z_HvE7TnVv?Qnh9!8Oh@b^?r!#cD0{v-i~ za|9@fG(fTo_qOlM*Om+>GqX?q!gz8wwQ1wlv_jx|Bnm4RIU z|6u>;=w+9iix-V_Rm*9nYM}GUM@~eaJs7^{%r!vzr;un$%gf2#l=@IH+0}be?)vZw zLe2gR?+eKUXez?}m$m---veGci@IoSek{NN5Z>Lp-6wqS?mEp&M@Ltk<>7K0cZpDg zlBNN#1i{+psbSRsgK8oWZr``8!w8M9V8@`dScx&&DE`=_bD_82kz7c04=DFe1-(PR+%UQ2|xIN z?_cb=niJK5oe50mQjC#2t)rtut`kHBI5x*Y#uBt8{{^sQ1YSA7)`nofq4)nw#H+L} zJj_#^h`7w2=%zctO)^}@Vhu8{;R%SlbKty!67fGjS_9O9MnU->3g-Oxz2N%F?D(6X{U_mL#(1d- zpJy*Mk~7|$Ws}<=EHf``ERP=XXr$}b+VRI0W38kil=&czf{_&hIPh6L4?8`2XzA$c z%Guoc(IW_)exW|ra4Decm~@;C@+=4kZh5%#7J^@ei#iN?)o|P~Zr!>|K61ZRXwP=+KyQG_2wp(hFNrU8x_x!bwS1R9x_x6 z*~|;-MUBcQ)@B*YO<4td2*J5VR8zuM!i7OvEiZp+EN}fU>_wQ_p^mCB-&Im##d(S_ zKwP+Rp&rSx<@ayG6UdY{wGimli|jYtotoK=h7UZ|-3JdJYQRi{Sl&i5K1`w&pbh!? 
zKS1*2QKyb?Z<#-9D%$lF882F73Dq2nS|m`PN(>VmvLxYyCA>laNF239u*>`0y7dXh z3ut^P%T_UULR zVg0~*bW&<&I0QCNbCM-!=d(m?=y7Pq5ZOZE3WLV*2w(^CT}ER5@Qu|$vpjFI=d--W zVrs-IWc?Ql1J`FVIC?}dvxviA!-?_nPOLA|(UZwDoWCLIosd+xL92HGLqeMM1}*{2 zu@GZ21d?johYBSB<&jyZJ8~#vE7`j^cY=+8%E@{|$Q}j0!Tq z^wg(e+$`mmZ)9YoUg+XTI5AXIJk8otv;@dGceyGfgyg8OzSwHvc_Jo;IxIo6U$nQc z1+A9}J1(%CXgn0qR8Tr3z2!R&kv#FE-E7_+?-3Oi8JPkQLoBj5+Y7s{;29!!5t}LM zWHPQgk(@#L+}`-_g=Plf&=9Ky7qwQ~(l) zuz&u18c}(ynmxW4od8vQr-dlo zKM6B-SKt&RIV&gF(&p(;21>(TT4_dKAmF#PwIes^?kXuOYhAwl42xX*JTY5ssf2$4 z;mAu$l9=OQt5rY<9&&u#0cqb7ViAM^q*wy{!4RlFidb}(Y4he3vjXS&oOQ7NJ zBp#cn9F^IB0OpIj@7@6(wrnZ!b{vJtm{&rgAW-scx;$u1hy-mN9cNA4AY4wCV5ED$ zWx>S3^Xveu+p$Q{7U;G}KSapq_ix@j{MNogArecj6PCO$$j5}+K~X1ykVs$Rs^Z*6 zr{hV`cC)J>C8IiP-H(FcN62`n@Q$NW110NA^(niwA4?y+(*M+&j%9yZ@v>&{QV(!U z+l#sD?Z{mu!z#x}UbNk*vr0f2N)#xEbb}(vrjGXx4i1mp=Vl{CZ6ENeKMMTAjML<& zOkc9M*HBVYigpEj(wbQ=P&DN8hQ&IcxG}oP%mIl&Ht9C1LZ~T(0nRH%?vqC}666%L z8qy?;zS+ZQ-igNhu!fRBe8EJP)XdaRN2{2A`X%xh<@vc~7KuRV$-TY3siTm8rcws0 zbnZzwYeKCq`el8VZmaksbm)qx;uo?^`BqEKjz5Hap92lUQc|zpBF6%pt5b{jjtpL} zIrOYCXQ%F8Y5ZL3yAdMh0a76g@pLl^ii$bbW4+(ih{|;F;uC0=n&vS*W-ikM}R-dym52gFrVSR#&TZe_ z(1N$Nw{NGT3$Sd|hyPLK3sLx?M?A$MJ}aiM708D4t=pBO&x_GE2O^Orz$zpk46A@s zwRP)qN;Pf6;}57TunB}f{;^|8og*~p4D7lJCr*UcT4+GMu6<&9d}bumJtxLE`#ae< zz{|(?zAzK|1}LQ`*4E#j9L0*M6Q3M~sFu$*&}+aZcPDrY?UGZiev$|3$D z{Pn#k1O5^C>VrFUcB1zp)fnQ_!-|Ru@!X2Ck`iKX2#!bk?aV6QT8+g9w&0D)#(2vX%YJ*um$5e*Y~0;FR%RhC^QOH_*@h4Pt{KN{)=Wt zqcA>U;vG2leRZ{3Fll8m@e1x~VhynF5Q7lW?1`%dbrC^!x$Vv*;_>H*9{f z`%^HMf|h{SpfG>t9t<_EIOFJnS-b9*NyCtV*AZ_V;VJks7f^68AXU(P zZpEzFeJ0+W_Ll2RgL^HX#f1jF3zon9*{@=C+@b4fLYHqaTaG?ib0*Z4?Q{A5%{siE z3Q+=g6sD%e$}=w&vfi(DiczwaV518tzc=OD^rz=VfNY1}nDfN&N{E-G5X>0wE98w& zOuYL3UFY*Tdez^^eQeqVPQo8Pe4uAwP;Gh1#^G7B?@r|62IML4*Pqas1%-v}5S&z2 zR9piK&E6@8;m)8BEo_u~@Li#|YpDar>~;P6_3@D=_qp3|ZqjiI2&@5-#IP~L@AmCq z9UWS~SRLrtBv%_4G%*~0J~z?9=50Ui(=-NC_oqZfmenl#SHlHfhP(%Muk`5*VXERK#s;@f_Zz0p}$O6Zq<&e9rlgCc!pl&i_k=NiCmoulv2Ob_7 z_yc<4T575#Ig6}{KZ}SzQ!lqCz056{X|(J)iQ_jPjC?dnpEyxb*NcU~MCUo~qQfsk zlkCl_0pRVW4V`S<{=|vZsBQvTqfMSG*M9=D5aIeET7pF;jvfj24L0s&L4-iPl|Fs? 
zw97}hxCdruoGBjC?8I8ZEn8L@F1Q3vD7*}4LWD6sdnvLenyzPO3vCs5@W573%F&)z z-cQeeX8@uL7q#7neO&RQ*dpSIhi2#Dp+RvA`Nn*tq@scb(%oFK?G_6A{{2k*_wR36 zBMSNRvB_Ho8ZQ~|76Tw^8@QBh*F8a9Z3zlO>u?l-ctTMTd533L^kR?PToJg8Rx@rW zMyGfS2p|q{#v`;+)Mq1CLNXvniRI^8M?W(LK_XtdVukMyfMHtf)EuznYU_8Uaxg%o_Qc`>%&{$bt(tDDwn~@(j?rxa^+V{;A78G2^wgXFy<`-O7EL>YZ4px5q#*A#w5R*6B z|I@doriORK8Ba!*(DLQW#ZBZZ!roL?t^xRx!vn%S?2Ww!ykgukG(pKk>&zuQku9YiJArs*~UYky?t@B`z~_x9!Ic7N21_^7PpG`@v|w!NTElVcYiL_70V& zlF{Z*nSB!9Gu5F`9+8!8L25zlEV_69{%05+8D4(f&c$U2Pr2%EkeKrMa@)6WXHbil zavApks)UiUsZc<2cHp<1=)r@=Fl<5)V@F@YoweweId4BIKxw(Ux;ht=4HM1MaIhrL zkyVmPOTA?^-Z8MS@C{536^RsT>+1T*d6}7+86kKcKYMnguuw8BEe%y`xyjkHzE!*3 zno?3y!ok~*7l8K<;bHRmYRpB(F(@}Jjo*7s=eXIKGgl#iz4`FLZ|t=E(9jS9p~5o8 z(`6+t{})JP@lngK8=FFd5%rP0eTy5Cl-aaTo#KRL&Ls`hZd>HuJn-HhVx)>{&OLcT zo(6fB!Rr9%Anz6>4OjgSQ}B#;m;b-Q$k1LS=1VNZ4cbJ)){sP)#C@+NAmm}F;kfEpgVWC z!wL`DcYC5yJR~Q#B4De;Yph#`raQx^Ci`A^)9-|~e+_V?1zpj%d`BUGs?5$%8jd3= zyc;Ns4B;p51Q9bng4j5If^Mb()I3s!*R^YlDIO6Kn{b<+fNkQAgGbtL(D7_uC+apm z&hT)MAZNQjji)bge)c(qJwW^d0s}y*@>}X^YDylRd}!18i9@f@WfLVZDhm9)zUYYH zKVASm?{znxG!!^%wd9ybdR@J`NI&0U5v3C(QZgs&(Wpy|?ri!+Znv?g;I0^NHX6B4 zU%xUbugkOVVY-Yf8g1q`UQPUR;rb0E{fCE#I=bH(fyLuZef?%MA+JCj4s?Fz6nC4} z7MBp^=U^|XeGK@zW6@E--@FM8w>FFtJ)~vzcpJ}ZK z+IzN3y%bB>@lQzL?yJGU4DdT#$L4E8w1UhaX-Zy{0dypZE);FE;AE!te-UcVTZso?1&jQm00zMA{<|v`6IR-qhA^fa!>4&6-kp(z$*7 zA9Q^I#kHOdJE`R$RA7x>zI?;qe=S091?~r%8a1{l4J6kj+X34L*F675BP_!De18y>;tWpd*{c?V#}Ric_mU(=?@& z1hxTX(a_O_jktE~;Dl|V-8>FlUrAZnyDe|;;o{hbrporn+o(eKY5ud4V+snqT7E5b z8Il1(fq^RBex6vp@rj8FX~$3LDJiW*g)p6Ug^yyJu&~B2K}iH@2J@4~_uW1N6E8{( zXmzBnYy4_%_5jfiOZpP5f-EjRel|&$z^RV&G8TLX4<1BbUj;h}(ar3{Jn6Tw z^6fwsOU%glSW{d39TEut=h5a@OA?cl-=cWzv~@w?wpijj9R~+dW4|oCyJ0{OCI9l@ z=0i*@Ebo!eD~Ti0wi5e2OJ?p%x<;NuZu$Yb;w=g)zwIpi9$Xg*cX`Aa&w*1QFL?hZ zT=VmO@aK%Vwp+@XjrqB`yU+Vq(3ZW?L+w8%KnMsCnxtObJ05OP1OV7kY^3ZZR?4ZpXSzgD}xQ zXn(jr!diHyySw|t*ROZeb#r`JC0)Y7&)n_m?)J$%=rqiOCj+!(7_&NiwjA!rSG|3G z5ieed^?ZJ~!g-`y!&~&(vu6x-Bb}eGfB*3#+rVw&s6~G%cD3Kb^P?5|AnI@IJ~I+l zGlU(N75Im7nO{(SbWqSMeBM%?g@h(AaTY1Y@3-h>RQ%*Vc!v$QlnY`La&jMPZ3{Ez zRkdo9q>JcpQGeD07mrv>L0e`nw>=MfP4!3hmoFuPN9L+rFTi_d-hFo1vuk!W?;%|Va8iJRI6++ zUpB*20m&b;YASFR!7W3fId0`XHJUDZgCAi`z9t|k%dcV1+2Bfl!&?jljt_O3L1To+ zI$>a7V77Y4jveG490%iTMo|&>;5NZY*NkH3$Sos>e_)2pZiKU;50keV8yX_RiIgjE zh22j0f5F3t4@U@MTV^pXK=zy07h3(;UuLW4-Bi8}Rz!@|R}eK3f|Fx^szm+Q&utwK zk18MZopCu2^}2=`%z|kdIGZ6L)+~jzd>O9cg^i3n4@GhpVeAIXMJ@rub)aM~@2P z1YcpF!Ca2?l;bzwi{@te<{$J!LqmSaSVYU%nRDmfZrQTMq>Op52~GCXrzT^6n!@S} z9f!=!@XgnZ)2A#}%gL4RHNI8&!}S83C?CFjQEhQnu?6vikHN$Wfsrnni`LQAeFE0= z?&ddjby^nACMI0ubW2vO4c%l6b1*qr!fE)J#T1rbL_z}hsUo+6no%@9kFjojr0SYp zUI`FB|1EG6GxKBo@18Pd79q3MJwI}H`(n34{mqydHu4XE*O2S;==kf@fZPsv5(Nn_9yR?gBg2y@;A@wi+mz{0Kx8e$4x>|E`4S3WtAkNZE zY;}K(p8I)T9*2m?Mz|P6k%3Btp`Uo689@iVhB5ujb0}G&Vq!jWn)&W?`Ym0TPyj(z z@-`mT=3#6BJ%O(FLCM;}FEmRR6Nr?|de z*C-^S#Z!hOEj|6}gRZ4t2v-0H%-*DzzD7^6bJODGlP7yoJAJU)c*I^^$*NWPEPqHFhdKhJDk&5u%UF^WkzwGscGBA z(qqcXyN3#bg%$OA7#|>{5;9gLWMU%%em-29joz`iFE>xXl*aH5+kRqy7L1siT? zWTa};G1@YAGJ<(C@7$Q>vE#>6^#m18snr%2Q_x4YR9=_V(p(J`XKra3w)wD{+WPwX z`j(&Hq>>U7ms_E>AGl@2$Y!tinb{shvDRvxy5qia$=fD^q!|x?t@7l#fv&Kj|F@-v zvc_l=OR~PB9|z~|qLMkbJ8`M0?~f}{nt%PeidSbyv(vLnN^XVG^bkOAXo=B_TTG z?%f-Z#Ur9LAYN<(rvdtZuQB!wu zB3-|Qvzgg;pxfg=^|-d}W$?CcdOOX((}!cypQA@ofkk4=rcK@uNFyHR9u;&N>pyMb z_eC~PPF8lCpkUSqW{@v{8r9Y77#OZmYYrTj96NWeghYl+E#>YH(CzK6?0fw9@xy@S zB}zTqM$^u8nd{fT^Bb7hJRMU`U)dYd4ms3esQvbbm{fQM0n%e3=cxWbNRt!e%dpMA zF%_m|o$!v!O+MLTh&>=HGe! 
zMV9hzo8?k=V?VE(H-#)MExkvGPs%O}H z62WQL-&pV-8T!<==;Y*`Kj5sdS*MVdm31B>P#R=EhPr*0U;U6s7 zkXx`tb>E~7cK_{-8T9!pym$|QvVh+6K(upV&#hTmS+Vo!EiCW~!z^rURY0KOckkXU z5q|OF#d+ZJUi6~e856@@>rwfPBO@aj>Ox236^`s%Y=g}lrqRd22iC~Kra%tl2LNn8 z04PcRu~*Ivy;wH7r(+*Peg*{`{?9sZw$(Wt<{V&s35 zk9tT9OMJb(&-`gB3agI}33)@x!pD!};y2PFR=Jei3JAzl^+Tn}LNk?J9>DhAZyjfjPo4^WGtLPfLaTh+_pO*FMkHaV=wlnBdDa@qcKYIGtp(t#$Zc~ z@XcQL3%m@6zZT75j7;!v{=IuuEg&@f(SdF0L(p*ThTkB^_~E^6uj&V&)h7SF?kRM@ z4^2%5?H}06m=9iTr5PFsW%KV(xXFv^WUGfdru}>G1qWXPAlfD*v>x#H5c;h58?d)y9q4F892*E+2wZEd ztort5lJuaU;4_|sB~p`=fz}4&z);jKx)wpKI_2l<`v!znAo3)9nX$kxpzV2u0+7R? zBwqh53KL1;hc!Bc5kdk%BmEjR1bo)6p-8xyi|gt(g0;kGtr+ULVZ2AjfffV#VRyR04g_4AoQcx4Qac)ADxir*l@ z02&}!*y$fsgJI9QxRD}XE6+y>-o0c4WhLR<$ri{h;tnePw%Cam0Lp&}g$Q7mEnLE=6lK&%kcYw$mQuG`LmMC^37BHs5& zEo>ZZCg;!lfhk-)&EXM^{boQT-F?A^ON+7WW>&B8I**3=9p0aRgIj)6>}CND4VINr~+h=Mk(7-7vg zf8(;tq%nDTbjBkZ8XF);;58c?KOtynPxGm?bUe&|_G~{06F*R753ySEhvjJ|<_HOk z_a1D6V2GYq6H^nI$xIC>8Jd%i@n*T=rY1HFL|fqv4cn#c<+guEN6X-e5Of?gfsau= zSxV-ZJU~{s(Al*VLP8#cWx?EisC~_ahUZt&UZZ!oQpN@x&NE&=Sx(wKiIwfh~B|+a9}4 z6~OV;oAxe6(t!pl3e1ch9~`z9kmTTov>j?!#9nPFVBB&LLxRw_AEBc6Zl&SKVUdt! zUx;orioPJrFbRg}OBqU~D32dM_COZ`CWMZH)qNBhGV$X*uge1bWVo?H}2#>xgTlY%<8uum1Wl5+fX^DV z3Ie~}+251i4K5W!Omj;eURiT3F>yO2LK|$U;vM-3ZGfnH05PSoL^xL5r5{Xqu$X;5 z4|CL_9Ie1hX+A+V!IT-An`?I!!V`TR{hR&}r{`bndD{rc{R!MVTXk)1X}o+0g#uD0 zc~Hy^KFVUa`~HG#F~sHxjgJxZgQowU+7Qyf-?6cqDm*24JIH6)QjR7ap$W=HQF`^| z%aamYaIOeO4l%LKSa}-$57X0cqsRM&-7DXo+j7#LyR9NqarU2Vd+Q!~{|t6hQz-nW zo~<4o9o6VpW2gVLtJt%~hHwtLl$uYUiedFwM1iCE2%7ji#>puuMkoh7{QMhAl>oqhn@q__VbzLC+w*RE$RxWgVq_xK)`UIW?^*ubdkY%4a1x$D2B&TP^ zO=;WnpAr*`4)cKT#qn3xj!7+n#=<~71kU~M{Rf@9VPOcgb? zH^`0-ePQ&fQ!4)?glIT7NlJeNWP}X;89ZlRN!&>Z&l){8DL4MQeXf8Bc3oeV!6pZ8 zq)m9mRcR0Gm~RHE&hOoSf+o2Dm7RiVQPl@s_Zt5_Fx;iag=QUhjs~7=dmi6{VMNRR z=F=y7yrNhJx#x_9#k-!T2W4n>R^p}}<6Vl_#qz4EiVmX`di)+5JRGdD%XPg>3P3I^ z&b7NPZxs{2M@YzeJn|mQdw68z;9blMsq`qpOR+w`)z=Sx%@xPiclI4SRv@eUD0v0$ zrk&3rHQMN*p$aSqwdsR%A31U)7K!KeObwVHv`mBx0x5VCHMN4k#R%R(=egzl$*6S!m z&_WhL6@Uq4s8I^g7V6d#hBFCSUW1?eg;ncu`}P_#+33o)iHL-S<*%Gzm@ksI zUc>Bn2ABZbI9`EZNENuCufG+F%q{Q-OyD3&ks?wA_M?OZMMQW5Kyade0q+2Sx+3&W zThSMPWD4iiGRNAmk~JPtZarAz37ds2gxZGZwug5)qHllm_U%gd>4BRK`x$)v{mbBk zGBh?`0!a%^?K^J&3aBL5UF-!-mV=!=_Vu5G90p6~*Z;{!s8Fxoy`#mOWmL7-BA~|s zL_Wib0q|~c#t~kH7?K_UPhdh@hkatVh`@h5)#+Ac~NYtVE%N zWF%RMD5SpE`^$Ns&+nh#ALnw;pRG-EVgTia{NbC2$6wzOgZp^#2M+ z_#A3fgO7pJ4YAPhr|^Cld?v?AOw63PWS2AUwXrf7KeI^FK#!WTOL$sO5Io1(AhyW8 zgZ(?~e5NE_yH<(1&lQp|pMpZGqVE>}(*khR=y!Be#JuWDcs%lF{BSEp!?PsljjR?v zQ*~XT**K3(1bA_7*>fyY~0Br^p*so1D=~K zHQV$j3AH-8@E9~Ikn@mxkD{`&CA}t>R4mjuz(c@AAR@87E0;?7V*1157icv!8FNOH z3e#X0mjmR0gI@ld37IfinCOr(I#1K3BCX-;a9c>1zq)BwxlOK z0t)7FpE9;q3-|cI;(&re z$f8DQc=&oYxRomf8V4mw8gR}}noa%%HM@d!pym4eRkHTRCcV$zFv)K*Y{Up1ab|kg zRdb|~YJG-xPdnH@1yrcFe@FsP|7u3Ywgo{gSFO9a>`y~j3iHK_O)vfVF&4gS%!C9` z_L!#ufej-gDz<<$o8Gu_gB<@V3SghqPxHW8u-ry2mzIqFEHzrmXqJ?I88FL-ifah> zCIOolSKZz|!>Z8#bAJ9>7Fp4OmIiUFV?*i!!@qt9UiOOGV16PZ;%ZJ#Q#Sh=z_R!J z`CFk4kbvQfpi2?;c=5J{Td{Yltf4WH`f6gHjb`|M$di|E->xCa?K(S00_99t`>b|0b;=eSv%Z(Qd7 zv+AJ>M^Gusbe($jgfkK3heF25-B$8zAd&8#N(Q0cq+yPGEIX1xU<`>>aTQ4Lm)*{# zHE7uI>VpR{{33Vle2TBt7X##pjvVtS%f_wUv_fe7%^Z^AU=O+G5 zK05nOtFMoDqFsUA-GIRDH9Zxqx`agmHmr4uIqJT? 
zzKR*?{-X7~4jUfom-PAup$mF>aP!p7NT-9pKvKMn^(_VATSnj6W@_Ixp)(%E{w@{32E&8WD}GR; zN!3Xi5|c4I>5FhAIf|QJLlz~vZ_+k%Vh|O^4dA<$Bs{SE*GQ!z&Q!A}eINd$uAQ$6 zOIySRmdu(CiHfq40zeN=TAgOiY9UU@IS@46{l4wqRZDE_?Cf&Eq{yj|z%>ze+S|$y z(s{+D>vim|wQ1Y7Wrq%1c3$GVRH$(^?!T-t%<`L~Y3YLZrL3^N3>KdoP8-`8b@cj7 z0OvJ!Zb!_vnCj3B=Dtpmphrqr%_)b#lx`2$4N%QX{O?d411h|8ayH8hTQpsIR98{> z6Xt}6*4)@Q;G8}d7H>{=Ft*@VYNBRaOF^b!TBJn_fzk9=6IU)$;#x&ck zB0_#ipXLFcDMI~Kr+0&I33V5wN&dYi>^7@ciMN+sD#O4o zBCwM|T-|?HD93S8_Cd%yFhMkJY46^>%jF1*j)}Q~5ctr^lkLh|>S~>6456(}pvpI# z&7-BMr?b3YsKTtc>GkO?(A^(ot!`)tff6_->DjY==Th(EJE8s`J}aM%1m2%=_=1mF z26GZg?GlrPA{JTjW~mx|)Y=%bH5jP0v!~PU5=TqR^$X^;b~XHaF4uE%a&k`(t=3^F z@~Ot6Y3F4nrwE-%$)wNHUkc4Ry?7bhMoN9>H2ldbw%)hm7QOWoDYi`5-yII@v1{-UEzOM01u1BGl zoJ0QzMHQ*BUcud<=*!E2Bo6D+d@9QOLXf+@1onSdPnnaLhd^k+-{ZsF(y|%{e}tq{Qpbb z8lH%b)}h@1S)`-4ao$;cSr*}MMBKu#(Dz4m`d>sNk7d|K@{obeb&O0;cD5^F9 z!`HO&tB_ph#=YiNhE7@uuIcaxGp9pDl|?0=o0OM*8@{n|ub`dFhRs6~M6x*vyrKBh zH8GJ+_r#QuPEKn`f++cG5mQLcj~qIrP%}xU3vUi%Yu2Zl8Hey;J?FpIE$mg-u8~n& zQ#JS(*II-oELf_sz4@a3AsKD9vBm+Pf&&fa7Qeh!6@XX|N0fKjac#+_9oh!?afBSD zV{p-zT(#L%%wPk!)rpc((pUq>-J<0A?cc=(D~0{zl!mlg&1VuAR+GLIHEi9#e={;3 zakX3Y$$9+vi+1*Fs{ORPuj|GIJ))a-NY6!Ins@EiO>e(dxR989)d0)&%N)r*u{=vg z6wP;OclAiGEI^L17?Kg8v*+$Wi%&`ULz=h1{fG+r5GZhj!l~0=h#X^MqaHOW<<)OC z77Pjf*P$fK5f6-2=&$O6>FmMLM%9;|5aQ#LsrNg7;OYQ2>w|C$nN`7S3B6IjO*Mn< zDf%nnCy->0q{Q;^#pPxc=K^d`n6vfV4SYH9506+!%BEYd-q**+xUxl`(%EXSc?*f( z+YO_yYAbvv+8cF+W(+Bp07u7tj%horxsDqRDqeHv%0e*GRYfDCL6F*WetaB#W_sr4 zo#mDLY1L404*n~`*2&|3oyiWa_IAD>^SFFn=XR(joGv5P-HA4|)V`EC3{ zs}p1M;?g)+lm0)X)Y3bzz{J!n0$e}9MuSA&kvqm+j}p>O(VLLr%}iG>bPO^<{=1#peu|omM)tzREaoo0!?0BXf$b(I{K^m zs)tPx(5`_`)}JHimk z73sE#iAjWEEe{fbPDrFg!ocC9b!TPZp1=IC?mtkpb^ozWf?ys2&L{nNMLVH>_Zg#B z*MAk|t~P>O<2X#>)QBNEfRY?L`kGtJ#P4haLrc_1|Mk1N2QGSOJ=0J})4eVjxxwbw zUyvB&{1(NyBDEnXmc){ZUTisJ)p~F+9UeReNze;gg?`T!Y}04+YLB$DTLD_5`Vq~+ zH0I9kz%~DQr)H-VjTbBpOT#3$JZ{sVRLBSGX~sLfnHs(@Iba5+NFk*__`y zr-BA>hN_N4#_CKs^%Fz|A|7WM2-Tsv+Hvfu!SzrNS|9A!1lm5Cwc&Bh)qJu}$Hg-& zEVkpMq|Wjp!4+mhZ+}gLZm6@HA_(*6^a$AC_xSPS&S_}`1{RH1VNPGDz+A*1l6=xP zlYGRr^81Evqj#G6~99$K+$Dl)pT5{X$*avj7 zp*;D#!KnQ5oqNk&_6Ka*@y{wFC(akC1o83TauyC6RGDXnT`=4tl<*zjjcJGq27nCU zxuzWW{cj79fJ`b>U$|kbWE$tBLkJsrwZByC>Zb2>>}uW)u=Uq5znOV7B)Pw?UNL{Y zoO3Gpoe;|K{-slLmW&#^K(uOy38iyfyu3Qf10YOIpuA9Qur)C1E-a-8I@~=yCz9Z} z4l1gp>}WSWVCotT==A{h@v7i$phGDufLTG92Qx3xw6U5jt|?WM6bPa@*ypedZ#;NV zTd-tY-2B}w%j7hD`&NUbu;UG#`t{e6#gTnTK8OoH%=u8>2Klu_M9O1y-sv}JaD~1m zWYsGukP+zKEStS*$-g{C_M5ysA+US;{xE?_i{Vh}yut4%YRl!IE;-L5!Zc8wsY-0; z&J9J@Krg@+5YkIr_D;Uq&%W3|t_PQ<&}9e+8AzT3*D~9^hMBqNF(LM(O^d|2aOrR8 za7q#Kf}}u7HLj@Xk)A}i;*U@2O4k@!^PQ#pfnuDvh;-lVX=d=I;=YcioJ1iUq2Pa6 zyK5pkws~X6{3l1@Sx>_A`&zdS!xAg)ChxLIz4S>8o<)vRH z$>SiEO)SzF!y$8ksL`>pLFZd`q!xBaRewq_HZn@0@wioh-&g(B8L^JTmV8U?(fKLz z8Y#Kta(F%ZI$@ViNKgC%=@@NGZc!P?W`h61gEqI4rEJ2<-rnp=N60wDw7Jf1G900w6TTOI{#AxguJ_>4NFz6DUqMJZ}>Vvg&oHH-?4tPs5<^QsLJ(^J)z^rW>TfxjFsj$b+ zn;VVALPB!wmpyAj+nsr+WzbDWg^<3OZ>19NHE-TEgll*D40m>}k77|7(Qd=Uy5?`% zQcJN9vmPMwVcUJ{FdQ*5J?gm))e`Yfut>N%wO)?$ntW@DrPvSP|)Z*@Z<568S1u$VULH>jhO(tWr`jx3m6F5?BZ(w>?gE2@D3slI zz6Tz)%`H~bq9Wq8xtkuuTh>+ldDRF$-Za!!5JSkBA-5Ua?C))k;c>Bi+m9@p=k4ua z(O!MIUBb|BAd3mG@@}hF3o`PZMXei4RM7G#bPjmL@=&f^N3kypB@)xMSVxnj3Fc71xaiJCuod4xWYmU30Zwg<{s+ zZ1pA04Pk0kt5h+-%zE?g+UBL*aTS*|6KY{OHv!oi`Yu>EgffadZvnB;sxODA36;L6 zC~tkPynzCQrx5v)V^g~LpG{Qr=K$oF#)b`ku#an);TMu&ZkE)K zVX3x}r!VjJwXBiH0jwk`Qg^n!bFz3yRWLsiRAw9cx{-G_6HSp zHAJMjs2M~)5$~l&O#ZswanNwRVh`%yuY%Eyc_!0pa!(4Sj{OK?l8#y42^9_ z?3~^;e9$!)rtHrAlD_5^k1!b0A)<>okd%3)jOP)OQ!|QQ8-M!u5{gecUhW6XyN^_s zZh8$xI<^y{S~5IuctZV#)&Bd!uRcYy>9pt;R@&O`L9G}t=uoswq8{S&Z?=Q2dL$_> 
z#h#mY@2=Xk$tKRk&9%|}f10maG8!@(`^L2#@n3Szjj*i77E*FQeX0Q@mKa0vJ2A!h z6etuzc*nR&Dsv`ti+IFGOHO)kDjb%TS-9r}s=-dmx^NT0rLRXFv)-+a zdYITRt3C0?uYV6)lT7_pfw6y!p;9-(IxM8Lb;C~18GU%2NH6p*;i6UR;>nY0H@p~& zneb+aX81msS?L59kpUe(DCi>I1$zcdhcXkh^XFeLT^UY1y1vpD(oWF363vhV zB<^1uM#a;2YJdaPtUq=C_Xa}cabQREn!x#mTmv0as9k12_e#Bx{hd5E*rG-ZZ6PyW zTx!BQMQmrrcj@i7OSM09pgslKbP{o({g?B;pFt#IBe8zc|E2mo=R0FfJlNZB&Ry$d z5IXK})vWRjeKbU4rw8~Mjj4V)mX@ztTx;Uu?^T3B)5oD?=FI6#2Z1GsbRkEYF{K*vUQ|F2MzMS9{%tJ(wqgwi zaz;@TE2=ZBAtyW)g!|Uo=DO=t3;DYKq%TE9wd+?fm#6uGreia{e|)4(4z&G}i}yha zgmcGNU8~dLDZ!*8mX2(9&a0+4Wo!zuk9d$cE%hpO5eXQBYF1WP6CBDh=kn_}S}e-_ zC+eiEHaAdJF|%T)Xy*WrhxPHmtZ_4?)LSS0%q{*_i7Oi!m$Pka0mxQR{^3$agpZ%E zchuP4U+oY-DuDIWxMEt`R@Z!P3lALoN^3wBS zFTRM%{O03#&ZqGEvcv;xsf~&UKQYZG?DzSD5ZN+rZS>=Q{%1yF;2)}60aSY*oZR0s z8>v5@N14?R7QTJEBlcbvef%?bRZm(;#Db1hk%@W*<0Y_I_TTMP38*$j2_3E%Au2wE1W7Yk&e_tr= zfm4C_9XIWp?Xkgc_kMK~@hdkT8@92|tJiVf1$KMis0W^vvBFnZRl`{C7mHs8b%`nR zgi$8-#DWsgx_egsfHgIn7?VD>BViDWDF?rjZ@6T-BN!i@-km2r!nMEF|Fi%+?fTt` zk-@KCP;2d2CjpAih(aT7oGWX`jZI9lWW)j~I^H&@~hp&t7{x!HR6j}ITY5|+!+Jzb{RhCRKtw?`C1Ioeptl!_WbY8LH? zvE1~1>Kl#;bEj--)<5T;jMDL`R6* z>(1Lx>K2Wvm?>Tc;Qad-ei<;n26k!6-d6`dTepdEj5)>siP<{Y3l^Z=_nxViThYC# z7#jJ9W&Tre(e^er9?*F$o+m6wq;+S_uL}op5rRg> z7*TyuM=gIPw}rTJT?p`ty7Viez@od}t}Y`6B+Pvh+K%!qO`0Quy0ZQFMl%^FAZDL9 zo3dx#v%k7GxfM=-p)=*WE=?-=H(^7L;X>@OMt02CYd6LL8D&AgOF$49m7U!8F^KTV z%{{W`zP%ex#+sElUMU%>rlX8Y61!cb@pqVGf>hzjsnXV#>}k7{&HHC$A86HXbnGBx zI^Cz&_j07lb`jszwmc&WuVImK>o;Cl{5?W7ol-;T&0}(=aB$M)SUe@>2=?>&Ewvab zLB<~1yT|MqOWQeM>l&t3X~7V_I5vS-7PjPFQ+RFD1s~l_?RJz-eebZe>{d$m>Eaz~ zJSzH-IM`94dm1@DNlzxA0J68Y!dtoZ(WK3Hf6{_`h926CPD<7j8NG{ewHitnx|YZ&pQ1>>7&o;x)~-TR?_r(d&HEoT&e z+r2*QfDrP`IJ^AgXR>bYXd7CRP(ZAhxqbcqy<3yEV|_3^6f0Pswf@)iVI5>{Pl0jB zl;R=bbb@HqY;0sHPL?<;2Of?rv5OqliD{S*r+!jXZLgbC0qfiB${Jz+g}+!Uv3isV z4XICMF9+|7gwET0uEu0%%I1qdoIFSSs3~$PcLLXKf(b-2(QAGYPq*U2dOSY9=hC9&`VBl^yI3JkMi`hC+4tW!epA9Z=9m^iPy!(` zfx^4hh0&L0>;@^${I)=jUNLos%wtLJN_|>2_Q>#lKL<$-0_FL4Y-I|z4sF`0Nt>}_ zP-GwxoouS1tmIZsfUMk6*LXB7;Xn`zy9S@9-3&Ac3PmIKc>uj>szqk?Q zo9`_}!_7%LzcA-H*KmGrUV!YQZws__jSjKjjdqXpH#-k+)hh2IQyu%+u_vvE-ab33 zl8%2X@>(0#HUZ$=XWzR}#a-fZTxdT2&JbB+3=^4@IA_h%pRGG|i0aysV6v7 z7`x|F()UKay+`qFbim3PEB1%*$z2{je(|hxe6MQC2Z1TAdn_wkM8V(synWe+C#R1w z7i)5!#V$5<8*F3c8A$Lq`B2qsD(ORJ+o;446j98xRS*TL zH?bUDGWwTUcpWBNKE*Uf=AL>_)1~swDwJuO_GCXq$zyOc45qzvH}VI*li@Z`&ocVf zOJU2-PApn;iP5CKw%-|5dBKx~RdL|ZX zI?;9IbXf)8bi7u-QZrtJFPqAGXy?zMx9khEU3bK}GU|*klUkb$W7#(4k8jS}70bfIjc_as~3D#hI_klQU?OoS8oK^0GhBg~w>(kr`OK zJx{n@Sz*fHE1S80H@;VA_QgY(hYpsD^YdNGk88cp_+NAX@+1ILIZL?)J zW0xt@zA}Nrol1td%%g@WX3aB_%r2{ne$qv0A*PqX&f5BJh2Q-(nU;Kc=JcqG|5%D? 
zXi!j4=V#(*Ohe&}AZ@d30grVp$CrKJkS?&OY1nJS0=4q7>}f+wnOB6mCH72*qz8Z4 zfNe4^qwbOUkTP{7@C|n^rnjz3iUwOfNPFC)x@-grTOM%qG62iwsYYNo`TT9A(~4$O zRMM@mTfc7I2yXuLnJqlb6WDzR47a$?f1c(`wu;xj>?u5nWcmouno%`7i+_EH5?^3h zU}hL(j--}sA22Ub3es~!!ntgdN=d4d&8!sNS5{FjbU^S|HDA6{W`6NJWnPC&afwZo zHZ_?M1aq4%STC`E(pJofA~>aGvE1I#?gTnee&V^e$f^<~b;?x#Er;f$_1gAvqXqT0%mqDuGV%UwssE~*=N zSNk{3+8Vh}No(%kidKE3%L$|lKTw5?o@$dQujK(X-B!)1u)&1KK z$~6cjIx-Oi{fcGx+0FbvjUaGU?S@%5Xbecq{FjB&xEWN8P=D`c^{iBx!U0~s1rY;L zd%koVctnH{Xh}1+HZK6D zWp4PCy_<(v4OQrHw1cBN`yM8n18%Kg1IQqkQA;oJTWpIN2egx(%W7TulLco#q#UQ9 zfz(ZVG`ZC#xP!9`QR&<;S3+V0ViVO?j2?E-zYS$D83)p;fO&n^KcBDl5&vC+)HKez z)?e5)suYvCz*HfpfBR8!X-;35N{E@e5%Xv<@kai$cbd9a4_Q8Du(+pi!#C>HYcwxm zc*3r$D={bYb}32)#De5jo7%`2l)AI zQ6KbcRI3Z8n%Uz=OV%=6rf8CHe|+4*tpm)#BStHQqS*StIGgl$502Ojz1lZ=Mm=RZ zYD9Nhs|1u0>ZfF!CB=}ea`2ELp42}I`lU^bxj8W=tE=|E-|EJQy zr=LUD>6v{doYyPUGWN!4Cv}%sJb@=_-KJb-M2X9JfrTxTV#ZRlUhwH#p+^#s{~>|z zV|I@~NBp%;$_z<*m_}Y&^0O=Q2i1Vo)xO~2HQ>KV@}dKKFPxP}ZjeVWT}K8F45mbT ziIIEmrKruiqX#hS9X}sFzYr`arvmL`QXY^{>mb^!wmi_a4!n zz&0$3_7>YGE}HWzXYd zZL*{5>|p%fhY_(FGTc-m0Nm2Fh|8eR>1?#@3k#!Ax(3G1n(GP2<0Q%Pqx$~E6SO)i zJI~VkEx4JKJp&#)4pWEfdKI6iAyfDTY<4(3)uu0o# zHUjoDC=@2)`MiXEged45d zi9YtwqnRZCGAofV*Ov7aHpyWBk7m7E7#g-Id%4zTs9J8Wt;_kyCB4d^guW+DqR@Q` z(Aa!Ry0GTNBh49H z^U$Jbco17Wj3|>abwj&>W4;~iwx{Gz$&g<)af@dG8xB#sy1o^T?G%?~L=I+)-C(!_ z{A{Ndg(UA`{w$bxUlO4=Zt|@l?+-8jCg>4ht*YYo>)o%^e^^B!UkgRt@<1gD3 zu=9w6yUWAE4IM5WCsxwqHj)l8E1xBRMn+zZIMBzZ^TYP4>?pU67^AZ_-d$!4Pl3_G zUQEe^**?X5pp23G4(#2VoVCu@a`D$&t+#h@?T2M_?`MS#0<-Y~U@VF2t%))3=+2Uu zz&tdK=&G9jk(4@Qf_ntpMgqSKQT}dYi?&^+dwYwQGmoHtl6yX-&(qaQ750{=)cT z)tu2)qNbXd-QC~Yht6A%aAqLWhaSZVAfxl-tvbqL8Kr0JFanE_OJ!e&$qXPkjm7T{ zuk9RM)$B%eHBGllGt^(j6OTDImXcjP^$xBod>gBnRXf!*z}|GxyFgpl^O0lp+UnmC zIke1x{QBd^W9Gdv2f*D#J9aIupfW;+M!Lt`)cm~Lda{4vWca9^9H)}PViJU99DKM@ z>JOR-qG0?#@GZpPO>%7)F4oJ+*&hEPD0E-1GHclv-3xP$v`ARV?De4V1?+m&SZAci z@UJVuwtG)~qvUXigX#_#m?qS8#EW^KKOYD;Q6gVYV8G4fqg(p@y4fbcFG%;_d7>P{ z!AxN+=xLh|(QfMfHu8bG8?$>PU;@mh<9^M!nUgUlUbf`4=`6@IRQIXE*pTg=^Gr?W zCS*)Ogx%4A2*$n8{~26s6wHrY6N-@#ncM&lx^>lA-M94Dhg;|2=hoC{;%Ta-R6REZ zN_pp^o*7k=__5;zDuMB;$r9*2C})pEg~|ZvC28}3D|rlQWL(dCA}p5y7oDU4=weSo zt-S}N+>8BzM*CFfDYq?x1a(8ROg^8=BJ>c%1BX57&0&M%T;YI zeG797E!A`i=2G%UDQW?)E#t#Dl6Jj3%f33o0&*vE2Zd6b1CR_6q$K(Ld!m8ym@~(z zkN-pB-RP2*;@|T3>n;6t-+c~addHOW?dYp}gwUHX|A>z)<}!gw9zp>&UHs0VF9fj+ zxn(2};*XO==l8t4_hj__N^>g-43Q4XroJCL?M>{i6bw|;F+2O6FfjG~%HluN*u&9( z5%`^NSA7Q@dlIS`Swa{kxGXLpoK5v_8R(BMdUO<{qNq4tL!} zJl$awXHsMP2`EPOH!NuGo12HJUOZbEtg)}gQ-8(cVC156Uy6$zpd}o^rovEA-jeKU zKFyEvzd1QM%vF6r`WvRDXMB@s82o_U3%2Vt!nb0PfrQU6B%GeqVSlI z<4E#ul3FFVVp4rK<%zs)2FA=j{e!3yjmGM zWxAv^+U9p@>#_L@t8i3n-QNDA=J6kx+}YFK?ha^>P+HK_ONR-gaWc;HXkn9r?VK^h zAJr5MD*M(c12mZFQf4y#Lp`$}uZ)tIJr)zCM|DaC<8m6chG3DVm0(i?eRDX%_ppYoq&jxq=Nr>r&CXJ%@VBSPReCvmc*)NWUAsQ?zHTj* zG@GGbrqneYpigX4EC9@hd2^Q5l@^Q!=iBp=_|tQgo;@Kw!zLwZ3FX9lQzfz^s6|jH zdFZ|Ig7?R)k+u332TJszrR$z|`y6Nf8T&wukJc2ZvK4-Ci}a4Xd!*k*H1=PK=3*DA2S$MzGat(_pIU zjqx;7NfF1BGn~|K`(>G}9DM{`<~yWT-DhT`r45HYbzlfNC4TSMH)`xh1HdECAXasG zUXCI{#_;w1ku|CNtqqZJ(@L5qkGmL>KEY6h+DYrEzx8`*>$|jAe*M6M4sYdKN$O1XezAokd?Jw1=R&69FQY%Z9ag(?An%_lV zzgjiuwWjztJ1%Zc<}91?;<_0l?a)CS2b{@pSQ#b{LBOoV-o5gZTGBR+vC4 zrZe_btAGxUJ`DWMiCeIjX|0JDFBbYHo;$a2kKevNefp#~?!*u>5JYK`b-g=(c2=vU zsTsKU=^y7xi|#iwz}bb3{;cF@)jw*>?oaogH_zV8ENSUM=Y~;p;OM$vyvbv+b8vXr z_uvQeQt{ogq&X}w%zAgw%VnH8M|e}yN|!k2x$pHa`?R{g%Y6>BVkngninjr+jnM^f zJ3wz(?mpoAF{_h6P9#cOm|%1iA;xi3!ZOZOsGn&EeEzXtDzkh!JeKCZsJ|=!?b}_2 zzDl4AJc4|uoIvWEA+ERK2!cNrcxdodD56lX1P1`T0B)+K(;_My$z3MLV03#f%CCw1R z0=I5UiP;{~0GO 
zwzd!ujiR$+*-$>=5yyQ-r+rTFpT&%m+`W0ua22CP)|KY1n{|uE-P33O5RyFL83k+6 zF4RhBNAOYiwjHOi-+9uc{^E^OMpKqwR% zh^e@^m6*`i8VLw0T4Y;uSsTK7c z3`+Lo{jm`4`(vyNAutp6R`?jsh2cMQc#4R2nB=u6xBcgWt&g*#y&4OKxiv4}vG7)Q z*b&9cOX`+Cql4N^cdF#omk<)aN+whBYw!8ExBVNBH3{0(@7aZf1ThX~(QOtgr6Y%dWVodZI z+BfENpz{Xrc=fKgaG~K-w}olIQcu*-5tP8fo~}rL_9?q|A5wSurqhYd{QUebSo`3~ zJ#5}uudzP&ZC{^##hi@({rm5YJBNUNuk8_DP?S}$(P2QVtmyMy7Jr55QH4o1F0r}| zGdb`9c?jI=^2n8NI3u_*1D&%vHTLq6dk=0gIKUmSE<;zTY#n90jeW7saGw2Z0?Wh0 z1FK&A{rSRe$3%9%iOH@w;*8)abVQdn=UJz@S%~}9(c>huaBnh+N@@=aLAHNB{{yFsyDsI z0)p-%zJC}u-I)~06QC;PO*sx3uNn@!OuF@VteJ=KY z;xAl~VbJFSqss7B5j@%E%7{nkcs|%orzNki57AwwVj?M!anCXukx)1t0fm%( zp;kZld#N@(H7|9qvt+LXPzP)%p=YOk()MpvKuRQ>NP)gt0I}L#m;=fkHrac-OxsLR z`Uo*I!fd&^=pmTE%wwE%`nv|t_@?iUx^qsC`WSnd9tdvOMn(&E-j3dc8lKeteZEFFt?OUW zsERmT3l>X=`9L(&2l>rhX-M5e{u?H0XRCwTq%8^&yQ5)L=Ud9yG{^lQJ*nQKRk= zq0s1nU4d)C%*Rt>4ZrQ2L(fON zAP@FNcB{G{Kj55;+0+s16eYJZ)va^Zg>`BrB}h%W?Iw(6SwBc+!o@I?fZe%CrT8fL zUa4CyM%*wX{eWTL3p-x$XxEbOhEZsi0_ZhS7IP!CX ztMjRGBV`;qw4xa#=~b<@#w}W0_10C^NbRq!6x4?Yw~+~{86&^cvQ8bcKu<^K5ZSCsUWAkQb7%Bi(H!^uh& zvy{4x8%GjBDc5Bs%Vb!eVSW#v%?{hMS$!oZ%dvGQGF6K;03zl75}*H`gB|o?){Lua zGHAL=fL@O#m7=xL8$ohqbaKmrY@Gk`V=u?`Mze+0afWlv`<^yI;>a`Ws){mgTMyTM z0Igk{mpiZyMD9Ys&#E%4wpow}d-|7rRON&?FYO=`oA_@4^4~731Z?JRe$Qwf>rF|z z*r|%X5;^E_HGdBY`QN8eCRgJxT3^v*7u4YnucuWxHJQ~ag17?xjwbk8Bju1Hb6o_& zsOfj|#H&xNEQ?A6h+baN1^O08F)Uhq-qM~hasl^H2k1e5`L2MuOP5q-AKKL&0((?3 zOWF>JkypK|hsJWqF%RK|clD(9`bAonLJ2P`|2d7%>bZPN%0?Mq`F|Ug z<``L|8Jz;H%*|JTGBkz6B8sieeK(Q4sk$YJ7GK9MAnoAcl1CYO6>VyFFz z|AyX+z8rd6@nt2~7wPr#DgvWjvWAo%R%#vP%HAAoxt1f%hMh9J@AqJ_8a%C@%jk4&L0t^{Y z3WkJ=Tk@py#h-RnGe)|Np!BJ`uWcFXSsCffF8}_}ez+_pgm9;{ zH?C99HNV6J0M(fyM=}(WDcrZL4yp4yi55U43B0M>P%0u$l`pN*a;uv1Sp4$9tnGQn z`XXwJ^rlMye&Y%2!F)jFQ=!OCK!cn4G)3{M!>uP->F#N)-@ktRI@BBC)J#eL)uPo* z{fl#d=_z~t760w={&THmsuUrG_x@4lggk`de-yo-o!lOPwD{wuMyiJ*%cFgNSNZzu za({N@#t1zOWm%z$a^!rKr5ruVWaFaMR27d93`OWrt#XwrZA5_0$)-Aoq`rkhksTdQ z%a#rMoO38zdq|6cL6C${u<|)QmrGjQ%yCh zAJO53rf=XUn*YvS*>T(;LiWQG(}wdIitUO<8VYBP|8|pEvv6SId&0SzaB>(vc0mzP zG@N{r#}W;e5feDoor;s#z*>D(vNp6gN~FU@YL6XUp7EsZ4x3leGcIQX`;SLkol!ei zkqmKV%9azJtF=@~0%EyP^~64^NCwbKvO=CTp?WY)`xcH9K71KjdkP&Y)(j+e8 ze-(nyxJ`=x1k@C__xC1_aYi6o4ZgN*OND>iNfHghkGxfHreAsKBv~q|l`8Z)1E3wlCN^sFmGK7%yZ;SAs^cZL<0}gLqDKf3@ajK7xUNy7Q~}68d19v~YW@=<)wPtH^+Z;G zf9){2pJWbP!P^`aqhM8wN30K0Jqoyn33YXnl2!3kMhgRgZ}3*<(r&#JMV?&MbL>>D z4=(!^E{{Mu-KoCS`R`U9g0q>fjjlYjPLn1l3DK%^Wx9*H5-AQ55bDIfe^+|Zm8YGG z<}Ghq4KETRg>{O5B{#CbKS~jC1r>fbmqfCTd8vPIqW zMyOtIV|&Oj_=uSjy4gL>kll;t`=om7>OUll?6$PLifY%84ByJ!U)0~f?dv8AX9kV^ z-Q4d3D?{hvjfx4Q99|2$VBP{SyQ7L*v?-Y`E3G8?13^$@(2Nt z>TbgX;XSLGY57;UrQ}YQ1C+hTa_4lZ_kXuG_*Kn_Yp?RvGi4I2igV(g7@Ey_kt*xU zVE2huNy)05hRz1_zrn@gC~61eCbd;aggBDEJL8yaZpnE64ZW@Jv(*y1XzBb56h!E< z7&=5Q4|k&pgLKDfOm{I4l44pIqtI3F6u5kq#g(i&HkZQn3)LdgmpoqDpZFm~ zshR)m$!SyH&!ESDH?S13XXPaF#_cA|91^~*RQlLH?RHsKjSf<6Qj~+eB z_l|)yOmw(d93&?jwi46Ron46)*MzDk{ge$}tq(mK&4OG{doV z8#|?B9&LFr47)ZPs9?63pCRFl;GzD4a-to@!_;Yy3S{thBA%(0%PB{E_&}A4#TKm{ zw#+<;Cgqdg_x70Xc6pf!BZy*oT%TcCNeQA7C!S~G+&}^RK*104@z9F0{Qge+?a}&w z|Mb)urHMFh8CnWOMF4ihve0_kZ7UVTqeLDwG<~K(Mi5hYNx#Nn#}7K~+N?ggmzIB^ z$iE5zbzS9!AirPTe%$%cb+J>ZlXTAWtN49TWcxH*97l-zwIr=HX(MNR0Z}3D{XWU* z=otd-kGefc5?V^RYvyHXPZvN%T6zq%mY{E=bxGTCPVRYESC*r&Y4;&+rIgA+=}gdE zkjChNX@mwFFhB;~a36Ftr>MaHXp=~_cgM4y*E|&hdKLAXyt&p_^V-Jl6@3lh>=&bw zDOYL|@^3|aqeiP&yY`&SNgSu?9P);iXz-1lPdyu-29;(RL?+!0^@sIk4PQajH|x>S zB`rxTt_v6ntZGQ{YeAQWPmVesA2DEn&sU;ZQZ-%3omnn9?2pkCF?tedyJV)6-DOhu z_2cy6b@T>ql}3OOEioMM_2jUydrwFhKeI?wbuC-^`(cag=N!g7^QPfx^_- zw@z6q%m_*9!rLXVVoq>!aST$EfAWer4Dj0>361T1Tjo}0@Qr8OuBc!9{&*}@kkjpX 
zV5bK_g~Io3DS(TXxK{r%ugvcU;fh3@DFjc@=I{}bY9unoE{XAu3&*%MlR7nAp6X)N zo44{1p&2`p^D(zFsdjl(h}{#gz_22Vcf>pBeJDo_(Yd&-l^)!-4Jqvq>KCeEgi%4 zItyr$4QD2m^) zpKW|TB5=CTyK1^?b(;}SKd3#<9FZdH@k-%^ z4cKEnhG6~KSNURwJkn{#43on8UY?9!a5{LJ<~Zs8proJCyD(8zHnTkYdv_n7xmL>y z5k8GM+|Ejvae+!B6ep z?N@mx-$>wf+m(E6Wl3M9;PuP!pYx{fwN=Y{&ere7_)2MvBWrRb2z_q2Y3KIxgqjm} zcb@XuSowu@Vh|H>9AYQm3RZVpQdTkk1rO`fse)^)y-U+Gg0TAtj-rBGmSjR8Yh*;u ze!IULOch0vGA!;=g;L)DM(8{ zk9q%^0@uaqreFG_3wyczk3G-j5Km4R?LK`v+-dXzw(6aMvS4rJAAu8d7)Ky?572a=J9B zax+?##Zl#W^(_`P7}%& zRjH~WaXIWrZZi~j^ErnIV~Wz)mY{a~jCqnH{%;Xbk0RGZvNXeV^COJeu)GVC?DBk< z*CJjI&bxQBf~2cWr%odwmI^$#jQ1DRb78{O?QdusqD?ABbzIg&yf$VGrttlKzNvf6 zr2Nk(#CA467sIvV;d)U53JQ%Agc#3f@iDBx;8SPyV=F68vGEBb-PPS5k^<_+d2Zj+ z{WzRlbuF;c))s>W2`5d^?cg#FQNbxrNKexMu}pL1aY%=DD-R;%7|dG7r}Uv%D#0O` zDR5m?jSr8lY{Qgy{t`S<4y*VV&Os^>#*)|_wc75`xvaJdH*g%a{qfc&qm#eMww6O{ zie{D&zLN!Lof6Xk%X*RP(w-ZiuhE=1Ewv_qz_+Bcs*oWAK3#|B^lM@xg%FxCjP=^W zPbv!-qIUG>R_(5PQ@1F6#8P?wuQH#DISHo=jQnk=TS*U}^z@`VR`w0#;64s22nwER z5W~M0o~%~*=~KJsPdrKwZdto#P1ut0pxfa5a+j3C+0tB^-231p3UG5W=TUW)T9Xi! zNV(o~dXlh@q&&uX*%soV&-kcK!=uT4Xp*&3yVOtQS_MX>G901e#rRGO&aKpTOC6^8 zccaNOyk%=!Nl6{}U6E&QZ#VYw)R~C;?i84QUf=q_{kp^Z6h+Zt@Xi3}@$o?ZLng0s zQe4WPS26BG;j#lhZySiXu;BK>4l01b(H6M2H22dw&Ur@+15QwIQ$fkB@oXYPHVi#KcBTRpB%e_?rOK}HSRg_GS2P6M4R1xZb5U`{LG?|M3Aag*od+_jeBI^ z=Jy2KO-mphX=M05&I9HddJiCuu56KJv;0e}Evh;ebsi=sbdfK!1D|hK0btFE^P7q#IQyX4<;fN;U*g@q< zZu=F-?)$8mBbyudbYttF3EyG0vmKQc$`XuHtCF88|GiTG2JDTS~J#q_q5 z`edz6o;cF3PLCGjo^v=!H-lR8C-pDA>Sowu!{FeG##;agzzW&N9I+w1V#k@P2!Z+9q}#_GX*a7FCi(7D}W)@qE-s7ZfdE>!XI<% z);P0X&D+m-X{BDN()Sk?Dl>uqZf-+zi_s?hiq`}-$bDF9(yi*+h!X2=Hzragd3YVs z^_rq0+^B&cTT&->LG3(=%fN{sI;5KVys1s2{F_tx`KfyaN=vuLF96!_^ z>_VdJs3MdzjpaMkU)G^Gufse26u|9s}pTd379d(&~)l%ppcbDfs z&y`ZkdruKwx{n#3M2}-h(l-*iqpNyn4w2##TA=Z1=)1>a+9Vg`c*sl*GyGc_8C~wU zR4xAmN$6zWVsF;bl;n!coc%+)&7bL~RH)moYOv4D_>iWN(}2b!j;kAwimA7@#mX@= zPi}vF_>r?-TxsO|h{szWd-~jS2!G`t($gTfa{q8GW5)ryY6AvMtsa>4^H0&$(UsNp zn$1-!nRYT#FKC!6ng+xYHmmw(#aj6Ne zAPyxlwE+LU#8g_$y^ywv4Gj-!#f`C!LxyP3g1ixlpLlKKNN4j)JC_bZEf1gMv7>*G zG3MsyuSKbokW#fq!&L;%iPY~Ee{zx6LSMOe&_(AkhIX4}{!v^;!G zLm4z=$SXQmsT5TfN7+W4VueeN4V`D{BaWMh0nn5$m&?ZW5_$3sDnM>CW>i++Y}jTl92NK`}Y{~E>shh6!!TX(z%95 zZmq&oeg&-&6KS0?>d+y73qDlW7)Wus?CXYcRj1m=%>3Y_yFPi;%lynkcb;R=gL^>4 zmtjk-t*gt>IF{NZ`dDh5&pCH8rhILk^_9~O9PM%Y{GM%M-NaVL3~j4#ALeBA|5IES zvkx`Kf-_#p%%q-ny=n9Lc}h{3FCTjLntRdI)J&q-zUR7o1G@{@^@t z2CHFbwjOAp#Sm^zt9mB)B4}^h#6kM#Nf?p=o3KiOUHjsgppR)WekoB6)AI0AB*T%> zNgU$Miyln{=E)R`(ie=MU_RKOS+m-sjnu>sAC&oc(Q{V%7qms+?p=NlVe^I4_)vVj zKD$UPp2v)7h|5_=76|5BNw;nloV?WIt1EusHt+kq*hObjRmAGFy4>o!jO|v&CG2A0 z2dz>}?7MZVwvkmb*2w{EuG@bK*)yw|BSr5@ZP=U@H-_%1{~kNd-HcXanIH>D4Y#8ErV$a%N3@czgiPgG>hDOEy%xJCk zgH}9z=&&w%qJE=B1MxX~d0t$R=(2jesEo~zby=lJkq<5z_1IZ^W@`%ffP0JB=`+W4mL;3cp$~*Iip3|uPe+tuLtAvWEFf%1f zNKKX&N`pj2$<`*tG{sm#s-dL4gvmBh)I=02B&}LBBrVp45@O0!j3r9n*WG-7c%HxE z`Nhm+`h4EYeV_X}*E#39J_cl7x@7A+k_jX;=6@vZvEf1APXPox=}f_T2H2hikt0+9 z;q@a>94Ia*kdPaGDOA{i8iLRLZ=K7e-Hbkr zIVd=jChkTYoxL@gnMa}L38t!*dSLmauH_VIp->l{v^rNJ5nG+g@k{N`5qj*SK8qGD zvic{agq9jV2B2ar$KtTE_vq0P9P5F2aMWMlIh?5yWuv*(w-(E2Xx@InEpKm5Kx5GT z&hG=6h`=a86Xyn~h&iO|df*YG-xwh5bh0wo8yHOxUxMtV zP$|Fv5{f_=h9FWR!fOBi;f^yp=p?u?qX$)l@ideSW>J}=D@ONuAjIw4fO;eE-;~s{ zWy|5zKbb$IVNVD#f32r3tjFzRs+LGuOMcB7t^+!$k)Qi^ebNiag|l(EwsnHY!%XgV ze1v<@(AAB&ER2e9+T)GephG^$)YQ}=*Y`sGJ{zo1gtjZgjd3f|abZ9ybp4G>ew~{p zm!3F(eo?S|-~GprM`B*lo{!){5M4dUX7(eD;%4!6fSAA?I6yjvsl zlEt)RVIoucSR9b_Cl`=?A#ncVZu4JmW2xz z6hqQTNh&KVW##3~!1odfa8G>kfrC%q7`f9Fgm=@F`XiQSwl1|vRpzE&91p-e_(8VU zh^KeZjEbOR-kJFOifpGS)-bfi#NCIw9I@YhNA5|_=$rX)~U 
zoDLjF*NJ^gxv802SbF-Twk|vu6SW9&gKhsdA1HNW zu9o7Zj0~p;y>DYJAHZ0N@rBM!{1nd{X9{tR=?ko_*8yz~ZAeN@y-8~CfKa1rgdV-F z_h{t0IvqhqU9_K_h0Xdpt=N6=wtEg9Ja|3$+`A8`x&GeSz0-Wpg!DiD*yrit(f-Eq zs#VKdYC}`G9xBphqkq}7Y5(J|2MrJ=9ERt+iYtksCl`Eyl{{5OM#kO6rHJeKfulog!gZsEAHG`x9(1Y zcjkNjE$9i1cq${%U9cCT-JRXUySE=ep%Hqe(-|-INvN)^b>r!{N9ZkEx$+jDKQtRD z=t-^-7aAIz@~rKy!S7IJd2o7>o)9!{8} zh{e=;Af)hTTUjZctY9ua7k|BEp9H1Qxvd~v7rqC>Z2EYHW_Lomf4aD3%a#o@(^FE) z5D;9`(#?q+n_v0#Y5uR-HpRa0{~o&YP^{)=mAURTCi|^930hd6cuyDnS6^-kv zA3r{tzZJqk{;}})&?|jF{b4yXV9ME7F$svg0#N_d>`l|QpDJO&Nh$RCxt}Igzr8DK zVs<38B5`y~8k*m$7gUI6A7ZHh>+_7Q;R^y#&c@0Z!xwJVMm#&0WE zj-=;32pMIx-kqVee70`=Im4^2N+?JE*k_p3&5DX=`G#t$s-~+}bw?h$wDpZ6$w)zA zp_H&0%iA>r;3{wp$mAIfM&cSI5?l~8kyf-9rzVa5pjjEWF@tsH$#=0`R1AGISg;$> zhy40yw{hyo?i3WQ8K}y;yL0;x81y1OqhY4Y+@y1tw8O*0n?7zhYG7k+onE=tcl&lG z%LZa@s7F`|rk1QRYnD2UjkKF~6ABKqX&czsSk!O! z=PiUJkh>{|QU|{xjv>%QF(du$jwQ(|u6}m502=b7e|LCCU)G*7MXBduR{xMx=^b$~ zhYpptF)r~X@{llpgnByxB+JOmoJ$77{UOL1&8`3_32!u>qO=5^4JTGcd1a*qwSv)_ zBh=LgN-zogp7T}~JD!y}o8`cj>G~xV)D>9wzrKCD3D@_)b?(Gq-<><(HGL&WV+>km z9A5VxJ$NooxNIpXDHVlRp?B>})x~G?X1}C2bN`-%Rd4htV~aaFV?M0_3jEX2LRb11 zon!N{M@Xq$3oLel9nIq_Qz(g&ngpn{hK7cM{D_aQYh=iB zIql2hzO(jKtKi7+l`lgFxdE+TvAM`YPRlJL7&oVm6 z$R{S4v{}}t_}gmGM^O8c#5#1-Q+p%@ZrNa zK`&T<`u2sa>9uSuC2uyZFQIILJ9vcwFqyJ3QpV8kg# zNK2f++FL3OYO4r2LJ3%RXvS_2TbnHxJy=bRMgKc{T83Hp-d zW&YG*p=86*C&JG(ETzOQdpnM$bOXqd!3tgddxOd=J`kOXkSf5RMB3I?=>PVmOe>5% zz|AjgB7Uq{lGU)Vu4%cCu-hOA0fGD>*r4TCpd1;KHR+czB{~tD>S( zfV0k=y#oqitfHt))L#9{0p~(Ni%d3%l#1-KV#ZsDwzBbSna&YX{-SQ)@+m)n?d{?+ z*`rE>Q9ePzfLC7KPan;3DPD|W+8X~n7?2_=PoO& z+XkyE$$MT@&SR~;{NjavwSWZa*T`l24H+`z-#3qkJ3BjfsinR3v)ir?!=};8kRt0MK^Br#YT^GxDcXpF*yNYG_T`}R-4Aj5Dg`>ClL<1?x5J2!DdUsmTsxD z4Lq#7>lK4EKQW1Si%gPHqqmK(fSqe2^nju@#l!mhYxCB2T~np z1(m*Iv7}{YW?FoyP-aiL;2Pkt;gRUmSBX145M8TU%e7TCHLlF>0%nL-<@6sdcU6UO zIQwoEfU&(N0cYrypSQbS;rU-uoqS@+{f7sTM9MAg$)6Zz6G%4x5HrEDpMLr&9=oAm ze0-c>esYiV2zW+mlo8}>cDlN_sJKQTkyvXfb+4+*iPg6(KI{VmZXH{J#m!}xi9hYy z{m&%-l^b4BQSl%@z(ISnioE&{n}T|JR<2srbK!cTasyj3n~IWfYPxrN!Ekq1*W%XJ z*7c|MvQX+Cd_+C=G&1~||h(Z}oLUePHs z-M^@>NFA2i-p6?>sr`|zwsz=K`vcM=6%`9@-G}}KKdTzItMb;QYP!`ofe3^xAb=%{F}T@ay{O@;phacDB`gzlc6HzCDJ9`oEL zm#WZ6wLJ2UFnmc!)}AmS()zmX`+NBT0UyvazrSzX#EDUtH*Vhi{Pmx3Iewq}!+9J^ zNO)SXA~Msh?ZbdaLDO8J5t=-Ya)vV?uKx(Vl_Au{vL*`KGvQha%>$a<$J%NT)#Tz>6!>Iw5raZeLw?VJuxPUsj^s=^ZqK{3~fdBaUfIcMYt4m5|B zlQ@z5GfPtl8?_;`r8ZUgoeTAIVPi0USy~_`$qg4h>!+Q_Gl*OW5&%mX1-b^LB^a2b z-MbkpB|L0qmOMajpt0|lQyrWsd!I5fYeHLn{X3R>yn*+q(Ywy>i;f=b-}z!VO4P!8 z9wtZMA`73XM#@h@%^Lyks_Q$qnQR>(oS!8P$eGLosSC{Zp95b~Zth4z&a)ve?e%>c zO#BH`U;Unk9zLuJYzfp!a(ucJ?x)M}Q7zMi$8as){94Jn^DLfN!9@N>xQeWa!ZzIj z;}h7rDs=ymmZ*L3QKu#Yl2Z?fX@}*@O-aTfd&D&`Mds}79Sy;P#zSFANw9UiKeJh2 z_PolE7Z7C>$GK!OX_&#N9~V4Phmh$~iN{=nDQ*FSVj^F5f(^0H=ut}uJtGK7j7l6F z9iKrVh(@sbj@X--3@eOH7 zNS%m> z8fPnx#>W22(W}Ph+7O~T)txg}k)Ugcxh*MapHda;KCf{BWL#1ll`F6BIC$p=#!U5R zTU#w%nqZ9CZ$R#P{qnz;?mc>F?>Xr%Dlsnfi3Fw{xd;P-b3+wbDc3 zbHj0iKUlNuBZ*t;PNudz0?*kP3x5WqE$frgFQ+00WjuTT(tSh2OMHBM3b<88&=sAN zg;-n>I^Z|=FNjV3n;0tzL>FNIGEIeWa2;I0J&)l4GhPwn*;=z_|Gv$`erq+Rj15$G z+uy%0A@^Xz7JM9MUiKZKd3C5CX&PJI6cUnxg0GRgm-f^)`Bb|wI{T;J+%BP*@gho! 
z07^^gBn1V9K$;*F$wR4*`W$ag0Xs-vlz&cFUK!1YwENm7{l%q7N$8Egvg5rD8CT!( z>KMFTguxqjdJB(ClY?hBx3N*`QChlusEy(Uhg$(LHum;q^qF)Ud&ON|i@dyiK&xhv z$6q=61mwyTD`5eNlP$YmBh)aG?4Cb_kra6Zv60 zI!v>Bx_&X4Zs5f0iF_RJ_1);aG@#uW7@L^erm_#;KcQ}O0m&UfvO~cKwmLk?O#kQ@ z1;iQOy71@M{+}9U{LcYyjKmV6#j6a&;zam>!YIP!^yhz`pV3JV#zIU{T|JW0%uh6P zTU=x5H}X6ku5KE~WY9j2>$lB1#%+lDltCy9E?$GA-ag)b^X&CJB*y*+lt+!Dk~SBk z6XDoH?W8?^^}5+B%nW-W(N}{tL|NlxXS-z!(|(s8sis!(SQqhSnrp)BBhMa@&ZSd5 z`TY4xB$@7z+LZ{8^xyr-y>IbcmQw~goSLeAl+$PhbZh#c`!pN}Z)Eozvp-b>px_K% z3x7G&s_N0BoSVr(__-dKmA1dAW-(y<0l{B+*@8Mq8nTU!)%>(|do} z!QzNB-za)86A0RIBuTb`8G{uRs24^-oD%2$*IaMMb<-?eVpdnT$z@bX!dHcvd*MQ>x41z zFHaGrHXN_3YZouwMMw(|b*q^hp`y22$Ve{adMEZXoHgtC>z0;xO{+#Y_RNw<`mYzg z0EF?**-furl{*|fARJ=dA&XhkoljZRS*7`yMdh8R!QSA}*0bp_3>R8A;0N(-)BNtIe5!>Zh7-OWY&$1TVUlMfX?Vt0A-i+NauXpK4&R z4oPhu&Fd?9wkSExlD; zy}#!=OFejF^NSaCJKjuG3Vi?{IAZ&P{V$`!0u`!nbyLnFg)O8wrE9QU*X{LCWOn05 zXgk!=)9bWhLpT)*`evXs@=Mb#39yD3ERh8Y!*~edug5SlWxIyJhH8u%)5qn4ir@wV zR>OJqm@QpJek~|ixD(AEemFclN)iZyY2ummIQW=-k9&&w&;GM_lvn$yh2h}t-_yO# z57#(ZIXP3p5s1Lq(^C$0PD)nqvJ-ZFP+gtpceCc9PR-`06B5OrVKOlyrkzDq$JYy)l~Pi|#i(PC02gcE4J)zg8~gN6xZQShO;Vs|=wg zSU&%}a|;9Xr%!*`yohvNQUG|m%Slb%F3mMV)dca1kvNksx9BpQF{7#HYg>;G{Tonv z8sB{@lM)@=(Vwzxab9CxalVj#doaddAJm>$dBXxpz05p@nQlz^*#Q zN5GmwqW?x_U{+q+I#6^@+`k`^m8DNTM0uIqciv^(R6G6WK3_h$o}A;knF+LkH*E=@^JKO!~;YjJ<6s9Q{}2)}Z1x*sWxl z@L(p%HKSvJB1#BOsXXe^d41P=csN1tuwyl^Ma=-rFjcS{&U^OIn7q8%5SEXg>(s<0 zTZihG)MKi49df9F(l4zMK`c=+tHx)~wCp=XA06rRuQ@qJklIqq)gB5w(NNfjtx(Le zf~1j1$TTsNl)C8r26Ao=v?5;hqB?g~{ut0$RH&#{VgR>5Sj#h*ao2cRz0QOQ0(0QT zCzEO?&x>NYeI7k&T5u+kFK%nsxz<%2Gz&31!v)>3rH$@q4r7Pc9M&Xub^h92NUnTxZu(2 ze!sK$sDrMk@q?5dTI$(ohv8bcEJ{!ExJqfu6#n1Y#ie>|K-UW7cDmy8d|E{+L61ExHvYmee5$Mb34X(`g7YJ7UP0xzFvLHGGPdRY0{0 z-@Du6Ls0T}EJ?D?VMYVY)Xdk)XsCxj@!~eobQM3coGL78sgcX6g4Q&me-||S z;$%DFv5jl8nnzy^&7fSVza>BIK)bt;5zf4=^d zs&In`JOmHgqoAds)_W~(>(|R$O_;!M)=>x0oHE7c)nr*uWxKvX3TBqOL^K>@5v9zX z>L@ZmFh9Ew_dH^NLMkKh0^!_gGGYV#Z zqYKj)#BLU1Dy!N1D4u6km2WhY{qN1+?Ppgrw`6(wwYi{gV2bbq6px;}jd2e&lE`t_ zW=Bra>tUl9vu!1LpPp~K6IU)Sb*hQEi7XEbr%-G%b+|e;CO#;w%8RS}7G|6W$QSpm zg{g@+^<%f3W_saYf2lW>Mi2j=cgKm!thlbYJJv8ISQGI)rXBdONSrYNlh#b!j2oLf zW%U81w2J@Z<~#`l?eE@kzcyBEP?Be!LyW~7^+2v~wLQCpQ2pPd7i%}hz;sQHoTl^_ zT4$>8HtoHS>H$^J7C{QVf4?KK|MzB=wm!|3>PPyttAD2yT7`nw|GM3vOj%OLf;%Io zgN-yJU&84*{L>slZ<@Tiyy8BHX67p<^jw{2pXWF}NkK}ar9rg}l^POh$A|}&n5fl7gHP{k z4N|Cn4%$^FJ^roAS;4IDTCca#>Fsexz}5zNS)W(4b%it={L;yDr2Zr}`!R@ZZ`H+k*eQ tuy?Wj_a7{sVG;j->HJ^&>((Brs+)sVJk5(=&{-?_(R|U|^JY%F{~yYbhOPhr literal 0 HcmV?d00001 diff --git a/modules/nw-ptp-holdover-in-a-grandmaster-clock.adoc b/modules/nw-ptp-holdover-in-a-grandmaster-clock.adoc index 678742d31af5..237325a06432 100644 --- a/modules/nw-ptp-holdover-in-a-grandmaster-clock.adoc +++ b/modules/nw-ptp-holdover-in-a-grandmaster-clock.adoc @@ -6,7 +6,7 @@ [id="holdover-in-a-grandmaster-clock_{context}"] = Holdover in a grandmaster clock with GNSS as the source -Holdover allows the grandmaster (T-GM) clock to maintain synchronization performance when the GNSS source is unavailable. During this period, the T-GM clock relies on its internal oscillator and holdover parameters to reduce timing disruptions. +Holdover allows the grandmaster (T-GM) clock to maintain synchronization performance when the global navigation satellite system (GNSS) source is unavailable. During this period, the T-GM clock relies on its internal oscillator and holdover parameters to reduce timing disruptions. 
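
The parameters that control this behavior are set in the `PTPConfig` CR, as described next. The following is a minimal sketch of where such settings can live, assuming a T-GM profile that uses the `e810` plugin; treat the plugin keys, the profile name, and the numeric values as illustrative assumptions only:

[source,yaml]
----
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: grandmaster          # hypothetical profile name
  namespace: openshift-ptp
spec:
  profile:
  - name: grandmaster
    plugins:
      e810:
        settings:
          LocalHoldoverTimeout: 14400   # assumed: seconds the T-GM can remain in HOLDOVER
          LocalMaxHoldoverOffSet: 1500  # assumed: maximum holdover offset in nanoseconds
          MaxInSpecOffset: 1500         # assumed: in-spec threshold in nanoseconds
----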
You can define the holdover behavior by configuring the following holdover parameters in the `PTPConfig` custom resource (CR):
@@ -37,5 +37,22 @@ The T-GM clock reaches the maximum offset in 60 seconds.
 
 [NOTE]
 ====
-The phase offset is converted from picoseconds to nanoseconds. As a result, the calculated phase offset during holdover is expressed in nanoseconds per second, and the resulting slope is measured in nanoseconds per second.
+The phase offset is converted from picoseconds to nanoseconds. As a result, the calculated phase offset during holdover is expressed in nanoseconds, and the resulting slope is expressed in nanoseconds per second.
 ====
+
+The following figure illustrates the holdover behavior in a T-GM clock with GNSS as the source:
+
+.Holdover in a T-GM clock with GNSS as the source
+image::holdover_in_t_gm.png[Holdover in a T-GM clock with GNSS as the source]
+
+image:darkcircle-1.png[20,20] The GNSS signal is lost, causing the T-GM clock to enter the `HOLDOVER` mode. The T-GM clock maintains time accuracy using its internal clock.
+
+image:darkcircle-2.png[20,20] The GNSS signal is restored. However, the T-GM clock re-enters the `LOCKED` mode only after all dependent components in the synchronization chain, such as the `ts2phc` offset, the digital phase-locked loop (DPLL) phase offset, and the GNSS offset, reach a stable `LOCKED` mode.
+
+image:darkcircle-3.png[20,20] The GNSS signal is lost again, and the T-GM clock re-enters the `HOLDOVER` mode. The time error begins to increase.
+
+image:darkcircle-4.png[20,20] The time error exceeds the `MaxInSpecOffset` threshold due to prolonged loss of traceability.
+
+image:darkcircle-5.png[20,20] The GNSS signal is restored, and the T-GM clock resumes synchronization. The time error starts to decrease.
+
+image:darkcircle-6.png[20,20] The time error decreases and falls back within the `MaxInSpecOffset` threshold.
\ No newline at end of file
From cc6634f6f008fd2c4882c68eceb8948eadd388aa Mon Sep 17 00:00:00 2001
From: Jeana Routh
Date: Thu, 9 Jan 2025 15:42:50 -0500
Subject: [PATCH 302/669] OSDOCS-12868: multi-NIC support in vSphere FDs

---
 .../cpmso-config-options-vsphere.adoc         |   4 +
 .../creating-machineset-vsphere.adoc          |   5 +-
 .../cpmso-yaml-failure-domain-vsphere.adoc    |   4 +-
 modules/machineset-vsphere-multiple-nics.adoc | 150 ++++++++++++++++++
 4 files changed, 160 insertions(+), 3 deletions(-)
 create mode 100644 modules/machineset-vsphere-multiple-nics.adoc

diff --git a/machine_management/control_plane_machine_management/cpmso_provider_configurations/cpmso-config-options-vsphere.adoc b/machine_management/control_plane_machine_management/cpmso_provider_configurations/cpmso-config-options-vsphere.adoc
index 18711ea6a0f1..ac825a4d9160 100644
--- a/machine_management/control_plane_machine_management/cpmso_provider_configurations/cpmso-config-options-vsphere.adoc
+++ b/machine_management/control_plane_machine_management/cpmso_provider_configurations/cpmso-config-options-vsphere.adoc
@@ -31,3 +31,7 @@ You can enable features by updating values in the control plane machine set. 
//Adding tags to machines by using machine sets include::modules/machine-api-vmw-add-tags.adoc[leveloffset=+2,tag=!compute] + +//Configuring multiple NICs by using machine sets +//pulled from 4.18 GA +//include::modules/machineset-vsphere-multiple-nics.adoc[leveloffset=+1,tag=!compute] \ No newline at end of file diff --git a/machine_management/creating_machinesets/creating-machineset-vsphere.adoc b/machine_management/creating_machinesets/creating-machineset-vsphere.adoc index 12424858cf1d..9b9971c0d76e 100644 --- a/machine_management/creating_machinesets/creating-machineset-vsphere.adoc +++ b/machine_management/creating_machinesets/creating-machineset-vsphere.adoc @@ -46,4 +46,7 @@ include::modules/machineset-label-gpu-autoscaler.adoc[leveloffset=+1] * xref:../../machine_management/applying-autoscaling.adoc#cluster-autoscaler-cr_applying-autoscaling[Cluster autoscaler resource definition] //Adding tags to machines by using machine sets -include::modules/machine-api-vmw-add-tags.adoc[leveloffset=+1,tag=!controlplane] \ No newline at end of file +include::modules/machine-api-vmw-add-tags.adoc[leveloffset=+1,tag=!controlplane] + +//Configuring multiple NICs by using machine sets +include::modules/machineset-vsphere-multiple-nics.adoc[leveloffset=+1,tag=!controlplane] \ No newline at end of file diff --git a/modules/cpmso-yaml-failure-domain-vsphere.adoc b/modules/cpmso-yaml-failure-domain-vsphere.adoc index 02b3c87f5d76..ece36c79abb5 100644 --- a/modules/cpmso-yaml-failure-domain-vsphere.adoc +++ b/modules/cpmso-yaml-failure-domain-vsphere.adoc @@ -34,8 +34,8 @@ spec: failureDomains: # <1> platform: VSphere vsphere: # <2> - - name: - - name: + - name: + - name: # ... ---- <1> Specifies the vCenter location for {product-title} cluster nodes. diff --git a/modules/machineset-vsphere-multiple-nics.adoc b/modules/machineset-vsphere-multiple-nics.adoc new file mode 100644 index 000000000000..ac8775f3ccb3 --- /dev/null +++ b/modules/machineset-vsphere-multiple-nics.adoc @@ -0,0 +1,150 @@ + +// Module included in the following assemblies: +// +// * machine_management/creating_machinesets/creating-machineset-vsphere.adoc +// * machine_management/control_plane_machine_management/cpmso_provider_configurations/cpmso-config-options-vsphere.adoc + +:_mod-docs-content-type: PROCEDURE +[id="machineset-vsphere-multiple-nics_{context}"] += Configuring multiple network interface controllers by using machine sets + +{product-title} clusters on {vmw-first} support connecting up to 10 network interface controllers (NICs) to a node. +By configuring multiple NICs, you can provide dedicated network links in the node virtual machines (VMs) for uses such as storage or databases. + +You can use machine sets to manage this configuration. + +* If you want to use multiple NICs in a {vmw-short} cluster that was not configured to do so during installation, you can use machine sets to implement this configuration. +* If your cluster was set up during installation to use multiple NICs, machine sets that you create can use your existing failure domain configuration. +* If your failure domain configuration changes, you can use machine sets to make updates that reflect those changes. + +tag::controlplane[] +[NOTE] +==== +This feature is not compatible with a control plane machine set that uses more than one failure domain. 
+==== +end::controlplane[] + +:FeatureName: Configuring multiple NICs +include::snippets/technology-preview.adoc[] + +.Prerequisites + +* You have administrator access to {oc-first} for an {product-title} cluster on {vmw-short}. + +.Procedure + +. For a cluster that already uses multiple NICs, obtain the following values from the `Infrastructure` resource by running the following command: ++ +[source,terminal] +---- +$ oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureDomains} +---- ++ +.Required network interface controller values +|=== +|`Infrastructure` resource value | Placeholder value for sample machine set | Description + +|`failureDomain.topology.networks[0]` +|`` +|The name of the first NIC to use. + +|`failureDomain.topology.networks[1]` +|`` +|The name of the second NIC to use. + +|`failureDomain.topology.networks[]` +|`` +|The name of the __n__th NIC to use. +Collect the name of each NIC in the `Infrastructure` resource. + +|`failureDomain.topology.template` +|`` +|The {vmw-short} VM template to use. + +|`failureDomain.topology.datacenter` +|`` +|The vCenter data center to deploy the machine set on. + +|`failureDomain.topology.datastore` +|`` +|The vCenter datastore to deploy the machine set on. + +|`failureDomain.topology.folder` +|`` +|The path to the {vmw-short} VM folder in vCenter, such as `/dc1/vm/user-inst-5ddjd`. + +|`failureDomain.topology.computeCluster` + `/Resources` +|`` +|The {vmw-short} resource pool for your VMs. + +|`failureDomain.server` +|`` +|The vCenter server IP or fully qualified domain name (FQDN). +|=== + +. In a text editor, open the YAML file for an existing machine set or create a new one. + +. Use a machine set configuration formatted like the following example. ++ +-- +* For a cluster that currently uses multiple NICs, use the values from the `Infrastructure` resource to populate the values in the machine set custom resource. +* For a cluster that is not using multiple NICs, populate the values you want to use in the machine set custom resource. +-- ++ +.Sample machine set +[source,yaml] +---- +tag::compute[] +apiVersion: machine.openshift.io/v1beta1 +kind: MachineSet +# ... +spec: + template: + spec: + providerSpec: + value: + network: + devices: # <1> + - networkName: "" + - networkName: "" + template: # <2> + workspace: + datacenter: # <3> + datastore: # <4> + folder: # <5> + resourcepool: # <6> + server: # <7> +# ... +end::compute[] +tag::controlplane[] +apiVersion: machine.openshift.io/v1 +kind: ControlPlaneMachineSet +# ... +spec: + template: + machines_v1beta1_machine_openshift_io: + spec: + providerSpec: + value: + network: + devices: # <1> + - networkName: "" + - networkName: "" + template: # <2> + workspace: + datacenter: # <3> + datastore: # <4> + folder: # <5> + resourcepool: # <6> + server: # <7> + +# ... +end::controlplane[] +---- +<1> Specify a list of up to 10 NICs to use. +<2> Specify the {vmw-short} VM template to use, such as `user-5ddjd-rhcos`. +<3> Specify the vCenter data center to deploy the machine set on. +<4> Specify the vCenter datastore to deploy the machine set on. +<5> Specify the path to the {vmw-short} VM folder in vCenter, such as `/dc1/vm/user-inst-5ddjd`. +<6> Specify the {vmw-short} resource pool for your VMs. +<7> Specify the vCenter server IP or fully qualified domain name (FQDN). 
\ No newline at end of file From f5d12374c6eeedf28229924851baef190363e6d3 Mon Sep 17 00:00:00 2001 From: Audrey Spaulding Date: Tue, 18 Feb 2025 13:02:31 -0500 Subject: [PATCH 303/669] predefined table update --- modules/virt-common-instancetypes.adoc | 51 ++++++++++---------------- 1 file changed, 20 insertions(+), 31 deletions(-) diff --git a/modules/virt-common-instancetypes.adoc b/modules/virt-common-instancetypes.adoc index 93976e6b595b..93f6f61f2636 100644 --- a/modules/virt-common-instancetypes.adoc +++ b/modules/virt-common-instancetypes.adoc @@ -16,14 +16,17 @@ These instance type resources are named according to their series, version, and |=== ^.^|Use case ^.^|Series ^.^|Characteristics ^.^|vCPU to memory ratio ^.^|Example resource -^.^|Universal -^.^|U +^.^|Network +^.^|N a| -* Burstable CPU performance -^.^|1:4 -.^a|`u1.medium`:: -* 1 vCPUs -* 4 Gi memory +* Hugepages +* Dedicated CPU +* Isolated emulator threads +* Requires nodes capable of running DPDK workloads +^.^|1:2 +.^a|`n1.medium`:: +* 4 vCPUs +* 4GiB Memory ^.^|Overcommitted ^.^|O @@ -33,9 +36,9 @@ a| ^.^|1:4 .^a|`o1.small`:: * 1 vCPU -* 2Gi memory +* 2GiB Memory -^.^|Compute-exclusive +^.^|Compute Exclusive ^.^|CX a| * Hugepages @@ -45,20 +48,18 @@ a| ^.^|1:2 .^a|`cx1.2xlarge`:: * 8 vCPUs -* 16Gi memory +* 16GiB Memory -^.^|NVIDIA GPU -^.^|GN +^.^|General Purpose +^.^|U a| -* For VMs that use GPUs provided by the NVIDIA GPU Operator -* Has predefined GPUs * Burstable CPU performance ^.^|1:4 -.^a|`gn1.8xlarge`:: -* 32 vCPUs -* 128Gi memory +.^a|`u1.medium`:: +* 1 vCPU +* 4GiB Memory -^.^|Memory-intensive +^.^|Memory Intensive ^.^|M a| * Hugepages @@ -66,17 +67,5 @@ a| ^.^|1:8 .^a|`m1.large`:: * 2 vCPUs -* 16Gi memory - -^.^|Network-intensive -^.^|N -a| -* Hugepages -* Dedicated CPU -* Isolated emulator threads -* Requires nodes capable of running DPDK workloads -^.^|1:2 -.^a|`n1.medium`:: -* 4 vCPUs -* 4Gi memory +* 16GiB Memory |=== \ No newline at end of file From 8b685ebbc342c39054f67b50ba6081d5042c15d1 Mon Sep 17 00:00:00 2001 From: subhtk Date: Tue, 18 Feb 2025 11:31:54 +0530 Subject: [PATCH 304/669] Added a known issue in the delete feature about container registry --- .../about-installing-oc-mirror-v2.adoc | 7 +++ ...mirror-v2-procedure-garbage-collector.adoc | 53 +++++++++++++++++++ modules/oc-mirror-workflows-delete-v2.adoc | 5 +- 3 files changed, 64 insertions(+), 1 deletion(-) create mode 100644 modules/oc-mirror-v2-procedure-garbage-collector.adoc diff --git a/disconnected/mirroring/about-installing-oc-mirror-v2.adoc b/disconnected/mirroring/about-installing-oc-mirror-v2.adoc index 7c7fe06a6183..1c21fcb5ad23 100644 --- a/disconnected/mirroring/about-installing-oc-mirror-v2.adoc +++ b/disconnected/mirroring/about-installing-oc-mirror-v2.adoc @@ -89,6 +89,12 @@ After your cluster is configured to use the resources generated by oc-mirror plu // workflows of delete feature include::modules/oc-mirror-workflows-delete-v2.adoc[leveloffset=+1] +[role="_additional-resources"] +.Additional resources +* xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#oc-mirror-v2-procedure-garbage-collector_about-installing-oc-mirror-v2[Resolving storage cleanup issues in the distribution registry] + +include::modules/oc-mirror-v2-procedure-garbage-collector.adoc[leveloffset=+2] + // procedure for delete feature include::modules/oc-mirror-procedure-delete-v2.adoc[leveloffset=+2] @@ -113,6 +119,7 @@ include::modules/oc-mirror-proxy-support.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources * 
xref:../../disconnected/updating/disconnected-update-osus.adoc#updating-disconnected-cluster-osus[Updating a cluster in a disconnected environment using the OpenShift Update Service]
+* xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#oc-mirror-v2-procedure-garbage-collector_about-installing-oc-mirror-v2[Resolving storage cleanup issues in the distribution registry]
 
 //operator catalog filtering
 include::modules/oc-mirror-operator-catalog-filtering.adoc[leveloffset=+1]
diff --git a/modules/oc-mirror-v2-procedure-garbage-collector.adoc b/modules/oc-mirror-v2-procedure-garbage-collector.adoc
new file mode 100644
index 000000000000..34fc273d5153
--- /dev/null
+++ b/modules/oc-mirror-v2-procedure-garbage-collector.adoc
@@ -0,0 +1,53 @@
+// Module included in the following assemblies:
+//
+// * disconnected/mirroring/about-installing-oc-mirror-v2.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="oc-mirror-v2-procedure-garbage-collector_{context}"]
+= Resolving storage cleanup issues in the distribution registry
+
+A known issue in the distribution registry prevents the garbage collector from freeing up storage as expected. This issue does not occur when you use {quay}.
+
+.Procedure
+
+* Choose the appropriate method to work around the known issue in the distribution registry:
+
+** To restart the container registry, run the following command:
++
+[source,terminal]
+----
+$ podman restart <container_name>
+----
+
+** To disable caching in the registry configuration, perform the following steps:
+
+... To disable the `blobdescriptor` cache, modify the `/etc/docker/registry/config.yml` file:
++
+[source,yaml]
+----
+version: 0.1
+log:
+  fields:
+    service: registry
+storage:
+  cache:
+    blobdescriptor: ""
+  filesystem:
+    rootdirectory: /var/lib/registry
+http:
+  addr: :5000
+  headers:
+    X-Content-Type-Options: [nosniff]
+health:
+  storagedriver:
+    enabled: true
+    interval: 10s
+    threshold: 3
+----
+
+... To apply the changes, restart the container registry by running the following command:
++
+[source,terminal]
+----
+$ podman restart <container_name>
+----
diff --git a/modules/oc-mirror-workflows-delete-v2.adoc b/modules/oc-mirror-workflows-delete-v2.adoc
index 8329a7c7a189..9d3893a4ec3a 100644
--- a/modules/oc-mirror-workflows-delete-v2.adoc
+++ b/modules/oc-mirror-workflows-delete-v2.adoc
@@ -52,7 +52,10 @@ Consider using the mirror-to-disk and disk-to-mirror workflows to reduce mirrori
 
 oc-mirror plugin v2 deletes only the manifests of the images, which does not reduce the storage occupied in the registry.
 
-To free up storage space from unnecessary images, such as those with deleted manifests, you must enable the garbage collector on your container registry. With the garbage collector enabled, the registry will delete the image blobs that no longer have references to any manifests, reducing the storage previously occupied by the deleted blobs. The process for enabling the garbage collector differs depending on your container registry.
+To free up storage space from unnecessary images, such as those with deleted manifests, you must enable the garbage collector on your container registry. With the garbage collector enabled, the registry will delete the image blobs that no longer have references to any manifests, reducing the storage previously occupied by the deleted blobs. The process for enabling the garbage collector differs depending on your container registry.
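
For the distribution registry, garbage collection is typically triggered with the registry binary's `garbage-collect` subcommand, run against the registry configuration file. A minimal sketch, assuming the registry runs in a container with the hypothetical name `registry` and uses the configuration path shown earlier in this module:

[source,terminal]
----
$ podman exec registry /bin/registry garbage-collect --dry-run /etc/docker/registry/config.yml
$ podman exec registry /bin/registry garbage-collect /etc/docker/registry/config.yml
----

Run the `--dry-run` variant first to review which blobs would be removed before deleting anything.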
+ [IMPORTANT] ==== From 3bb5a8b1db6b14f2d358f6a4f4d41550e8916082 Mon Sep 17 00:00:00 2001 From: Kevin Owen Date: Tue, 18 Feb 2025 10:34:26 -0500 Subject: [PATCH 305/669] OSDOCS-13398: Expand etcd quorum restoration docs --- .../disaster_recovery/quorum-restoration.adoc | 11 +- modules/dr-restoring-cluster-state-sno.adoc | 7 +- modules/dr-restoring-cluster-state.adoc | 5 + modules/dr-restoring-etcd-quorum-ha.adoc | 117 +++++++++++++++++- 4 files changed, 134 insertions(+), 6 deletions(-) diff --git a/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/quorum-restoration.adoc b/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/quorum-restoration.adoc index 2f8c6ef95073..890d322f85a4 100644 --- a/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/quorum-restoration.adoc +++ b/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/quorum-restoration.adoc @@ -6,7 +6,14 @@ include::_attributes/common-attributes.adoc[] toc::[] -You can use the `quorum-restore.sh` script to restore etcd quorum on clusters that are offline due to quorum loss. +You can use the `quorum-restore.sh` script to restore etcd quorum on clusters that are offline due to quorum loss. When quorum is lost, the {product-title} API becomes read-only. After quorum is restored, the {product-title} API returns to read/write mode. // Restoring etcd quorum for high availability clusters -include::modules/dr-restoring-etcd-quorum-ha.adoc[leveloffset=+1] \ No newline at end of file +include::modules/dr-restoring-etcd-quorum-ha.adoc[leveloffset=+1] + +[role="_additional-resources"] +[id="additional-resources_dr-quorum-restoration"] +== Additional resources + +* xref:../../../installing/installing_bare_metal/upi/installing-bare-metal.adoc#installing-bare-metal[Installing a user-provisioned cluster on bare metal] +* xref:../../../installing/installing_bare_metal/ipi/ipi-install-expanding-the-cluster.adoc#replacing-a-bare-metal-control-plane-node_ipi-install-expanding[Replacing a bare-metal control plane node] \ No newline at end of file diff --git a/modules/dr-restoring-cluster-state-sno.adoc b/modules/dr-restoring-cluster-state-sno.adoc index ae59b40ab827..ad348c9010c1 100644 --- a/modules/dr-restoring-cluster-state-sno.adoc +++ b/modules/dr-restoring-cluster-state-sno.adoc @@ -42,4 +42,9 @@ $ sudo -E /usr/local/bin/cluster-restore.sh /home/core/ [source,terminal] ---- $ oc adm wait-for-stable-cluster ----- \ No newline at end of file +---- ++ +[NOTE] +==== +It can take up to 15 minutes for the control plane to recover. +==== \ No newline at end of file diff --git a/modules/dr-restoring-cluster-state.adoc b/modules/dr-restoring-cluster-state.adoc index d9b869c284e8..96ec782f3f69 100644 --- a/modules/dr-restoring-cluster-state.adoc +++ b/modules/dr-restoring-cluster-state.adoc @@ -88,6 +88,11 @@ $ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": ---- $ oc adm wait-for-stable-cluster ---- ++ +[NOTE] +==== +It can take up to 15 minutes for the control plane to recover. +==== . 
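
Because recovery can take this long, it can help to bound the wait explicitly. A sketch, assuming that your `oc` version supports these flags:

[source,terminal]
----
$ oc adm wait-for-stable-cluster --minimum-stable-period=2m --timeout=30m
----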
Once recovered, enable the quorum guard by running the following command: + diff --git a/modules/dr-restoring-etcd-quorum-ha.adoc b/modules/dr-restoring-etcd-quorum-ha.adoc index 89e17e6ff5e6..bdac16b83917 100644 --- a/modules/dr-restoring-etcd-quorum-ha.adoc +++ b/modules/dr-restoring-etcd-quorum-ha.adoc @@ -13,6 +13,11 @@ You can use the `quorum-restore.sh` script to instantly bring back a new single- You might experience data loss if the host that runs the restoration does not have all data replicated to it. ==== +[IMPORTANT] +==== +Quorum restoration should not be used to decrease the number of nodes outside of the restoration process. Decreasing the number of nodes results in an unsupported cluster configuration. +==== + .Prerequisites * You have SSH access to the node used to restore quorum. @@ -21,26 +26,132 @@ You might experience data loss if the host that runs the restoration does not ha . Select a control plane host to use as the recovery host. You run the restore operation on this host. -. Using SSH, connect to the chosen recovery node and run the following command to restore etcd quorum: +.. List the running etcd pods by running the following command: ++ +[source,terminal] +---- +$ oc get pods -n openshift-etcd -l app=etcd --field-selector="status.phase==Running" +---- + +.. Choose a pod and run the following command to obtain its IP address: ++ +[source,terminal] +---- +$ oc exec -n openshift-etcd -c etcdctl -- etcdctl endpoint status -w table +---- ++ +Note the IP address of a member that is not a learner and has the highest Raft index. + +.. Run the following command and note the node name that corresponds to the IP address of the chosen etcd member: ++ +[source,terminal] +---- +$ oc get nodes -o jsonpath='{range .items[*]}[{.metadata.name},{.status.addresses[?(@.type=="InternalIP")].address}]{end}' +---- + +. Using SSH, connect to the chosen recovery node and run the following command to restore etcd quorum: + [source,terminal] ---- $ sudo -E /usr/local/bin/quorum-restore.sh ---- ++ +After a few minutes, the nodes that went down are automatically synchronized with the node that the recovery script was run on. Any remaining online nodes automatically rejoin the new etcd cluster created by the `quorum-restore.sh` script. This process takes a few minutes. . Exit the SSH session. +. Return to a three-node configuration if any nodes are offline. Repeat the following steps for each node that is offline to delete and re-create them. After the machines are re-created, a new revision is forced and etcd automatically scales up. ++ +** If you use a user-provisioned bare-metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal". ++ +[WARNING] +==== +Do not delete and re-create the machine for the recovery host. +==== ++ +** If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps: ++ +[WARNING] +==== +Do not delete and re-create the machine for the recovery host. + +For bare-metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node". +==== + +.. Obtain the machine for one of the offline nodes. 
++ +In a terminal that has access to the cluster as a `cluster-admin` user, run the following command: ++ +[source,terminal] +---- +$ oc get machines -n openshift-machine-api -o wide +---- ++ +.Example output: ++ +[source,terminal] +---- +NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE +clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped <1> +clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running +clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running +clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running +clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running +clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running +---- +<1> This is the control plane machine for the offline node, `ip-10-0-131-183.ec2.internal`. + +.. Delete the machine of the offline node by running: ++ +[source,terminal] +---- +$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 <1> +---- +<1> Specify the name of the control plane machine for the offline node. ++ +A new machine is automatically provisioned after deleting the machine of the offline node. + +. Verify that a new machine has been created by running: ++ +[source,terminal] +---- +$ oc get machines -n openshift-machine-api -o wide +---- ++ +.Example output: ++ +[source,terminal] +---- +NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE +clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running +clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running +clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running <1> +clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running +clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running +clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running +---- +<1> The new machine, `clustername-8qw5l-master-3` is being created and is ready after the phase changes from `Provisioning` to `Running`. ++ +It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically synchronize when the machine or node returns to a healthy state. + +.. Repeat these steps for each node that is offline. + . Wait until the control plane recovers by running the following command: + [source,terminal] ---- $ oc adm wait-for-stable-cluster ---- ++ +[NOTE] +==== +It can take up to 15 minutes for the control plane to recover. 
+==== .Troubleshooting -If you see no progress rolling out the etcd static pods, you can force redeployment from the `cluster-etcd-operator` pod by running the following command: - +* If you see no progress rolling out the etcd static pods, you can force redeployment from the etcd cluster Operator by running the following command: ++ [source,terminal] ---- $ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$(date --rfc-3339=ns )"'"}}' --type=merge From bd9774e0fd13b69342951159fce389ec39148ae9 Mon Sep 17 00:00:00 2001 From: Steven Smith Date: Wed, 19 Feb 2025 15:45:01 -0500 Subject: [PATCH 306/669] Slightly reorganizes UDN/CUDN status conditions --- modules/cudn-status-conditions.adoc | 10 ++++------ .../primary_networks/about-user-defined-networks.adoc | 6 +++--- 2 files changed, 7 insertions(+), 9 deletions(-) diff --git a/modules/cudn-status-conditions.adoc b/modules/cudn-status-conditions.adoc index e059b8769bf6..943840f36b4b 100644 --- a/modules/cudn-status-conditions.adoc +++ b/modules/cudn-status-conditions.adoc @@ -4,13 +4,11 @@ :_mod-docs-content-type: REFERENCE [id="cudn-status-conditions_{context}"] -= ClusterUserDefinedNetwork status condition types += User-defined network status condition types -The following tables explain the status condition types returned for `ClusterUserDefinedNetwork` CRs when describing the resource. These conditions can be used to troubleshoot your deployment. +The following tables explain the status condition types returned for `ClusterUserDefinedNetwork` and `UserDefinedNetwork` CRs when describing the resource. These conditions can be used to troubleshoot your deployment. -//The following table is subject to change and will be updated accordingly. - -.NetworkCreated condition types +.NetworkCreated condition types (`ClusterDefinedNetwork` and `UserDefinedNetwork` CRs) [cols="2a,2a,3a,6a",options="header"] |=== @@ -55,7 +53,7 @@ h|Message |`NetworkAttachmentDefinition is being deleted: [/]` |=== -.NetworkAllocationSucceeded condition types +.NetworkAllocationSucceeded condition types (`UserDefinedNetwork` CRs) [cols="2a,2a,3a,6a",options="header"] |=== diff --git a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc index d844ca6266a2..128b1658acf8 100644 --- a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc +++ b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc @@ -44,9 +44,6 @@ include::modules/nw-cudn-cr.adoc[leveloffset=+1] .Additional resources * xref:../../../networking/multiple_networks/secondary_networks/creating-secondary-nwt-ovnk.adoc#configuring-pods-static-ip_configuring-additional-network-ovnk[Configuring pods with a static IP address] -//CUDN status conditions -include::modules/cudn-status-conditions.adoc[leveloffset=+2] - //Best practices for using UDN. 
include::modules/nw-udn-best-practices.adoc[leveloffset=+1] @@ -56,6 +53,9 @@ include::modules/nw-udn-cr.adoc[leveloffset=+1] //Explanation of optional config details include::modules/nw-udn-additional-config-details.adoc[leveloffset=+1] +//UDN/CUDN status conditions +include::modules/cudn-status-conditions.adoc[leveloffset=+1] + include::modules/opening-default-network-ports-udn.adoc[leveloffset=+1] //Support matrix for UDN From 4fc9f4481ac1fd93b020f7446d011f7031809cdc Mon Sep 17 00:00:00 2001 From: Sebastian Kopacz Date: Wed, 19 Feb 2025 12:45:10 -0500 Subject: [PATCH 307/669] OSDOCS-11325: updating 4.18 RHEL update version numbers --- modules/rhel-compute-updating.adoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/rhel-compute-updating.adoc b/modules/rhel-compute-updating.adoc index eb1f170f5e28..50edd046d5e8 100644 --- a/modules/rhel-compute-updating.adoc +++ b/modules/rhel-compute-updating.adoc @@ -52,7 +52,7 @@ By default, the base OS RHEL with "Minimal" installation option enables firewall + [source,terminal,subs="attributes+"] ---- -# subscription-manager repos --disable=rhocp-4.16-for-rhel-8-x86_64-rpms \ +# subscription-manager repos --disable=rhocp-4.17-for-rhel-8-x86_64-rpms \ --enable=rhocp-{product-version}-for-rhel-8-x86_64-rpms ---- + @@ -72,7 +72,7 @@ As of {product-title} 4.11, the Ansible playbooks are provided only for {op-syst + [source,terminal,subs="attributes+"] ---- -# subscription-manager repos --disable=rhocp-4.16-for-rhel-8-x86_64-rpms \ +# subscription-manager repos --disable=rhocp-4.17-for-rhel-8-x86_64-rpms \ --enable=rhocp-{product-version}-for-rhel-8-x86_64-rpms ---- From 48b011aae1f21d9b3fa0389814fe80412ed79f9e Mon Sep 17 00:00:00 2001 From: subhtk Date: Wed, 29 Jan 2025 16:25:19 +0530 Subject: [PATCH 308/669] Added migration from v1 to v2 in docs --- _topic_maps/_topic_map.yml | 2 + .../oc-mirror-migration-v1-to-v2.adoc | 29 ++++ modules/oc-mirror-migration-differences.adoc | 69 ++++++++++ modules/oc-mirror-migration-process.adoc | 126 ++++++++++++++++++ modules/oc-mirror-workflows-delete-v2.adoc | 10 +- 5 files changed, 234 insertions(+), 2 deletions(-) create mode 100644 disconnected/mirroring/oc-mirror-migration-v1-to-v2.adoc create mode 100644 modules/oc-mirror-migration-differences.adoc create mode 100644 modules/oc-mirror-migration-process.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index baf9e5731d42..6e2bf9730ed6 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -130,6 +130,8 @@ Topics: File: installing-mirroring-disconnected - Name: Mirroring images for a disconnected installation using oc-mirror plugin v2 File: about-installing-oc-mirror-v2 + - Name: Migrating from oc-mirror plugin v1 to v2 + File: oc-mirror-migration-v1-to-v2 - Name: Installing a cluster in a disconnected environment File: installing - Name: Using OLM in disconnected environments diff --git a/disconnected/mirroring/oc-mirror-migration-v1-to-v2.adoc b/disconnected/mirroring/oc-mirror-migration-v1-to-v2.adoc new file mode 100644 index 000000000000..c0c5e9b6e7fd --- /dev/null +++ b/disconnected/mirroring/oc-mirror-migration-v1-to-v2.adoc @@ -0,0 +1,29 @@ +:_mod-docs-content-type: ASSEMBLY +[id="oc-mirror-migration-v1-to-v2"] += Migrating from oc-mirror plugin v1 to v2 +include::_attributes/common-attributes.adoc[] +:context: oc-mirror-migration-v1-to-v2 + +toc::[] + +The oc-mirror v2 plugin introduces major changes to image mirroring workflows. 
This guide provides step-by-step instructions for migrating your existing configurations and workflows to oc-mirror plugin v2.
+
+[IMPORTANT]
+====
+You must manually update the configurations by modifying the API version and removing deprecated fields. For more information, see "Changes from oc-mirror plugin v1 to v2".
+====
+
+// Differences between oc-mirror plugin v1 and v2
+include::modules/oc-mirror-migration-differences.adoc[leveloffset=+1]
+
+// Migrating from oc-mirror plugin v1 to oc-mirror plugin v2
+include::modules/oc-mirror-migration-process.adoc[leveloffset=+1]
+
+[role="_additional-resources"]
+[id="additional-resources_{context}"]
+== Additional resources
+
+* xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#oc-mirror-workflows-partially-disconnected-v2_about-installing-oc-mirror-v2[Mirroring an image set in a partially disconnected environment]
+* xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#oc-mirror-workflows-fully-disconnected-v2_about-installing-oc-mirror-v2[Mirroring an image set in a fully disconnected environment]
+* For details about configuration changes, see xref:../../disconnected/mirroring/oc-mirror-migration-v1-to-v2.adoc#oc-mirror-migration-differences_oc-mirror-migration-v1-to-v2[Changes from oc-mirror plugin v1 to v2].
+* For more information about deleting images, see xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#oc-mirror-procedure-delete-v2_about-installing-oc-mirror-v2[Deletion of images from your disconnected environment].
diff --git a/modules/oc-mirror-migration-differences.adoc b/modules/oc-mirror-migration-differences.adoc
new file mode 100644
index 000000000000..44c203d6ed81
--- /dev/null
+++ b/modules/oc-mirror-migration-differences.adoc
@@ -0,0 +1,69 @@
+// Module included in the following assemblies:
+//
+// * disconnected/mirroring/oc-mirror-migration-v1-to-v2.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="oc-mirror-migration-differences_{context}"]
+= Changes from oc-mirror plugin v1 to v2
+
+Before migrating from oc-mirror plugin v1 to v2, review the following differences:
+
+* Explicit version selection: You must explicitly specify the `--v2` flag when using `oc-mirror`. If no version is specified, v1 runs by default. This behavior is expected to change in future releases, where `--v2` will be the default.
+
+* Updated commands: Commands for mirroring workflows have changed to align with the oc-mirror plugin v2 workflow.
+
+** For mirror-to-disk, run the following command:
++
+[source,terminal]
+----
+$ oc-mirror --config isc.yaml file://<file_path> --v2
+----
+
+** For disk-to-mirror, run the following command:
++
+[source,terminal]
+----
+$ oc-mirror --config isc.yaml --from file://<file_path> docker://<remote_registry> --v2
+----
+
+** For mirror-to-mirror, run the following command:
++
+[source,terminal]
+----
+$ oc-mirror --config isc.yaml --workspace file://<file_path> docker://<remote_registry> --v2
+----
++
+[NOTE]
+====
+The `--workspace` flag is now required for mirror-to-mirror operations.
+====
+
+* API version update: The `ImageSetConfiguration` API version changes from `v1alpha2` (v1) to `v2alpha1` (v2). You must manually update the configuration files before migration.
+
+* Configuration changes:
+- `storageConfig` must be removed in oc-mirror plugin v2.
+- Incremental mirroring is now handled automatically through the working directory or local cache.
+ +* Changes in results directory: All custom resources to be applied to the disconnected cluster are generated in the `/working-dir/cluster-resources` directory after the migration. +- Outputs in oc-mirror plugin v2 are not stored in the same location as oc-mirror plugin v1. +- You must check the `cluster-resources` folder under the working directory for the following resources: +** `ImageDigestMirrorSet` (IDMS) +** `ImageTagMirrorSet` (ITMS) +** `CatalogSource` +** `ClusterCatalog` +** `UpdateService` + +* Workspace and directory naming: Follow the new oc-mirror v2 convention to prevent any potential data inconsistencies when transitioning between versions. +- The oc-mirror plugin v1 `oc-mirror-workspace` directory is no longer needed. +- Use a new directory for oc-mirror plugin v2 to avoid conflicts. + +* Replacing `ImageContentSourcePolicy` (ICSP) resources with IDMS/ITMS: ++ +[IMPORTANT] +==== +Deleting all `ImageContentSourcePolicy` (ICSP) resources might remove configurations unrelated to oc-mirror. + +To avoid unintended deletions, identify ICSP resources generated by oc-mirror before removing them. If you are unsure, check with your cluster administrator. For more information, see "Mirroring images for a disconnected installation by using the oc-mirror plugin v2". +==== + +- In oc-mirror plugin v2, the ICSP resource is replaced by `ImageDigestMirrorSet` (IDMS) and `ImageTagMirrorSet` (ITMS) resources. \ No newline at end of file diff --git a/modules/oc-mirror-migration-process.adoc b/modules/oc-mirror-migration-process.adoc new file mode 100644 index 000000000000..b5b23e1d7fe6 --- /dev/null +++ b/modules/oc-mirror-migration-process.adoc @@ -0,0 +1,126 @@ +// Module included in the following assemblies: +// +// * disconnected/mirroring/oc-mirror-migration-v1-to-v2.adoc + +:_mod-docs-content-type: PROCEDURE +[id="oc-mirror-migration-process_{context}"] += Migrating to oc-mirror plugin v2 + +To migrate from oc-mirror plugin v1 to v2, you must manually update the `ImageSetConfiguration` file, modify mirroring commands, and clean up v1 artifacts. Follow these steps to complete the migration. + +.Procedure + +. Modify the API version and remove deprecated fields in your `ImageSetConfiguration`. ++ +.Example `ImageSetConfiguration` file with oc-mirror plugin v1 configuration +[source,yaml] +---- +kind: ImageSetConfiguration +apiVersion: mirror.openshift.io/v1alpha2 +mirror: + platform: + channels: + - name: stable-4.17 + graph: true + helm: + repositories: + - name: sbo + url: https://redhat-developer.github.io/service-binding-operator-helm-chart/ + additionalImages: + - name: registry.redhat.io/ubi8/ubi:latest + - name: quay.io/openshifttest/hello-openshift@sha256:example_hash + operators: + - catalog: oci:///test/redhat-operator-index + packages: + - name: aws-load-balancer-operator +storageConfig: # REMOVE this field in v2 + local: + path: /var/lib/oc-mirror +---- ++ +.Example `ImageSetConfiguration` file with oc-mirror plugin v2 configuration +[source,yaml] +---- +kind: ImageSetConfiguration +apiVersion: mirror.openshift.io/v2alpha1 +mirror: + platform: + channels: + - name: stable-4.17 + graph: true + helm: + repositories: + - name: sbo + url: https://redhat-developer.github.io/service-binding-operator-helm-chart/ + additionalImages: + - name: registry.redhat.io/ubi8/ubi:latest + - name: quay.io/openshifttest/hello-openshift@sha256:example_hash + operators: + - catalog: oci:///test/redhat-operator-index + packages: + - name: aws-load-balancer-operator +---- + +. 
Check the `cluster-resources` directory inside the working directory for IDMS, ITMS, `CatalogSource`, and `ClusterCatalog` resources by running the following command:
++
+[source,terminal]
+----
+$ ls /working-dir/cluster-resources/
+----
+
+. After the migration is complete, confirm that the mirroring run succeeded:
+- Ensure that no errors or warnings occurred during mirroring.
+- Ensure that no error file was generated (`working-dir/logs/mirroring_errors_YYYYMMdd_HHmmss.txt`).
+
+. Verify that the mirrored images and catalogs are available by running the following commands:
++
+[source,terminal]
+----
+$ oc get catalogsource -n openshift-marketplace
+----
++
+[source,terminal]
+----
+$ oc get imagedigestmirrorset,imagetagmirrorset -n openshift-config
+----
++
+For more information, see "Mirroring images for a disconnected installation using oc-mirror plugin v2".
+
+. Optional: Remove images that were mirrored by using oc-mirror plugin v1:
+
+.. Mirror the images by using oc-mirror plugin v1.
+
+.. Update the API version in the `ImageSetConfiguration` file from `v1alpha2` (v1) to `v2alpha1` (v2), then run the following command:
++
+[source,terminal]
+----
+$ oc-mirror -c isc.yaml file://some-dir --v2
+----
++
+[NOTE]
+====
+`storageConfig` is not a valid field in the `ImageSetConfiguration` and `DeleteImageSetConfiguration` files. Remove this field when updating to oc-mirror plugin v2.
+====
+
+.. Generate a delete manifest for the v1 images by running the following command:
++
+[source,terminal]
+----
+$ oc-mirror delete --config=delete-isc.yaml --generate --delete-v1-images --workspace file://some-dir docker://registry.example:5000 --v2
+----
++
+[IMPORTANT]
+====
+oc-mirror plugin v2 does not automatically prune the destination registry, unlike oc-mirror plugin v1. To clean up images that are no longer needed, use the delete functionality in v2 with the `--delete-v1-images` command flag.
+
+After all images mirrored with oc-mirror plugin v1 are removed, you no longer need to use this flag. If you need to delete images mirrored with oc-mirror plugin v2, do not set `--delete-v1-images`.
+====
++
+For more information about deleting images, see "Deletion of images from your disconnected environment".
+
+.. Delete the images based on the generated manifest by running the following command:
++
+[source,terminal]
+----
+$ oc-mirror delete --delete-yaml-file some-dir/working-dir/delete/delete-images.yaml docker://registry.example:5000 --v2
+----
\ No newline at end of file
diff --git a/modules/oc-mirror-workflows-delete-v2.adoc b/modules/oc-mirror-workflows-delete-v2.adoc
index 9d3893a4ec3a..7e6a0c867568 100644
--- a/modules/oc-mirror-workflows-delete-v2.adoc
+++ b/modules/oc-mirror-workflows-delete-v2.adoc
@@ -59,7 +59,13 @@
 
 [IMPORTANT]
 ====
-To skip deleting the Operator catalog image while you are deleting Operator images, you must list the specific Operators under the Operator catalog image in the `DeleteImageSetConfiguration` file. This ensures that only the specified Operators are deleted, not the catalog image.
-
+* To skip deleting the Operator catalog image while you are deleting Operator images, you must list the specific Operators under the Operator catalog image in the `DeleteImageSetConfiguration` file. This ensures that only the specified Operators are deleted, not the catalog image.
++ If only the Operator catalog image is specified, all Operators within that catalog, as well as the catalog image itself, will be deleted. + +* oc-mirror plugin v2 does not delete Operator catalog images automatically because other Operators may still be deployed and depend on these images. ++ +If you are certain that no Operators from a catalog remain in the registry or cluster, you can explicitly add the catalog image to `additionalImages` in `DeleteImageSetConfiguration` to remove it. + +* Garbage collection behavior depends on the registry. Some registries do not automatically remove deleted images, requiring a system administrator to manually trigger garbage collection to free up space. ==== \ No newline at end of file From 785c339e85372380660a7012039f6d73ea2f5bd2 Mon Sep 17 00:00:00 2001 From: subhtk Date: Fri, 7 Feb 2025 15:40:38 +0530 Subject: [PATCH 309/669] Updated oc-mirror v2 docs --- _topic_maps/_topic_map.yml | 8 +-- .../about-installing-oc-mirror-v2.adoc | 17 ++--- ...oc-mirror-command-reference-v2-delete.adoc | 72 +++++++++++++++++++ modules/oc-mirror-command-reference-v2.adoc | 37 +++++++--- modules/oc-mirror-dry-run-v2.adoc | 17 +++-- modules/oc-mirror-enclave-support-about.adoc | 3 - modules/oc-mirror-procedure-delete-v2.adoc | 16 +++-- modules/oc-mirror-troubleshooting-v2.adoc | 2 +- modules/oc-mirror-v2-about.adoc | 2 +- modules/oc-mirror-workflows-delete-v2.adoc | 5 +- 10 files changed, 135 insertions(+), 44 deletions(-) create mode 100644 modules/oc-mirror-command-reference-v2-delete.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 6e2bf9730ed6..bc135b7c59c4 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -124,14 +124,14 @@ Topics: File: index - Name: Creating a mirror registry with mirror registry for Red Hat OpenShift File: installing-mirroring-creating-registry - - Name: Mirroring images for a disconnected installation by using the oc adm command - File: installing-mirroring-installation-images - - Name: Mirroring images for a disconnected installation using the oc-mirror plugin v1 - File: installing-mirroring-disconnected - Name: Mirroring images for a disconnected installation using oc-mirror plugin v2 File: about-installing-oc-mirror-v2 - Name: Migrating from oc-mirror plugin v1 to v2 File: oc-mirror-migration-v1-to-v2 + - Name: Mirroring images for a disconnected installation using the oc-mirror plugin v1 + File: installing-mirroring-disconnected + - Name: Mirroring images for a disconnected installation by using the oc adm command + File: installing-mirroring-installation-images - Name: Installing a cluster in a disconnected environment File: installing - Name: Using OLM in disconnected environments diff --git a/disconnected/mirroring/about-installing-oc-mirror-v2.adoc b/disconnected/mirroring/about-installing-oc-mirror-v2.adoc index 1c21fcb5ad23..414bb5f559cf 100644 --- a/disconnected/mirroring/about-installing-oc-mirror-v2.adoc +++ b/disconnected/mirroring/about-installing-oc-mirror-v2.adoc @@ -8,10 +8,7 @@ toc::[] You can run your cluster in a disconnected environment if you install the cluster from a mirrored set of {product-title} container images in a private registry. This registry must be running whenever your cluster is running. -Just as you can use the oc-mirror OpenShift CLI (`oc`) plugin, you can also use oc-mirror plugin v2 to mirror images to a mirror registry in your fully or partially disconnected environments. 
To download the required images from the official Red{nbsp}Hat registries, you must run oc-mirror plugin v2 from a system with internet connectivity.
-
-:FeatureName: oc-mirror plugin v2
-include::snippets/technology-preview.adoc[]
+You can use oc-mirror plugin v2 to mirror images to a mirror registry in your fully or partially disconnected environments. To download the required images from the official Red{nbsp}Hat registries, you must run oc-mirror plugin v2 from a system with internet connectivity.
 
 // About oc-mirror plugin v2
 include::modules/oc-mirror-v2-about.adoc[leveloffset=+1]
@@ -85,7 +82,7 @@ After your cluster is configured to use the resources generated by oc-mirror plu
 
 * xref:../../extensions/catalogs/disconnected-catalogs.adoc#disconnected-catalogs[Disconnected environment support in {olmv1}]
 
-//Delete Feature
+// Delete Feature
 // workflows of delete feature
 include::modules/oc-mirror-workflows-delete-v2.adoc[leveloffset=+1]
@@ -107,7 +104,7 @@ include::modules/oc-mirror-dry-run-v2.adoc[leveloffset=+2]
 // Troubleshooting
 include::modules/oc-mirror-troubleshooting-v2.adoc[leveloffset=+1]
 
-//oc-mirror-enclave-support-about
+// oc-mirror-enclave-support-about
 include::modules/oc-mirror-enclave-support-about.adoc[leveloffset=+1]
 
 // How to mirror to an Enclave
@@ -121,14 +118,18 @@ include::modules/oc-mirror-proxy-support.adoc[leveloffset=+1]
 
 * xref:../../disconnected/updating/disconnected-update-osus.adoc#updating-disconnected-cluster-osus[Updating a cluster in a disconnected environment using the OpenShift Update Service]
 * xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#oc-mirror-v2-procedure-garbage-collector_about-installing-oc-mirror-v2[Resolving storage cleanup issues in the distribution registry]
 
-//operator catalog filtering
+// Operator catalog filtering
 include::modules/oc-mirror-operator-catalog-filtering.adoc[leveloffset=+1]
 
-//Image set configuration parameters
+// Image set configuration parameters
 include::modules/oc-mirror-imageset-config-parameters-v2.adoc[leveloffset=+1]
 
 // Command reference for oc-mirror v2
 include::modules/oc-mirror-command-reference-v2.adoc[leveloffset=+1]
+include::modules/oc-mirror-command-reference-v2-delete.adoc[leveloffset=+2]
+
+[role="_additional-resources"]
+.Additional resources
 * xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#oc-mirror-updating-cluster-manifests-v2_about-installing-oc-mirror-v2[Configuring your cluster to use the resources generated by oc-mirror]
 
 [id="next-steps_{context}"]
diff --git a/modules/oc-mirror-command-reference-v2-delete.adoc b/modules/oc-mirror-command-reference-v2-delete.adoc
new file mode 100644
index 000000000000..f19bc66f560d
--- /dev/null
+++ b/modules/oc-mirror-command-reference-v2-delete.adoc
@@ -0,0 +1,72 @@
+// Module included in the following assemblies:
+//
+// * installing/disconnected_install/installing-mirroring-disconnected-v2.adoc
+
+
+:_mod-docs-content-type: REFERENCE
+[id="oc-mirror-command-reference-delete-v2_{context}"]
+= Command reference for deleting images
+
+The following tables describe the `oc mirror` subcommands and flags for deleting images:
+
+.Subcommands and flags for deleting images
+[cols="1,2",options="header"]
+|===
+|Subcommand
+|Description
+
+|`--authfile <string>`
+|Path of the authentication file. The default value is `${XDG_RUNTIME_DIR}/containers/auth.json`.
+
+|`--cache-dir <string>`
+|oc-mirror cache directory location. The default value is `$HOME`.
+
+|`-c <string>`, `--config <string>`
+|Path to the delete image set configuration file.
+
+|`--delete-id <string>`
+|Used to differentiate between versions for files created by the delete functionality.
+
+|`--delete-v1-images`
+|Used during the migration, along with `--generate`, to target images previously mirrored with oc-mirror plugin v1.
+
+|`--delete-yaml-file <string>`
+|If set, uses the generated or updated YAML file to delete contents.
+
+|`--dest-tls-verify`
+|Require HTTPS and verify certificates when talking to the container registry or daemon. The default value is `true`.
+
+|`--force-cache-delete`
+|Used to force delete the local cache manifests and blobs.
+
+|`--generate`
+|Used to generate the delete YAML for the list of manifests and blobs, used when deleting from the local cache and remote registry.
+
+|`-h`, `--help`
+|Displays help.
+
+|`--log-level <string>`
+|The log level: one of `info`, `debug`, `trace`, or `error`. The default value is `info`.
+
+|`--parallel-images <int>`
+|Indicates the number of images deleted in parallel. The default value is `8`.
+
+|`--parallel-layers <int>`
+|Indicates the number of image layers mirrored in parallel. The default value is `10`.
+
+|`-p <int>`, `--port <int>`
+|HTTP port used by oc-mirror's local storage instance. The default value is `55000`.
+
+|`--retry-delay`
+|The delay between two retries. The default value is `1s`.
+
+|`--retry-times <int>`
+|The number of times to retry. The default value is `2`.
+
+|`--src-tls-verify`
+|Require HTTPS and verify certificates when talking to the container registry or daemon. The default value is `true`.
+
+|`--workspace <string>`
+|oc-mirror workspace where resources and internal artifacts are generated.
+
+|===
\ No newline at end of file
diff --git a/modules/oc-mirror-command-reference-v2.adoc b/modules/oc-mirror-command-reference-v2.adoc
index 4e61719373ac..83e23e403fda 100644
--- a/modules/oc-mirror-command-reference-v2.adoc
+++ b/modules/oc-mirror-command-reference-v2.adoc
@@ -38,11 +38,14 @@ The following tables describe the `oc mirror` subcommands and flags for oc-mirro
 
 |`-c`, `--config` `<string>`
 |Specifies the path to an image set configuration file.
 
+|`--cache-dir <string>`
+|oc-mirror cache directory location. The default value is `$HOME`.
+
 |`--dest-tls-verify`
-|Requires HTTPS and verifies certificates when accessing the container registry or daemon.
+|Requires HTTPS and verifies certificates when accessing the container registry or daemon. The default value is `true`.
 
 |`--dry-run`
-|Prints actions without mirroring images
+|Prints actions without mirroring images.
 
 |`--from <string>`
 |Specifies the path to an image set archive that was generated by executing oc-mirror plugin v2 to load a target registry.
 
 |`-h`, `--help`
-|Displays help
+|Displays help.
 
-|`--loglevel`
-|Displays string log levels. Supported values include info, debug, trace, error. The default is `info`.
+|`--image-timeout duration`
+|Timeout for mirroring an image. The default value is `10m0s`. Valid time units are `ns`, `us` or `µs`, `ms`, `s`, `m`, and `h`.
+
+|`--log-level <string>`
+|Displays string log levels. Supported values include `info`, `debug`, `trace`, and `error`. The default value is `info`.
 
 |`-p`, `--port`
-|Determines the HTTP port used by oc-mirror plugin v2 local storage instance. The default is `55000`.
+|Determines the HTTP port used by the oc-mirror plugin v2 local storage instance. The default value is `55000`.
+
+|`--parallel-images <int>`
+|Specifies the number of images mirrored in parallel. The default value is `8`.
+
+|`--parallel-layers <int>`
+|Specifies the number of image layers mirrored in parallel. The default value is `10`.
 
 |`--max-nested-paths <int>`
-|Specifies the maximum number of nested paths for destination registries that limit nested paths. The default is `0`.
+|Specifies the maximum number of nested paths for destination registries that limit nested paths. The default value is `0`.
 
 |`--secure-policy`
-|Default value is `false`. If you set a non-default value, the command enables signature verification, which is the secure policy for signature verification.
+|The default value is `false`. If you set a non-default value, the command enables signature verification, which is the secure policy for signature verification.
 
 |`--since`
 |Includes all new content since a specified date (format: `yyyy-mm-dd`). When not provided, new content since previous mirroring is mirrored.
 
 |`--src-tls-verify`
 |Requires HTTPS and verifies certificates when accessing the container registry or daemon.
 
 |`--strict-archive`
-|Default value is `false`. If you set a value, the command generates archives that are strictly less than the `archiveSize` that was set in the `imageSetConfig` custom resource (CR). Mirroring exits with an error if a file being archived exceeds `archiveSize` (GB).
+|The default value is `false`. If you set a value, the command generates archives that are strictly less than the `archiveSize` that was set in the `imageSetConfig` custom resource (CR). Mirroring exits with an error if a file being archived exceeds `archiveSize` (GB).
 
 |`-v`, `--version`
 |Displays the version for oc-mirror plugin v2.
 
 |`--workspace`
 |Determines string oc-mirror plugin v2 workspace where resources and internal artifacts are generated.
 
+|`--retry-delay duration`
+|The delay between two retries. The default value is `1s`.
+
+|`--retry-times <int>`
+|The number of times to retry. The default value is `2`.
+
+|`--rootless-storage-path <string>`
+|Overrides the default container rootless storage path (usually in `/etc/containers/storage.conf`).
+
 |===
\ No newline at end of file
diff --git a/modules/oc-mirror-dry-run-v2.adoc b/modules/oc-mirror-dry-run-v2.adoc
index c448e3ab65e5..71de8f7ca2c9 100644
--- a/modules/oc-mirror-dry-run-v2.adoc
+++ b/modules/oc-mirror-dry-run-v2.adoc
@@ -5,7 +5,7 @@
 
 :_mod-docs-content-type: PROCEDURE
 [id="oc-mirror-dry-run-v2_{context}"]
-= Performing dry run for oc-mirror plugin v2
+= Performing a dry run for oc-mirror plugin v2
 
 Verify your image set configuration by performing a dry run without mirroring any images. This ensures your setup is correct and prevents unintended changes.
 
@@ -15,19 +15,18 @@ Verify your image set configuration by performing a dry run without mirroring an
 +
 [source,terminal]
 ----
-$ oc mirror -c <image_set_config_yaml> --from file://<file_path> docker://<registry_url> --dry-run --v2
+$ oc mirror -c <image_set_config_yaml> file://<file_path> --dry-run --v2
 ----
-Where:
-- `<image_set_config_yaml>`: Use the image set configuration file that you just created.
-- `<file_path>`: Insert the address of the workspace path.
-- `<registry_url>`: Insert the URL or address of the remote container registry from which images will be deleted.
++
+where:
+
+`<image_set_config_yaml>`:: Specifies the image set configuration file that you created.
+`<file_path>`:: Specifies the address of the workspace path.
+`<registry_url>`:: Specifies the URL or address of the remote container registry from which images will be mirrored or deleted.
+ .Example output [source,terminal] ---- -$ oc mirror --config /tmp/isc_dryrun.yaml file:// --dry-run --v2 - -[INFO] : :warning: --v2 flag identified, flow redirected to the oc-mirror v2 version. This is Tech Preview, it is still under development and it is not production ready. [INFO] : :wave: Hello, welcome to oc-mirror [INFO] : :gear: setting up the environment for you... [INFO] : :twisted_rightwards_arrows: workflow mode: mirrorToDisk diff --git a/modules/oc-mirror-enclave-support-about.adoc b/modules/oc-mirror-enclave-support-about.adoc index 448fd28bb3cc..f1eb7d12876a 100644 --- a/modules/oc-mirror-enclave-support-about.adoc +++ b/modules/oc-mirror-enclave-support-about.adoc @@ -8,9 +8,6 @@ Enclave support restricts internal access to a specific part of a network. Unlike a demilitarized zone (DMZ) network, which allows inbound and outbound traffic access through firewall boundaries, enclaves do not cross firewall boundaries. -:FeatureName: Enclave Support -include::snippets/technology-preview.adoc[] - The new enclave support functionality is for scenarios where mirroring is needed for multiple enclaves that are secured behind at least one intermediate disconnected network. Enclave support has the following benefits: diff --git a/modules/oc-mirror-procedure-delete-v2.adoc b/modules/oc-mirror-procedure-delete-v2.adoc index 87128ec2a85b..4beffd251669 100644 --- a/modules/oc-mirror-procedure-delete-v2.adoc +++ b/modules/oc-mirror-procedure-delete-v2.adoc @@ -38,18 +38,15 @@ delete: minVersion: <5> maxVersion: <5> additionalImages: - - name: <6> + - name: ---- <1> Specify the name of the {product-title} channel to delete, for example `stable-4.15`. <2> Specify a version range of the images to delete within the channel, for example `4.15.0` for the minimum version and `4.15.1` for the maximum version. To delete only one version's images, use that version number for both the `minVersion` and `maxVersion` fields. -<3> Specify an Operator catalog image to delete, for example `registry.redhat.io/redhat/redhat-operator-index:v4.14`. -To delete the catalog image, you must not specify individual Operators using the `delete.operators.packages` parameter. -<4> Specify a specific Operator bundle to delete, for example `aws-load-balancer-operator`. -If you specify individual Operators, the Operator catalog image does not get deleted. +<3> Specify an Operator catalog image containing the Operators to delete, for example `registry.redhat.io/redhat/redhat-operator-index:v4.14`. +The Operator catalog image will not get deleted. Its presence in the registry might be necessary for other Operators still remaining on the cluster. +<4> Specify a specific Operator to delete, for example `aws-load-balancer-operator`. <5> Specify a version range of the images to delete for the Operator, for example `0.0.1` for the minimum version and `0.0.2` for the maximum version. -To delete only one version's images, use that version number for both the `minVersion` and `maxVersion` fields. -<6> Specify additional images to delete, for example `registry.redhat.io/ubi9/ubi-init:latest`. . Create a `delete-images.yaml` file by running the following command: + @@ -62,6 +59,11 @@ where: :: Specifies the directory where images were previously mirrored to or stored during the mirroring process. :: Specifies the URL or address of the remote container registry from which images will be deleted. ++ +[IMPORTANT] +==== +When deleting images, specify the correct workspace directory. 
Modify or delete the cache directory only when starting mirroring from scratch, such as setting up a new cluster. Incorrect changes to the cache directory might disrupt further mirroring operations. +==== . Go to the `/delete` directory that was created. diff --git a/modules/oc-mirror-troubleshooting-v2.adoc b/modules/oc-mirror-troubleshooting-v2.adoc index bd9a3c6a126b..f07cf44e983b 100644 --- a/modules/oc-mirror-troubleshooting-v2.adoc +++ b/modules/oc-mirror-troubleshooting-v2.adoc @@ -12,7 +12,7 @@ oc-mirror plugin v2 now logs all image mirroring errors in a separate file, maki ==== When errors occur while mirroring release or release component images, they are critical. This stops the mirroring process immediately. -Errors with mirroring Operators, Operator-related images, or additional images do not stop the mirroring process. Mirroring continues, and oc-mirror plugin v2 logs updates every 8 images. +Errors with mirroring Operators, Operator-related images, or additional images do not stop the mirroring process. Mirroring continues, and oc-mirror plugin v2 saves a file under the `working-dir/logs` directory describing which Operator failed to mirror. ==== When an image fails to mirror, and that image is mirrored as part of one or more Operator bundles, oc-mirror plugin v2 notifies the user which Operators are incomplete, providing clarity on the Operator bundles affected by the error. diff --git a/modules/oc-mirror-v2-about.adoc b/modules/oc-mirror-v2-about.adoc index 46e46dfb5ebf..de2c382c6892 100644 --- a/modules/oc-mirror-v2-about.adoc +++ b/modules/oc-mirror-v2-about.adoc @@ -9,7 +9,7 @@ The oc-mirror OpenShift CLI (`oc`) plugin is a single tool that mirrors all required {product-title} content and other images to your mirror registry. -To use the new Technology Preview version of oc-mirror, add the `--v2` flag to the oc-mirror plugin v2 command line. +To use the new version of oc-mirror, add the `--v2` flag to the oc-mirror plugin v2 command line. oc-mirror plugin v2 has the following features: diff --git a/modules/oc-mirror-workflows-delete-v2.adoc b/modules/oc-mirror-workflows-delete-v2.adoc index 7e6a0c867568..7d1ddeac89d8 100644 --- a/modules/oc-mirror-workflows-delete-v2.adoc +++ b/modules/oc-mirror-workflows-delete-v2.adoc @@ -17,7 +17,6 @@ You must create a `DeleteImageSetConfiguration` file to specify which images to In the following example, the `DeleteImageSetConfiguration` file removes the following images: * All release images for {product-title} 4.13.3. -* The `redhat-operator-index` `v4.12` catalog image. * The `aws-load-balancer-operator` v0.0.1 bundle and all its related images. * The additional images for `ubi` and `ubi-minimal`, referenced by their corresponding digests. @@ -47,7 +46,7 @@ delete: [IMPORTANT] ==== -Consider using the mirror-to-disk and disk-to-mirror workflows to reduce mirroring issues. +Consider using the mirror-to-disk and disk-to-mirror workflows to reduce deletion issues. ==== oc-mirror plugin v2 deletes only the manifests of the images, which does not reduce the storage occupied in the registry. @@ -68,4 +67,4 @@ If only the Operator catalog image is specified, all Operators within that catal If you are certain that no Operators from a catalog remain in the registry or cluster, you can explicitly add the catalog image to `additionalImages` in `DeleteImageSetConfiguration` to remove it. * Garbage collection behavior depends on the registry. 
Some registries do not automatically remove deleted images, requiring a system administrator to manually trigger garbage collection to free up space. -==== \ No newline at end of file +==== From 888ba8357bcd93a84d39b33c5f9b18cbe8f47e7c Mon Sep 17 00:00:00 2001 From: Steven Smith Date: Thu, 20 Feb 2025 11:17:47 -0500 Subject: [PATCH 310/669] Level change for additional config details --- modules/nw-udn-additional-config-details.adoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/nw-udn-additional-config-details.adoc b/modules/nw-udn-additional-config-details.adoc index 36880049bf13..32167e15da22 100644 --- a/modules/nw-udn-additional-config-details.adoc +++ b/modules/nw-udn-additional-config-details.adoc @@ -4,9 +4,9 @@ :_mod-docs-content-type: REFERENCE [id="nw-udn-additional-config-details_{context}"] -== Additional configuration details for a UserDefinedNetworks CR += Additional configuration details for user-defined networks -The following table explains additional configurations for UDN that are optional. It is not recommended to set these fields without explicit need and understanding of OVN-Kubernetes network topology. +The following table explains additional configurations for `ClusterUserDefinedNetwork` and `UserDefinedNetwork` custom resources (CRs) that are optional. It is not recommended to set these fields without explicit need and understanding of OVN-Kubernetes network topology. .`UserDefinedNetworks` optional configurations [cols="1,1,7", options="header"] From f1637a3a61b38b491216687dd8145e4eca030723 Mon Sep 17 00:00:00 2001 From: Michael Ryan Peter Date: Tue, 18 Feb 2025 17:43:12 -0500 Subject: [PATCH 311/669] OSDOCS-13342: Update example outputs, CRs, and other nits --- _topic_maps/_topic_map.yml | 2 +- modules/olmv1-adding-a-catalog.adoc | 75 ++-- modules/olmv1-catalog-queries.adoc | 2 +- modules/olmv1-clusterextension-api.adoc | 18 +- modules/olmv1-deleting-catalog.adoc | 2 +- modules/olmv1-installing-an-operator.adoc | 117 +++--- modules/olmv1-updating-an-operator.adoc | 437 ++++++++++++---------- operators/olm_v1/index.adoc | 2 +- 8 files changed, 359 insertions(+), 296 deletions(-) diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index bc135b7c59c4..e96331026de7 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2051,7 +2051,7 @@ Topics: Distros: openshift-origin - Name: Cluster Operators reference File: operator-reference -- Name: OLM v1 (Technology Preview) +- Name: OLM v1 Dir: olm_v1 Distros: openshift-origin,openshift-enterprise Topics: diff --git a/modules/olmv1-adding-a-catalog.adoc b/modules/olmv1-adding-a-catalog.adoc index 5cdbf7c09362..cc7814ccb1f6 100644 --- a/modules/olmv1-adding-a-catalog.adoc +++ b/modules/olmv1-adding-a-catalog.adoc @@ -16,19 +16,22 @@ To add a catalog to a cluster for {olmv1-first} usage, create a `ClusterCatalog` .Example `my-redhat-operators.yaml` file [source,yaml,subs="attributes+"] ---- -apiVersion: catalogd.operatorframework.io/v1alpha1 +apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: - name: my-redhat-operators + name: my-redhat-operators <1> spec: + priority: 1000 <2> source: - type: image image: - ref: registry.redhat.io/redhat/redhat-operator-index:v{product-version} <1> - pollInterval: <2> + pollIntervalMinutes: 10 <3> + ref: registry.redhat.io/redhat/community-operator-index:v{product-version} <4> + type: Image ---- -<1> Specify the catalog's image in the `spec.source.image` field. 
-<2> Specify the interval for polling the remote registry for newer image digests. The default value is `24h`. Valid units include seconds (`s`), minutes (`m`), and hours (`h`). To disable polling, set a zero value, such as `0s`. +<1> The catalog is automatically labeled with the value of the `metadata.name` field when it is applied to the cluster. For more information about labels and catalog selection, see "Catalog content resolution". +<2> Optional: Specify the priority of the catalog in relation to the other catalogs on the cluster. For more information, see "Catalog selection by priority". +<3> Specify the interval in minutes for polling the remote registry for newer image digests. To disable polling, do not set the field. +<4> Specify the catalog image in the `spec.source.image.ref` field. . Add the catalog to your cluster by running the following command: + @@ -40,7 +43,7 @@ $ oc apply -f my-redhat-operators.yaml .Example output [source,text] ---- -catalog.catalogd.operatorframework.io/my-redhat-operators created +clustercatalog.olm.operatorframework.io/my-redhat-operators created ---- .Verification @@ -57,15 +60,19 @@ $ oc get clustercatalog .Example output [source,text] ---- -NAME AGE -my-redhat-operators 20s +NAME LASTUNPACKED SERVING AGE +my-redhat-operators 55s True 64s +openshift-certified-operators 83m True 84m +openshift-community-operators 43m True 84m +openshift-redhat-marketplace 83m True 84m +openshift-redhat-operators 54m True 84m ---- .. Check the status of your catalog by running the following command: + [source,terminal] ---- -$ oc describe clustercatalog +$ oc describe clustercatalog my-redhat-operators ---- + .Example output @@ -78,39 +85,43 @@ Annotations: API Version: olm.operatorframework.io/v1 Kind: ClusterCatalog Metadata: - Creation Timestamp: 2024-06-10T17:34:53Z + Creation Timestamp: 2025-02-18T20:28:50Z Finalizers: - catalogd.operatorframework.io/delete-server-cache + olm.operatorframework.io/delete-server-cache Generation: 1 - Resource Version: 46075 - UID: 83c0db3c-a553-41da-b279-9b3cddaa117d + Resource Version: 50248 + UID: 86adf94f-d2a8-4e70-895b-31139f2eeab7 Spec: Availability Mode: Available - Priority: 0 + Priority: 1000 Source: Image: Poll Interval Minutes: 10 - Ref: registry.redhat.io/redhat/redhat-operator-index:v4.18 - Type: image + Ref: registry.redhat.io/redhat/community-operator-index:v{product-version} + Type: Image Status: <1> Conditions: - Last Transition Time: 2024-06-10T17:35:15Z - Message: - Reason: UnpackSuccessful <2> + Last Transition Time: 2025-02-18T20:29:00Z + Message: Successfully unpacked and stored content from resolved source + Observed Generation: 1 + Reason: Succeeded <2> Status: True - Type: Unpacked - Content URL: https://catalogd-catalogserver.openshift-catalogd.svc/catalogs/redhat-operators/all.json - Observed Generation: 1 - Phase: Unpacked <3> + Type: Progressing + Last Transition Time: 2025-02-18T20:29:00Z + Message: Serving desired content from resolved source + Observed Generation: 1 + Reason: Available + Status: True + Type: Serving + Last Unpacked: 2025-02-18T20:28:59Z Resolved Source: Image: - Last Poll Attempt: 2024-06-10T17:35:10Z - Ref: registry.redhat.io/redhat/redhat-operator-index:v4.18 - Resolved Ref: registry.redhat.io/redhat/redhat-operator-index@sha256:f2ccc079b5e490a50db532d1dc38fd659322594dcf3e653d650ead0e862029d9 <4> - Type: image -Events: + Ref: registry.redhat.io/redhat/community-operator-index@sha256:11627ea6fdd06b8092df815076e03cae9b7cede8b353c0b461328842d02896c5 <3> + Type: Image + Urls: + Base: 
https://catalogd-service.openshift-catalogd.svc/catalogs/my-redhat-operators
+Events:
 ----
 <1> Describes the status of the catalog.
 <2> Displays the reason the catalog is in the current state.
-<3> Displays the phase of the installation process.
-<4> Displays the image reference of the catalog.
\ No newline at end of file
+<3> Displays the image reference of the catalog.
diff --git a/modules/olmv1-catalog-queries.adoc b/modules/olmv1-catalog-queries.adoc
index 8c62edb7b159..aa6d40ff1162 100644
--- a/modules/olmv1-catalog-queries.adoc
+++ b/modules/olmv1-catalog-queries.adoc
@@ -47,7 +47,7 @@ a|
 [source,terminal]
 ----
 $ opm render <registry_url>:<tag> \
-  \| jq -s '.[] \| select( .schema == "olm.package")
+  \| jq -s '.[] \| select( .schema == "olm.package")'
 ----
 
 |Packages that support `AllNamespaces` install mode and do not use webhooks
diff --git a/modules/olmv1-clusterextension-api.adoc b/modules/olmv1-clusterextension-api.adoc
index 528d47470ce3..435985cf0e8a 100644
--- a/modules/olmv1-clusterextension-api.adoc
+++ b/modules/olmv1-clusterextension-api.adoc
@@ -20,13 +20,19 @@ For more information about the earlier behavior, see _Multitenancy and Operator
 .Example `ClusterExtension` object
 [source,yaml]
 ----
-apiVersion: olm.operatorframework.io/v1alpha1
+apiVersion: olm.operatorframework.io/v1
 kind: ClusterExtension
 metadata:
-  name: <operator_name>
+  name: <clusterextension_name>
 spec:
-  packageName: <package_name>
-  installNamespace: <namespace_name>
-  channel: <channel_name>
-  version: <version>
+  namespace: <namespace_name>
+  serviceAccount:
+    name: <service_account_name>
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: <package_name>
+      channels:
+        - <channel_name>
+      version: "<version>"
 ----
diff --git a/modules/olmv1-deleting-catalog.adoc b/modules/olmv1-deleting-catalog.adoc
index e6ddc293179b..7656e776ca7f 100644
--- a/modules/olmv1-deleting-catalog.adoc
+++ b/modules/olmv1-deleting-catalog.adoc
@@ -25,7 +25,7 @@ $ oc delete clustercatalog <catalog_name>
 .Example output
 [source,text]
 ----
-catalog.catalogd.operatorframework.io "my-catalog" deleted
+clustercatalog.olm.operatorframework.io "my-redhat-operators" deleted
 ----
 
 .Verification
diff --git a/modules/olmv1-installing-an-operator.adoc b/modules/olmv1-installing-an-operator.adoc
index 39d93ac7dbf1..87b1bdea2440 100644
--- a/modules/olmv1-installing-an-operator.adoc
+++ b/modules/olmv1-installing-an-operator.adoc
@@ -17,28 +17,47 @@ You can install an extension from a catalog by creating a custom resource (CR) a
 
 . Create a CR, similar to the following example:
 +
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: <clusterextension_name>
+spec:
+  namespace: <namespace_name> <1>
+  serviceAccount:
+    name: <service_account_name> <2>
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: <package_name>
+      channels:
+      - <channel_name> <3>
+      version: "<version_or_version_range>" <4>
+----
+<1> Specifies the namespace where you want the bundle installed, such as `pipelines` or `my-extension`. Extensions are still cluster-scoped and might contain resources that are installed in different namespaces.
+<2> Specifies the name of the service account you created to install, update, and manage your extension.
+<3> Optional: Specifies channel names as an array, such as `pipelines-1.14` or `latest`.
+<4> Optional: Specifies the version or version range, such as `1.14.0`, `1.14.x`, or `>=1.16`, of the package you want to install or update. For more information, see "Example custom resources (CRs) that specify a target version" and "Support for version ranges".
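++
+For reference, a version range can also combine comparison operators (Masterminds/semver-style constraints). The following fragment of the `catalog` stanza is an illustrative sketch only; the package name is hypothetical:
++
+[source,yaml]
+----
+    catalog:
+      packageName: example-operator
+      version: ">=1.14.0 <1.16.0" # any available version from 1.14.0 up to, but not including, 1.16.0
+----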
+ .Example `pipelines-operator.yaml` CR [source,yaml] ---- -apiVersion: olm.operatorframework.io/v1alpha1 +apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: pipelines-operator spec: - packageName: openshift-pipelines-operator-rh - installNamespace: + namespace: pipelines serviceAccount: - name: - channel: - version: "" + name: pipelines-installer + source: + sourceType: Catalog + catalog: + packageName: openshift-pipelines-operator-rh + version: "1.14.x" ---- -+ -where: -+ -``:: Specifies the namespace where you want the bundle installed, such as `pipelines` or `my-extension`. Extensions are still cluster-scoped and might contain resources that are installed in different namespaces. -``:: Specifies the name of the service account you created to install, update, and manage your extension. -``:: Optional: Specifies the channel, such as `pipelines-1.11` or `latest`, for the package you want to install or update. -``:: Optional: Specifies the version or version range, such as `1.11.1`, `1.12.x`, or `>=1.12.1`, of the package you want to install or update. For more information, see "Example custom resources (CRs) that specify a target version" and "Support for version ranges". + . Apply the CR to the cluster by running the following command: + @@ -69,76 +88,76 @@ $ oc get clusterextension pipelines-operator -o yaml ---- apiVersion: v1 items: -- apiVersion: olm.operatorframework.io/v1alpha1 +- apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"olm.operatorframework.io/v1alpha1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","installNamespace":"pipelines","packageName":"openshift-pipelines-operator-rh","serviceAccount":{"name":"pipelines-installer"},"pollInterval":"30m"}} - creationTimestamp: "2024-06-10T17:50:51Z" + {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipes"},"spec":{"namespace":"pipelines","serviceAccount":{"name":"pipelines-installer"},"source":{"catalog":{"packageName":"openshift-pipelines-operator-rh","version":"1.14.x"},"sourceType":"Catalog"}}} + creationTimestamp: "2025-02-18T21:48:13Z" finalizers: - olm.operatorframework.io/cleanup-unpack-cache + - olm.operatorframework.io/cleanup-contentmanager-cache generation: 1 name: pipelines-operator - resourceVersion: "53324" - uid: c54237be-cde4-46d4-9b31-d0ec6acc19bf + resourceVersion: "72725" + uid: e18b13fb-a96d-436f-be75-a9a0f2b07993 spec: - channel: latest - installNamespace: pipelines - packageName: openshift-pipelines-operator-rh + namespace: pipelines serviceAccount: name: pipelines-installer - upgradeConstraintPolicy: Enforce + source: + catalog: + packageName: openshift-pipelines-operator-rh + upgradeConstraintPolicy: CatalogProvided + version: 1.14.x + sourceType: Catalog status: conditions: - - lastTransitionTime: "2024-06-10T17:50:58Z" - message: resolved to "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec" - observedGeneration: 1 - reason: Success - status: "True" - type: Resolved - - lastTransitionTime: "2024-06-10T17:51:11Z" - message: installed from "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec" - observedGeneration: 1 - reason: Success - status: "True" - type: Installed - - 
-    - lastTransitionTime: "2024-06-10T17:50:58Z"
+    - lastTransitionTime: "2025-02-18T21:48:13Z"
      message: ""
      observedGeneration: 1
      reason: Deprecated
      status: "False"
      type: Deprecated
-    - lastTransitionTime: "2024-06-10T17:50:58Z"
+    - lastTransitionTime: "2025-02-18T21:48:13Z"
      message: ""
      observedGeneration: 1
      reason: Deprecated
      status: "False"
      type: PackageDeprecated
-    - lastTransitionTime: "2024-06-10T17:50:58Z"
+    - lastTransitionTime: "2025-02-18T21:48:13Z"
      message: ""
      observedGeneration: 1
      reason: Deprecated
      status: "False"
      type: ChannelDeprecated
-    - lastTransitionTime: "2024-06-10T17:50:58Z"
+    - lastTransitionTime: "2025-02-18T21:48:13Z"
      message: ""
      observedGeneration: 1
      reason: Deprecated
      status: "False"
      type: BundleDeprecated
-    - lastTransitionTime: "2024-06-10T17:50:58Z"
-      message: 'unpack successful:
+    - lastTransitionTime: "2025-02-18T21:48:16Z"
+      message: Installed bundle registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:f7b19ce26be742c4aaa458d37bc5ad373b5b29b20aaa7d308349687d3cbd8838
+        successfully
      observedGeneration: 1
-      reason: UnpackSuccess
+      reason: Succeeded
      status: "True"
-      type: Unpacked
-  installedBundle:
-    name: openshift-pipelines-operator-rh.v1.14.4
-    version: 1.14.4
-  resolvedBundle:
-    name: openshift-pipelines-operator-rh.v1.14.4
-    version: 1.14.4
+      type: Installed
+    - lastTransitionTime: "2025-02-18T21:48:16Z"
+      message: desired state reached
+      observedGeneration: 1
+      reason: Succeeded
+      status: "True"
+      type: Progressing
+    install:
+      bundle:
+        name: openshift-pipelines-operator-rh.v1.14.5
+        version: 1.14.5
+kind: List
+metadata:
+  resourceVersion: ""
----

where:

@@ -156,6 +175,4 @@ where:
The value of `False` in the `status` field indicates that the `reason: Deprecated` condition is not deprecated. The value of `True` in the `status` field indicates that the `reason: Deprecated` condition is deprecated.
-`installedBundle.name`:: Displays the name of the bundle installed.
-`installedBundle.version`:: Displays the version of the bundle installed.
-`resolvedBundle.name`:: Displays the name of the resolved bundle.
-`resolvedBundle.version`:: Displays the version of the resolved bundle.
+`install.bundle.name`:: Displays the name of the bundle installed.
+`install.bundle.version`:: Displays the version of the bundle installed.
====
diff --git a/modules/olmv1-updating-an-operator.adoc b/modules/olmv1-updating-an-operator.adoc
index 7d9302e3ef92..aa293e47eb56 100644
--- a/modules/olmv1-updating-an-operator.adoc
+++ b/modules/olmv1-updating-an-operator.adoc
@@ -11,10 +11,9 @@ You can update your cluster extension or Operator by manually editing the custom

.Prerequisites

-* You have a catalog installed.
-* You have downloaded a local copy of the catalog file.
* You have an Operator or extension installed.
* You have installed the `jq` CLI tool.
+* You have installed the `opm` CLI tool.
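+
+As a quick sanity check, you can confirm that both CLI tools are available from your shell. This is only a sketch; the exact output depends on the versions that you have installed:
+
+[source,terminal]
+----
+$ opm version && jq --version
+----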
.Procedure @@ -24,19 +23,19 @@ You can update your cluster extension or Operator by manually editing the custom + [source,terminal] ---- -$ jq -s '.[] | select( .schema == "olm.channel" ) | \ - select( .package == "") | \ - .name' //.json +$ opm render : \ + | jq -s '.[] | select( .schema == "olm.channel" ) \ + | select( .package == "openshift-pipelines-operator-rh") | .name' ---- + .Example command [%collapsible] ==== -[source,terminal] +[source,terminal,subs=attributes+] ---- -$ jq -s '.[] | select( .schema == "olm.channel" ) | \ - select( .package == "openshift-pipelines-operator-rh") | \ - .name' /home/username/rhoc.json +$ opm render registry.redhat.io/redhat/redhat-operator-index:v{product-version} \ + | jq -s '.[] | select( .schema == "olm.channel" ) \ + | select( .package == "openshift-pipelines-operator-rh") | .name' ---- ==== + @@ -46,10 +45,10 @@ $ jq -s '.[] | select( .schema == "olm.channel" ) | \ [source,text] ---- "latest" -"pipelines-1.11" -"pipelines-1.12" -"pipelines-1.13" "pipelines-1.14" +"pipelines-1.15" +"pipelines-1.16" +"pipelines-1.17" ---- ==== @@ -57,20 +56,22 @@ $ jq -s '.[] | select( .schema == "olm.channel" ) | \ + [source,terminal] ---- -$ jq -s '.[] | select( .package == "" ) | \ - select( .schema == "olm.channel" ) | \ - select( .name == "" ) | .entries | \ - .[] | .name' //.json +$ opm render : \ + | jq -s '.[] | select( .package == "" ) \ + | select( .schema == "olm.channel" ) \ + | select( .name == "" ) | .entries \ + | .[] | .name' ---- + .Example command [%collapsible] ==== -[source,terminal] +[source,terminal,subs=attributes+] ---- -$ jq -s '.[] | select( .package == "openshift-pipelines-operator-rh" ) | \ -select( .schema == "olm.channel" ) | select( .name == "latest" ) | \ -.entries | .[] | .name' /home/username/rhoc.json +$ opm render registry.redhat.io/redhat/redhat-operator-index:v{product-version} \ + | jq -s '.[] | select( .package == "openshift-pipelines-operator-rh" ) \ + | select( .schema == "olm.channel" ) | select( .name == "latest" ) \ + | .entries | .[] | .name' ---- ==== + @@ -79,15 +80,10 @@ select( .schema == "olm.channel" ) | select( .name == "latest" ) | \ ==== [source,text] ---- -"openshift-pipelines-operator-rh.v1.11.1" -"openshift-pipelines-operator-rh.v1.12.0" -"openshift-pipelines-operator-rh.v1.12.1" -"openshift-pipelines-operator-rh.v1.12.2" -"openshift-pipelines-operator-rh.v1.13.0" -"openshift-pipelines-operator-rh.v1.14.1" -"openshift-pipelines-operator-rh.v1.14.2" -"openshift-pipelines-operator-rh.v1.14.3" -"openshift-pipelines-operator-rh.v1.14.4" +"openshift-pipelines-operator-rh.v1.15.0" +"openshift-pipelines-operator-rh.v1.16.0" +"openshift-pipelines-operator-rh.v1.17.0" +"openshift-pipelines-operator-rh.v1.17.1" ---- ==== @@ -109,134 +105,167 @@ $ oc get clusterextension pipelines-operator -o yaml ==== [source,text] ---- -apiVersion: olm.operatorframework.io/v1alpha1 -kind: ClusterExtension +apiVersion: v1 +items: +- apiVersion: olm.operatorframework.io/v1 + kind: ClusterExtension + metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipes"},"spec":{"namespace":"pipelines","serviceAccount":{"name":"pipelines-installer"},"source":{"catalog":{"packageName":"openshift-pipelines-operator-rh","version":"1.14.x"},"sourceType":"Catalog"}}} + creationTimestamp: "2025-02-18T21:48:13Z" + finalizers: + - olm.operatorframework.io/cleanup-unpack-cache + - 
olm.operatorframework.io/cleanup-contentmanager-cache + generation: 1 + name: pipelines-operator + resourceVersion: "72725" + uid: e18b13fb-a96d-436f-be75-a9a0f2b07993 + spec: + namespace: pipelines + serviceAccount: + name: pipelines-installer + source: + catalog: + packageName: openshift-pipelines-operator-rh + upgradeConstraintPolicy: CatalogProvided + version: 1.14.x + sourceType: Catalog + status: + conditions: + - lastTransitionTime: "2025-02-18T21:48:13Z" + message: "" + observedGeneration: 1 + reason: Deprecated + status: "False" + type: Deprecated + - lastTransitionTime: "2025-02-18T21:48:13Z" + message: "" + observedGeneration: 1 + reason: Deprecated + status: "False" + type: PackageDeprecated + - lastTransitionTime: "2025-02-18T21:48:13Z" + message: "" + observedGeneration: 1 + reason: Deprecated + status: "False" + type: ChannelDeprecated + - lastTransitionTime: "2025-02-18T21:48:13Z" + message: "" + observedGeneration: 1 + reason: Deprecated + status: "False" + type: BundleDeprecated + - lastTransitionTime: "2025-02-18T21:48:16Z" + message: Installed bundle registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:f7b19ce26be742c4aaa458d37bc5ad373b5b29b20aaa7d308349687d3cbd8838 + successfully + observedGeneration: 1 + reason: Succeeded + status: "True" + type: Installed + - lastTransitionTime: "2025-02-18T21:48:16Z" + message: desired state reached + observedGeneration: 1 + reason: Succeeded + status: "True" + type: Progressing + install: + bundle: + name: openshift-pipelines-operator-rh.v1.14.5 + version: 1.14.5 +kind: List metadata: - annotations: - kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"olm.operatorframework.io/v1alpha1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","installNamespace":"openshift-operators","packageName":"openshift-pipelines-operator-rh","pollInterval":"30m","version":"\u003c1.12"}} - creationTimestamp: "2024-06-11T15:55:37Z" - generation: 1 - name: pipelines-operator - resourceVersion: "69776" - uid: 6a11dff3-bfa3-42b8-9e5f-d8babbd6486f -spec: - channel: latest - installNamespace: openshift-operators - packageName: openshift-pipelines-operator-rh - upgradeConstraintPolicy: Enforce - version: <1.12 -status: - conditions: - - lastTransitionTime: "2024-06-11T15:56:09Z" - message: installed from "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280" - observedGeneration: 1 - reason: Success - status: "True" - type: Installed - - lastTransitionTime: "2024-06-11T15:55:50Z" - message: resolved to "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280" - observedGeneration: 1 - reason: Success - status: "True" - type: Resolved - - lastTransitionTime: "2024-06-11T15:55:50Z" - message: "" - observedGeneration: 1 - reason: Deprecated - status: "False" - type: Deprecated - - lastTransitionTime: "2024-06-11T15:55:50Z" - message: "" - observedGeneration: 1 - reason: Deprecated - status: "False" - type: PackageDeprecated - - lastTransitionTime: "2024-06-11T15:55:50Z" - message: "" - observedGeneration: 1 - reason: Deprecated - status: "False" - type: ChannelDeprecated - - lastTransitionTime: "2024-06-11T15:55:50Z" - message: "" - observedGeneration: 1 - reason: Deprecated - status: "False" - type: BundleDeprecated - installedBundle: - name: openshift-pipelines-operator-rh.v1.11.1 - version: 1.11.1 - 
resolvedBundle:
-    name: openshift-pipelines-operator-rh.v1.11.1
-    version: 1.11.1
+  resourceVersion: ""
----
====

. Edit your CR by using one of the following methods:

-** If you want to pin your Operator or extension to specific version, such as `1.12.1`, edit your CR similar to the following example:
+** If you want to pin your Operator or extension to a specific version, such as `1.15.0`, edit your CR similar to the following example:
+
.Example `pipelines-operator.yaml` CR
[source,yaml]
----
-apiVersion: olm.operatorframework.io/v1alpha1
+apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
-  packageName: openshift-pipelines-operator-rh
-  installNamespace: <namespace>
-  version: "1.12.1" <1>
-----
-<1> Update the version from `1.11.1` to `1.12.1`
+  namespace: pipelines
+  serviceAccount:
+    name: pipelines-installer
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: openshift-pipelines-operator-rh
+      version: "1.15.0" <1>
+----
+<1> Update the version from `1.14.x` to `1.15.0`

** If you want to define a range of acceptable update versions, edit your CR similar to the following example:
+
.Example CR with a version range specified
[source,yaml]
----
-apiVersion: olm.operatorframework.io/v1alpha1
+apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
-  packageName: openshift-pipelines-operator-rh
-  installNamespace: <namespace>
-  version: ">1.11.1, <1.13" <1>
-----
-<1> Specifies that the desired version range is greater than version `1.11.1` and less than `1.13`. For more information, see "Support for version ranges" and "Version comparison strings".
+  namespace: pipelines
+  serviceAccount:
+    name: pipelines-installer
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: openshift-pipelines-operator-rh
+      version: ">1.15, <1.17" <1>
+----
+<1> Specifies that the desired version range is greater than version `1.15` and less than `1.17`. For more information, see "Support for version ranges" and "Version comparison strings".

** If you want to update to the latest version that can be resolved from a channel, edit your CR similar to the following example:
+
.Example CR with a specified channel
[source,yaml]
----
-apiVersion: olm.operatorframework.io/v1alpha1
+apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
-  packageName: openshift-pipelines-operator-rh
-  installNamespace: <namespace>
-  channel: pipelines-1.13 <1>
-----
-<1> Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed.
+  namespace: pipelines
+  serviceAccount:
+    name: pipelines-installer
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: openshift-pipelines-operator-rh
+      channels:
+      - latest <1>
+----
+<1> Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed. Enter values as an array.
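++
+Because `channels` takes an array, you can also list more than one channel in a single CR. A minimal sketch, using channel names from the example output earlier in this module:
+
+[source,yaml]
+----
+      channels:
+      - pipelines-1.16
+      - pipelines-1.17
+----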
** If you want to specify a channel and version or version range, edit your CR similar to the following example: + .Example CR with a specified channel and version range [source,yaml] ---- -apiVersion: olm.operatorframework.io/v1alpha1 +apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: pipelines-operator spec: - packageName: openshift-pipelines-operator-rh - installNamespace: - channel: latest - version: "<1.13" + namespace: pipelines + serviceAccount: + name: pipelines-installer + source: + sourceType: Catalog + catalog: + packageName: openshift-pipelines-operator-rh + channels: + - latest + version: "<1.16" ---- + For more information, see "Example custom resources (CRs) that specify a target version". @@ -253,24 +282,6 @@ $ oc apply -f pipelines-operator.yaml ---- clusterextension.olm.operatorframework.io/pipelines-operator configured ---- -+ -[TIP] -==== -You can patch and apply the changes to your CR from the CLI by running the following command: - -[source,terminal] ----- -$ oc patch clusterextension/pipelines-operator -p \ - '{"spec":{"version":"<1.13"}}' \ - --type=merge ----- - -.Example output -[source,text] ----- -clusterextension.olm.operatorframework.io/pipelines-operator patched ----- -==== .Verification @@ -286,67 +297,73 @@ $ oc get clusterextension pipelines-operator -o yaml ==== [source,yaml] ---- -apiVersion: olm.operatorframework.io/v1alpha1 +apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"olm.operatorframework.io/v1alpha1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","installNamespace":"openshift-operators","packageName":"openshift-pipelines-operator-rh","pollInterval":"30m","version":"\u003c1.13"}} - creationTimestamp: "2024-06-11T18:23:26Z" + {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipes"},"spec":{"namespace":"pipelines","serviceAccount":{"name":"pipelines-installer"},"source":{"catalog":{"packageName":"openshift-pipelines-operator-rh","version":"\u003c1.16"},"sourceType":"Catalog"}}} + creationTimestamp: "2025-02-18T21:48:13Z" + finalizers: + - olm.operatorframework.io/cleanup-unpack-cache + - olm.operatorframework.io/cleanup-contentmanager-cache generation: 2 - name: pipelines-operator - resourceVersion: "66310" - uid: ce0416ba-13ea-4069-a6c8-e5efcbc47537 + name: pipes + resourceVersion: "90693" + uid: e18b13fb-a96d-436f-be75-a9a0f2b07993 spec: - channel: latest - installNamespace: openshift-operators - packageName: openshift-pipelines-operator-rh - upgradeConstraintPolicy: Enforce - version: <1.13 + namespace: pipelines + serviceAccount: + name: pipelines-installer + source: + catalog: + packageName: openshift-pipelines-operator-rh + upgradeConstraintPolicy: CatalogProvided + version: <1.16 + sourceType: Catalog status: conditions: - - lastTransitionTime: "2024-06-11T18:23:33Z" - message: resolved to "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:814742c8a7cc7e2662598e114c35c13993a7b423cfe92548124e43ea5d469f82" - observedGeneration: 2 - reason: Success - status: "True" - type: Resolved - - lastTransitionTime: "2024-06-11T18:23:52Z" - message: installed from "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:814742c8a7cc7e2662598e114c35c13993a7b423cfe92548124e43ea5d469f82" - observedGeneration: 2 - reason: Success - status: "True" - type: Installed - - 
lastTransitionTime: "2024-06-11T18:23:33Z" + - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: Deprecated - - lastTransitionTime: "2024-06-11T18:23:33Z" + - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: PackageDeprecated - - lastTransitionTime: "2024-06-11T18:23:33Z" + - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: ChannelDeprecated - - lastTransitionTime: "2024-06-11T18:23:33Z" + - lastTransitionTime: "2025-02-18T21:48:13Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: BundleDeprecated - installedBundle: - name: openshift-pipelines-operator-rh.v1.12.2 - version: 1.12.2 - resolvedBundle: - name: openshift-pipelines-operator-rh.v1.12.2 - version: 1.12.2 + - lastTransitionTime: "2025-02-18T21:48:16Z" + message: Installed bundle registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:8a593c1144709c9aeffbeb68d0b4b08368f528e7bb6f595884b2474bcfbcafcd + successfully + observedGeneration: 2 + reason: Succeeded + status: "True" + type: Installed + - lastTransitionTime: "2025-02-18T21:48:16Z" + message: desired state reached + observedGeneration: 2 + reason: Succeeded + status: "True" + type: Progressing + install: + bundle: + name: openshift-pipelines-operator-rh.v1.15.2 + version: 1.15.2 ---- ==== @@ -364,61 +381,73 @@ $ oc get clusterextension -o yaml ==== [source,text] ---- -apiVersion: olm.operatorframework.io/v1alpha1 +apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"olm.operatorframework.io/v1alpha1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","installNamespace":"openshift-operators","packageName":"openshift-pipelines-operator-rh","pollInterval":"30m","version":"3.0"}} - creationTimestamp: "2024-06-11T18:23:26Z" + {"apiVersion":"olm.operatorframework.io/v1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipes"},"spec":{"namespace":"pipelines","serviceAccount":{"name":"pipelines-installer"},"source":{"catalog":{"packageName":"openshift-pipelines-operator-rh","version":"9.x"},"sourceType":"Catalog"}}} + creationTimestamp: "2025-02-18T21:48:13Z" + finalizers: + - olm.operatorframework.io/cleanup-unpack-cache + - olm.operatorframework.io/cleanup-contentmanager-cache generation: 3 - name: pipelines-operator - resourceVersion: "71852" - uid: ce0416ba-13ea-4069-a6c8-e5efcbc47537 + name: pipes + resourceVersion: "93334" + uid: e18b13fb-a96d-436f-be75-a9a0f2b07993 spec: - channel: latest - installNamespace: openshift-operators - packageName: openshift-pipelines-operator-rh - upgradeConstraintPolicy: Enforce - version: "3.0" + namespace: pipelines + serviceAccount: + name: pipelines-installer + source: + catalog: + packageName: openshift-pipelines-operator-rh + upgradeConstraintPolicy: CatalogProvided + version: 9.x + sourceType: Catalog status: conditions: - - lastTransitionTime: "2024-06-11T18:29:02Z" - message: 'error upgrading from currently installed version "1.12.2": no package - "openshift-pipelines-operator-rh" matching version "3.0" found in channel "latest"' - observedGeneration: 3 - reason: ResolutionFailed - status: "False" - type: Resolved - - lastTransitionTime: "2024-06-11T18:29:02Z" - message: installation has not been attempted as resolution 
failed - observedGeneration: 3 - reason: InstallationStatusUnknown - status: Unknown - type: Installed - - lastTransitionTime: "2024-06-11T18:29:02Z" - message: deprecation checks have not been attempted as resolution failed - observedGeneration: 3 + - lastTransitionTime: "2025-02-18T21:48:13Z" + message: "" + observedGeneration: 2 reason: Deprecated - status: Unknown + status: "False" type: Deprecated - - lastTransitionTime: "2024-06-11T18:29:02Z" - message: deprecation checks have not been attempted as resolution failed - observedGeneration: 3 + - lastTransitionTime: "2025-02-18T21:48:13Z" + message: "" + observedGeneration: 2 reason: Deprecated - status: Unknown + status: "False" type: PackageDeprecated - - lastTransitionTime: "2024-06-11T18:29:02Z" - message: deprecation checks have not been attempted as resolution failed - observedGeneration: 3 + - lastTransitionTime: "2025-02-18T21:48:13Z" + message: "" + observedGeneration: 2 reason: Deprecated - status: Unknown + status: "False" type: ChannelDeprecated - - lastTransitionTime: "2024-06-11T18:29:02Z" - message: deprecation checks have not been attempted as resolution failed - observedGeneration: 3 + - lastTransitionTime: "2025-02-18T21:48:13Z" + message: "" + observedGeneration: 2 reason: Deprecated - status: Unknown + status: "False" type: BundleDeprecated + - lastTransitionTime: "2025-02-18T21:48:16Z" + message: Installed bundle registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:8a593c1144709c9aeffbeb68d0b4b08368f528e7bb6f595884b2474bcfbcafcd + successfully + observedGeneration: 3 + reason: Succeeded + status: "True" + type: Installed + - lastTransitionTime: "2025-02-18T21:48:16Z" + message: 'error upgrading from currently installed version "1.15.2": no bundles + found for package "openshift-pipelines-operator-rh" matching version "9.x"' + observedGeneration: 3 + reason: Retrying + status: "True" + type: Progressing + install: + bundle: + name: openshift-pipelines-operator-rh.v1.15.2 + version: 1.15.2 ---- ==== diff --git a/operators/olm_v1/index.adoc b/operators/olm_v1/index.adoc index cd071a4186a1..eaeca4affad2 100644 --- a/operators/olm_v1/index.adoc +++ b/operators/olm_v1/index.adoc @@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[] toc::[] -{olm-first} has been included with {product-title} 4 since its initial release. {product-title} 4.17 includes components for a next-generation iteration of {olm} as a Technology Preview feature, known during this phase as _{olmv1}_. This updated framework evolves many of the concepts that have been part of previous versions of {olm} and adds new capabilities. +{olm-first} has been included with {product-title} 4 since its initial release. {product-title} 4.18 includes components for a next-generation iteration of {olm} as a Generally Available (GA) feature, known during this phase as _{olmv1}_. This updated framework evolves many of the concepts that have been part of previous versions of {olm} and adds new capabilities. 
Starting in {product-title} 4.17, documentation for {olmv1} has been moved to the following new guide: From 51a5deacd59ebeef37a8ac34c54d42d7471f9865 Mon Sep 17 00:00:00 2001 From: Jaromir Hradilek Date: Thu, 20 Feb 2025 14:56:16 +0100 Subject: [PATCH 312/669] CNV-47369: Updated the Getting Started page links --- modules/migrating-to-virt.adoc | 2 +- virt/getting_started/virt-getting-started.adoc | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/migrating-to-virt.adoc b/modules/migrating-to-virt.adoc index 80409b004e46..0c234845a35a 100644 --- a/modules/migrating-to-virt.adoc +++ b/modules/migrating-to-virt.adoc @@ -14,7 +14,7 @@ To migrate virtual machines from an external provider such as {vmw-first}, {rh-o ==== .Prerequisites -* The {mtv-full} Operator link:https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/{mtv-version}/html/installing_and_using_the_migration_toolkit_for_virtualization/installing-the-operator_mtv[is installed]. +* The {mtv-full} Operator link:https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/{mtv-version}/html/installing_and_using_the_migration_toolkit_for_virtualization/installing-the-operator_mtv#installing-the-operator_mtv[is installed]. .Procedure . link:https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/{mtv-version}/html/installing_and_using_the_migration_toolkit_for_virtualization/migrating-vms-web-console_mtv#adding-source-providers[Configure the source provider]. diff --git a/virt/getting_started/virt-getting-started.adoc b/virt/getting_started/virt-getting-started.adoc index ccbfab718d8a..863efa2ce757 100644 --- a/virt/getting_started/virt-getting-started.adoc +++ b/virt/getting_started/virt-getting-started.adoc @@ -38,7 +38,7 @@ Quick start tours are available for several {VirtProductName} features. To acces . Click the *Help* icon *?* in the menu bar on the header of the {product-title} web console. . Select *Quick Starts*. -You can filter the available tours by entering the keyword `virtualization` in the *Filter* field. +You can filter the available tours by entering the keyword `virtual` in the *Filter* field. endif::openshift-rosa,openshift-dedicated[] [id="planning-and-installing-virt_{context}"] From 6e38d1722b5786f29dbfdc812b250d86939e4fbd Mon Sep 17 00:00:00 2001 From: Steven Smith Date: Thu, 20 Feb 2025 12:10:33 -0500 Subject: [PATCH 313/669] Updates a few minor things for 418 GA in UDN docs --- modules/nw-cudn-best-practices.adoc | 4 ++-- modules/nw-cudn-cr.adoc | 14 ++++++------ modules/nw-udn-best-practices.adoc | 22 +++++++++---------- modules/nw-udn-cr.adoc | 2 +- modules/nw-udn-limitations.adoc | 4 ++-- .../opening-default-network-ports-udn.adoc | 2 +- .../about-user-defined-networks.adoc | 9 ++------ 7 files changed, 26 insertions(+), 31 deletions(-) diff --git a/modules/nw-cudn-best-practices.adoc b/modules/nw-cudn-best-practices.adoc index b9bb5f9b2cde..09f7ce158f14 100644 --- a/modules/nw-cudn-best-practices.adoc +++ b/modules/nw-cudn-best-practices.adoc @@ -24,8 +24,8 @@ Before setting up a `ClusterUserDefinedNetwork` custom resource (CR), users shou ** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a pod is created, the pod attaches itself to the default network. -** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `ClusterUserDefinedNetwork` CR is created that matches the namespace, the CUDN reports an error status and the network is not created. 
+** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `ClusterUserDefinedNetwork` CR is created that matches the namespace, an error is reported and the network is not created. -** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `CluserUserDefinedNetwork` CR already exists, a pod in the namespace is created and attached to the default network. +** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `ClusterUserDefinedNetwork` CR already exists, a pod in the namespace is created and attached to the default network. ** If the namespace _has_ the label, and a primary `ClusterUserDefinedNetwork` CR does not exist, a pod in the namespace is not created until the `ClusterUserDefinedNetwork` CR is created. \ No newline at end of file diff --git a/modules/nw-cudn-cr.adoc b/modules/nw-cudn-cr.adoc index fdad6c0fd93a..f618d007ec0f 100644 --- a/modules/nw-cudn-cr.adoc +++ b/modules/nw-cudn-cr.adoc @@ -4,13 +4,13 @@ :_mod-docs-content-type: PROCEDURE [id="nw-cudn-cr_{context}"] -= Creating a ClusterUserDefinedNetwork custom resource += Creating a ClusterUserDefinedNetwork CR -The following procedure creates a `ClusterUserDefinedNetwork` custom resource definition (CRD). Based upon your use case, create your request using either the `cluster-layer-two-udn.yaml` example for a `Layer2` topology type or the `cluster-layer-three-udn.yaml` example for a `Layer3` topology type. +The following procedure creates a `ClusterUserDefinedNetwork` custom resource (CR). Based upon your use case, create your request using either the `cluster-layer-two-udn.yaml` example for a `Layer2` topology type or the `cluster-layer-three-udn.yaml` example for a `Layer3` topology type. [IMPORTANT] ==== -* The `ClusterUserDefinedNetwork` CRD is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network. +* The `ClusterUserDefinedNetwork` CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network. * {VirtProductName} only supports the `Layer2` topology. ==== @@ -57,8 +57,8 @@ spec: - "2001:db8::/64" - "10.100.0.0/16" # <9> ---- -<1> Name of your `ClusterUserDefinedNetwork` custom resource. -<2> A label query over the set of namespaces that the cluster UDN applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces. +<1> Name of your `ClusterUserDefinedNetwork` CR. +<2> A label query over the set of namespaces that the cluster UDN CR applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces. <3> Uses the `matchLabels` selector type, where terms are evaluated with an `AND` relationship. <4> Because the `matchLabels` selector type is used, provisions namespaces matching both `` _and_ ``. <5> Describes the network configuration. @@ -94,9 +94,9 @@ spec: - cidr: 10.100.0.0/16 hostSubnet: 64 ---- -<1> Name of your `ClusterUserDefinedNetwork` custom resource. +<1> Name of your `ClusterUserDefinedNetwork` CR. <2> A label query over the set of namespaces that the cluster UDN applies to. Uses the standard Kubernetes `MatchLabel` selector. 
Must not point to `default` or `openshift-*` namespaces. -<3> Uses the `matchExpressions` selector type, where terms are evaluated with an _*OR*_ relationship. +<3> Uses the `matchExpressions` selector type, where terms are evaluated with an `OR` relationship. <4> Specifies the label key to match. <5> Specifies the operator. Valid values include: `In`, `NotIn`, `Exists`, and `DoesNotExist`. <6> Because the `matchExpressions` type is used, provisions namespaces matching either `` or ``. diff --git a/modules/nw-udn-best-practices.adoc b/modules/nw-udn-best-practices.adoc index 05b0288435e6..f84340030bf0 100644 --- a/modules/nw-udn-best-practices.adoc +++ b/modules/nw-udn-best-practices.adoc @@ -4,16 +4,16 @@ :_mod-docs-content-type: CONCEPT [id="considerations-for-udn_{context}"] -= Best practices for UserDefinedNetwork += Best practices for UserDefinedNetwork CRs -Before setting up a `UserDefinedNetwork` (UDN) resource, you should consider the following information: +Before setting up a `UserDefinedNetwork` custom resource (CR), you should consider the following information: //These will not go live till 4.18 GA //* To eliminate errors and ensure connectivity, you should create a namespace scoped UDN CR before creating any workload in the namespace. //* You might want to allow access to any Kubernetes services on the cluster default network. By default, KAPI and DNS are accessible. -* `openshift-*` namespaces should not be used to set up a UDN. +* `openshift-*` namespaces should not be used to set up a `UserDefinedNetwork` CR. * `UserDefinedNetwork` CRs should not be created in the default namespace. This can result in no isolation and, as a result, could introduce security risks to the cluster. @@ -21,29 +21,29 @@ Before setting up a `UserDefinedNetwork` (UDN) resource, you should consider the ** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a pod is created, the pod attaches itself to the default network. -** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary UDN CR is created that matches the namespace, the UDN reports an error status and the network is not created. +** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `UserDefinedNetwork` CR is created that matches the namespace, a status error is reported and the network is not created. -** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary UDN already exists, a pod in the namespace is created and attached to the default network. +** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `UserDefinedNetwork` CR already exists, a pod in the namespace is created and attached to the default network. -** If the namespace _has_ the label, and a primary UDN does not exist, a pod in the namespace is not created until the UDN is created. +** If the namespace _has_ the label, and a primary `UserDefinedNetwork` CR does not exist, a pod in the namespace is not created until the `UserDefinedNetwork` CR is created. * 2 masquerade IP addresses are required for user defined networks. You must reconfigure your masquerade subnet to be large enough to hold the required number of networks. + [IMPORTANT] ==== * For {product-title} 4.17 and later, clusters use `169.254.0.0/17` for IPv4 and `fd69::/112` for IPv6 as the default masquerade subnet. These ranges should be avoided by users. 
For updated clusters, there is no change to the default masquerade subnet.
-* Changing the cluster's masquerade subnet is unsupported after a user-defined network has been configured for a project. Attempting to modify the masquerade subnet after a UDN has been set up can disrupt the network connectivity and cause configuration issues.
+* Changing the cluster's masquerade subnet is unsupported after a user-defined network has been configured for a project. Attempting to modify the masquerade subnet after a `UserDefinedNetwork` CR has been set up can disrupt the network connectivity and cause configuration issues.
====

// May be something that is downstream only.
//* No active primary UDN managed pod can also be a candidate for `v1.multus-cni.io/default-network`

-* Ensure tenants are using the `UserDefinedNetwork` resource and not the `NetworkAttachmentDefinition` (NAD) resource. This can create security risks between tenants.
+* Ensure tenants are using the `UserDefinedNetwork` resource and not the `NetworkAttachmentDefinition` (NAD) CR. Using the NAD CR directly can create security risks between tenants.

-* When creating network segmentation, you should only use the NAD resource if user-defined network segmentation cannot be completed using the UDN resource.
+* When creating network segmentation, you should only use the `NetworkAttachmentDefinition` CR if user-defined network segmentation cannot be completed using the `UserDefinedNetwork` CR.

-* The cluster subnet and services CIDR for a UDN cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default network's join subnet, you must not use that value to configure a UDN `joinSubnets` field. If the default address values are used anywhere in the network for the cluster, you must override it by setting the `joinSubnets` field. For more information, see "Additional configuration details for a UserDefinedNetworks CR".
+* The cluster subnet and services CIDR for a `UserDefinedNetwork` CR cannot overlap with the default cluster subnet CIDR. The OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default network's join subnet, so you must not use that value to configure a `UserDefinedNetwork` CR `joinSubnets` field. If the default address values are used anywhere in the network for the cluster, you must override it by setting the `joinSubnets` field. For more information, see "Additional configuration details for a UserDefinedNetworks CR".

-* The cluster subnet and services CIDR for a UDN cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default join subnet for the network, you must not use that value to configure a UDN `joinSubnets` field. If the default address values are used anywhere in the network for the cluster you must override the default values by setting the `joinSubnets` field. For more information, see "Additional configuration details for a UserDefinedNetworks CR".
+* The cluster subnet and services CIDR for a `UserDefinedNetwork` CR cannot overlap with the default cluster subnet CIDR. The OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default join subnet for the network, so you must not use that value to configure a `UserDefinedNetwork` CR `joinSubnets` field. If the default address values are used anywhere in the network for the cluster, you must override the default values by setting the `joinSubnets` field. For more information, see "Additional configuration details for user-defined networks".
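++
+A minimal sketch of overriding the default join subnet on a primary `Layer3` network follows; the name, namespace, and CIDR values are illustrative and must not overlap with subnets that are already in use in your cluster:
+
+[source,yaml]
+----
+apiVersion: k8s.ovn.org/v1
+kind: UserDefinedNetwork
+metadata:
+  name: udn-join-example
+  namespace: example-namespace
+spec:
+  topology: Layer3
+  layer3:
+    role: Primary
+    joinSubnets:
+    - 100.65.0.0/16
+    subnets:
+    - cidr: 10.150.0.0/16
+----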
* A layer 2 topology creates a virtual switch that is distributed across all nodes in a cluster. Virtual machines and pods connect to this virtual switch so that all these components can communicate with each other within the same subnet. If you decide not to specify a layer 2 subnet, then you must manually configure IP addresses for each pod in your cluster. When not specifying a layer 2 subnet, port security is limited to preventing Media Access Control (MAC) spoofing only, and does not include IP spoofing. A layer 2 topology creates a single broadcast domain that can be challenging in large network environments, whereby the topology might cause a broadcast storm that can degrade network performance. diff --git a/modules/nw-udn-cr.adoc b/modules/nw-udn-cr.adoc index 474a40762a83..c92d3692509a 100644 --- a/modules/nw-udn-cr.adoc +++ b/modules/nw-udn-cr.adoc @@ -6,7 +6,7 @@ [id="nw-udn-cr_{context}"] = Creating a UserDefinedNetwork custom resource -The following procedure creates a user-defined network that is namespace scoped. Based upon your use case, create your request by using either the `my-layer-two-udn.yaml` example for a `Layer2` topology type or the `my-layer-three-udn.yaml` example for a `Layer3` topology type. +The following procedure creates a `UserDefinedNetwork` CR that is namespace scoped. Based upon your use case, create your request by using either the `my-layer-two-udn.yaml` example for a `Layer2` topology type or the `my-layer-three-udn.yaml` example for a `Layer3` topology type. //We won't have these pieces till GA in 4.18. //[NOTE] diff --git a/modules/nw-udn-limitations.adoc b/modules/nw-udn-limitations.adoc index 17d4408f0786..2b77982653d4 100644 --- a/modules/nw-udn-limitations.adoc +++ b/modules/nw-udn-limitations.adoc @@ -4,9 +4,9 @@ :_mod-docs-content-type: CONCEPT [id="limitations-for-udn_{context}"] -= Limitations for UserDefinedNetwork custom resource += Limitations of a user-defined network -While user-defined networks (UDN) offer highly customizable network configuration options, there are limitations that cluster administrators and developers should be aware of when implementing and managing these networks. Consider the following limitations before implementing a user-defined network. +While user-defined networks (UDN) offer highly customizable network configuration options, there are limitations that cluster administrators and developers should be aware of when implementing and managing these networks. Consider the following limitations before implementing a UDN. //Check on the removal of the DNS limitation for 4.18 or 4.17.z. * *DNS limitations*: diff --git a/modules/opening-default-network-ports-udn.adoc b/modules/opening-default-network-ports-udn.adoc index 7dd93e783fc2..7170f649a152 100644 --- a/modules/opening-default-network-ports-udn.adoc +++ b/modules/opening-default-network-ports-udn.adoc @@ -6,7 +6,7 @@ [id="opening-default-network-ports-udn_{context}"] = Opening default network ports on user-defined network pods -By default, pods on a user-defined network are isolated from the default network. This means that default network pods, such as those running monitoring services (Prometheus or Alertmanager) or the {product-title} image registry, cannot initiate connections to UDN pods. +By default, pods on a user-defined network (UDN) are isolated from the default network. 
This means that default network pods, such as those running monitoring services (Prometheus or Alertmanager) or the {product-title} image registry, cannot initiate connections to UDN pods. To allow default network pods to connect to a user-defined network pod, you can use the `k8s.ovn.org/open-default-ports` annotation. This annotation opens specific ports on the user-defined network pod for access from the default network. diff --git a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc index 128b1658acf8..3069e166250c 100644 --- a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc +++ b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc @@ -15,18 +15,13 @@ The following diagram shows four cluster namespaces, where each namespace has a image::527-OpenShift-UDN-isolation-012025.png[The namespace isolation concept in a user-defined network (UDN)] -[NOTE] -==== -Support for the Localnet topology on both primary and secondary networks will be added in a future version of {product-title}. -==== - A cluster administrator can use a user-defined network to create and define additional networks that span multiple namespaces at the cluster level by leveraging the `ClusterUserDefinedNetwork` custom resource (CR). Additionally, a cluster administrator or a cluster user can use a user-defined network to define additional networks at the namespace level with the `UserDefinedNetwork` CR. -The following diagram shows tenant isolation that a cluster administrator created by defining a `ClusterUserDefinedNetwork` (CR) for each tenant. This network configuration allows a network to span across many namespaces. In the diagram, the `udn-1` disconnected network selects `namespace-1` and `namespace-2`, while the `udn-2` disconnected network selects `namespace-3` and `namespace-4`. A tenant acts as a disconnected network that is isolated from other tenants' networks. Pods from a namespace can communicate with pods in another namespace only if those namespaces exist in the same tenant network. +The following diagram shows tenant isolation that a cluster administrator created by defining a `ClusterUserDefinedNetwork` CR for each tenant. This network configuration allows a network to span across many namespaces. In the diagram, the `udn-1` disconnected network selects `namespace-1` and `namespace-2`, while the `udn-2` disconnected network selects `namespace-3` and `namespace-4`. A tenant acts as a disconnected network that is isolated from other tenants' networks. Pods from a namespace can communicate with pods in another namespace only if those namespaces exist in the same tenant network. image::528-OpenShift-multitenant-0225.png[The tenant isolation concept in a user-defined network (UDN)] -The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a `ClusterUserDefinedNetwork` or `UserDefinedNetwork` custom resource, how to create the custom resource, and additional configuration details that might be relevant to your deployment. +The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a `ClusterUserDefinedNetwork` or `UserDefinedNetwork` CR, how to create the CR, and additional configuration details that might be relevant to your deployment. 
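+
+For example, the `udn-1` network from the diagram could be expressed as a `ClusterUserDefinedNetwork` CR that selects the two namespaces it spans. The following sketch uses illustrative names and an illustrative CIDR value:
+
+[source,yaml]
+----
+apiVersion: k8s.ovn.org/v1
+kind: ClusterUserDefinedNetwork
+metadata:
+  name: udn-1
+spec:
+  namespaceSelector:
+    matchExpressions:
+    - key: kubernetes.io/metadata.name
+      operator: In
+      values:
+      - namespace-1
+      - namespace-2
+  network:
+    topology: Layer2
+    layer2:
+      role: Primary
+      subnets:
+      - 10.100.0.0/16
+----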
//benefits of UDN include::modules/nw-udn-benefits.adoc[leveloffset=+1] From 0dd62e590a002606f4d32254658c1523c71dda36 Mon Sep 17 00:00:00 2001 From: Jeana Routh Date: Tue, 11 Feb 2025 15:55:44 -0500 Subject: [PATCH 314/669] OSDOCS-13365: multi-NIC support in Nutanix --- modules/cpmso-yaml-provider-spec-nutanix.adoc | 25 +++++++++++++++--- modules/machineset-yaml-nutanix.adoc | 26 +++++++++++-------- ...n-configuring-nutanix-failure-domains.adoc | 14 ++++++++-- 3 files changed, 49 insertions(+), 16 deletions(-) diff --git a/modules/cpmso-yaml-provider-spec-nutanix.adoc b/modules/cpmso-yaml-provider-spec-nutanix.adoc index a01b5fdea0fa..813c122acfb4 100644 --- a/modules/cpmso-yaml-provider-spec-nutanix.adoc +++ b/modules/cpmso-yaml-provider-spec-nutanix.adoc @@ -78,7 +78,7 @@ You must use the `Legacy` boot type in {product-title} {product-version}. ==== Clusters that use {product-title} version 4.15 or later can use failure domain configurations. -If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. +If the cluster uses a failure domain, configure this parameter in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. ==== <4> Specifies the secret name for the cluster. Do not change this value. @@ -86,13 +86,32 @@ If you specify this value in the provider specification when using failure domai <6> Specifies the cloud provider platform type. Do not change this value. <7> Specifies the memory allocated for the control plane machines. <8> Specifies the Nutanix project that you use for your cluster. In this example, the project type is `name`, so there is a `name` stanza. -<9> Specifies a subnet configuration. In this example, the subnet type is `uuid`, so there is a `uuid` stanza. +<9> Specify one or more Prism Element subnet objects. +In this example, the subnet type is `uuid`, so there is a `uuid` stanza. +A maximum of 32 subnets for each Prism Element failure domain in the cluster is supported. ++ +[IMPORTANT] +==== +The following known issues with configuring multiple subnets for an existing Nutanix cluster by using a control plane machine set exist in {product-title} version 4.18: + +* Adding subnets above the existing subnet in the `subnets` stanza causes a control plane node to become stuck in the `Deleting` state. +As a workaround, only add subnets below the existing subnet in the `subnets` stanza. + +* Sometimes, after adding a subnet, the updated control plane machines appear in the Nutanix console but the {product-title} cluster is unreachable. +There is no workaround for this issue. + +These issues occur on clusters that use a control plane machine set to configure subnets regardless of whether subnets are specified in a failure domain or the provider specification. +For more information, see link:https://issues.redhat.com/browse/OCPBUGS-50904[*OCPBUGS-50904*]. +==== ++ +The CIDR IP address prefix for one of the specified subnets must contain the virtual IP addresses that the {product-title} cluster uses. +All subnet UUID values must be unique. + [NOTE] ==== Clusters that use {product-title} version 4.15 or later can use failure domain configurations. -If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. +If the cluster uses a failure domain, configure this parameter in the failure domain. 
If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. ==== <10> Specifies the VM disk size for the control plane machines. diff --git a/modules/machineset-yaml-nutanix.adoc b/modules/machineset-yaml-nutanix.adoc index 0ec2248ad5a9..aa111b8f3a29 100644 --- a/modules/machineset-yaml-nutanix.adoc +++ b/modules/machineset-yaml-nutanix.adoc @@ -108,16 +108,16 @@ endif::infra[] project: <10> type: name name: - subnets: + subnets: <11> - type: uuid uuid: - systemDiskSize: 120Gi <11> + systemDiskSize: 120Gi <12> userDataSecret: - name: <12> - vcpuSockets: 4 <13> - vcpusPerSocket: 1 <14> + name: <13> + vcpuSockets: 4 <14> + vcpusPerSocket: 1 <15> ifdef::infra[] - taints: <15> + taints: <16> - key: node-role.kubernetes.io/infra effect: NoSchedule endif::infra[] @@ -143,12 +143,16 @@ You must use the `Legacy` boot type in {product-title} {product-version}. <8> Specify the image to use. Use an image from an existing default compute machine set for the cluster. <9> Specify the amount of memory for the cluster in Gi. <10> Specify the Nutanix project that you use for your cluster. In this example, the project type is `name`, so there is a `name` stanza. -<11> Specify the size of the system disk in Gi. -<12> Specify the name of the secret in the user data YAML file that is in the `openshift-machine-api` namespace. Use the value that installation program populates in the default compute machine set. -<13> Specify the number of vCPU sockets. -<14> Specify the number of vCPUs per socket. +<11> Specify one or more UUID for the Prism Element subnet object. +The CIDR IP address prefix for one of the specified subnets must contain the virtual IP addresses that the {product-title} cluster uses. +A maximum of 32 subnets for each Prism Element failure domain in the cluster is supported. +All subnet UUID values must be unique. +<12> Specify the size of the system disk in Gi. +<13> Specify the name of the secret in the user data YAML file that is in the `openshift-machine-api` namespace. Use the value that installation program populates in the default compute machine set. +<14> Specify the number of vCPU sockets. +<15> Specify the number of vCPUs per socket. ifdef::infra[] -<15> Specify a taint to prevent user workloads from being scheduled on infra nodes. +<16> Specify a taint to prevent user workloads from being scheduled on infra nodes. + [NOTE] ==== diff --git a/modules/post-installation-configuring-nutanix-failure-domains.adoc b/modules/post-installation-configuring-nutanix-failure-domains.adoc index 7a3b8c45eaf5..c3a031220341 100644 --- a/modules/post-installation-configuring-nutanix-failure-domains.adoc +++ b/modules/post-installation-configuring-nutanix-failure-domains.adoc @@ -10,7 +10,7 @@ You add failure domains to an existing Nutanix cluster by modifying its Infrastr [TIP] ==== -It is recommended that you configure three failure domains to ensure high-availability. +To ensure high-availability, configure three failure domains. ==== .Procedure @@ -62,6 +62,16 @@ where: ``:: Specifies the universally unique identifier (UUID) of the Prism Element. ``:: Specifies a unique name for the failure domain. The name is limited to 64 or fewer characters, which can include lower-case letters, digits, and a dash (`-`). The dash cannot be in the leading or ending position of the name. -``:: Specifies the UUID of the Prism Element subnet object. 
The subnet's IP address prefix (CIDR) should contain the virtual IP addresses that the {product-title} cluster uses. Only one subnet per failure domain (Prism Element) in an {product-title} cluster is supported. +``:: Specifies one or more UUID for the Prism Element subnet object. +The CIDR IP address prefix for one of the specified subnets must contain the virtual IP addresses that the {product-title} cluster uses. ++ +-- +:FeatureName: Configuring multiple subnets +include::snippets/technology-preview.adoc[] +-- ++ +To configure multiple subnets in the Infrastructure CR, you must enable the `NutanixMultiSubnets` feature gate. +A maximum of 32 subnets for each failure domain (Prism Element) in an {product-title} cluster is supported. +All subnet UUID values must be unique. . Save the CR to apply the changes. From 8c00df6bfde367d808215fc95445fb7b7849b08c Mon Sep 17 00:00:00 2001 From: Steven Smith <77019920+stevsmit@users.noreply.github.com> Date: Thu, 20 Feb 2025 13:20:23 -0500 Subject: [PATCH 315/669] Revert "Updates a few minor things for 418 GA in UDN docs" --- modules/nw-cudn-best-practices.adoc | 4 ++-- modules/nw-cudn-cr.adoc | 14 ++++++------ modules/nw-udn-best-practices.adoc | 22 +++++++++---------- modules/nw-udn-cr.adoc | 2 +- modules/nw-udn-limitations.adoc | 4 ++-- .../opening-default-network-ports-udn.adoc | 2 +- .../about-user-defined-networks.adoc | 9 ++++++-- 7 files changed, 31 insertions(+), 26 deletions(-) diff --git a/modules/nw-cudn-best-practices.adoc b/modules/nw-cudn-best-practices.adoc index 09f7ce158f14..b9bb5f9b2cde 100644 --- a/modules/nw-cudn-best-practices.adoc +++ b/modules/nw-cudn-best-practices.adoc @@ -24,8 +24,8 @@ Before setting up a `ClusterUserDefinedNetwork` custom resource (CR), users shou ** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a pod is created, the pod attaches itself to the default network. -** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `ClusterUserDefinedNetwork` CR is created that matches the namespace, an error is reported and the network is not created. +** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `ClusterUserDefinedNetwork` CR is created that matches the namespace, the CUDN reports an error status and the network is not created. -** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `ClusterUserDefinedNetwork` CR already exists, a pod in the namespace is created and attached to the default network. +** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `CluserUserDefinedNetwork` CR already exists, a pod in the namespace is created and attached to the default network. ** If the namespace _has_ the label, and a primary `ClusterUserDefinedNetwork` CR does not exist, a pod in the namespace is not created until the `ClusterUserDefinedNetwork` CR is created. \ No newline at end of file diff --git a/modules/nw-cudn-cr.adoc b/modules/nw-cudn-cr.adoc index f618d007ec0f..fdad6c0fd93a 100644 --- a/modules/nw-cudn-cr.adoc +++ b/modules/nw-cudn-cr.adoc @@ -4,13 +4,13 @@ :_mod-docs-content-type: PROCEDURE [id="nw-cudn-cr_{context}"] -= Creating a ClusterUserDefinedNetwork CR += Creating a ClusterUserDefinedNetwork custom resource -The following procedure creates a `ClusterUserDefinedNetwork` custom resource (CR). 
Based upon your use case, create your request using either the `cluster-layer-two-udn.yaml` example for a `Layer2` topology type or the `cluster-layer-three-udn.yaml` example for a `Layer3` topology type. +The following procedure creates a `ClusterUserDefinedNetwork` custom resource definition (CRD). Based upon your use case, create your request using either the `cluster-layer-two-udn.yaml` example for a `Layer2` topology type or the `cluster-layer-three-udn.yaml` example for a `Layer3` topology type. [IMPORTANT] ==== -* The `ClusterUserDefinedNetwork` CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network. +* The `ClusterUserDefinedNetwork` CRD is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network. * {VirtProductName} only supports the `Layer2` topology. ==== @@ -57,8 +57,8 @@ spec: - "2001:db8::/64" - "10.100.0.0/16" # <9> ---- -<1> Name of your `ClusterUserDefinedNetwork` CR. -<2> A label query over the set of namespaces that the cluster UDN CR applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces. +<1> Name of your `ClusterUserDefinedNetwork` custom resource. +<2> A label query over the set of namespaces that the cluster UDN applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces. <3> Uses the `matchLabels` selector type, where terms are evaluated with an `AND` relationship. <4> Because the `matchLabels` selector type is used, provisions namespaces matching both `` _and_ ``. <5> Describes the network configuration. @@ -94,9 +94,9 @@ spec: - cidr: 10.100.0.0/16 hostSubnet: 64 ---- -<1> Name of your `ClusterUserDefinedNetwork` CR. +<1> Name of your `ClusterUserDefinedNetwork` custom resource. <2> A label query over the set of namespaces that the cluster UDN applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces. -<3> Uses the `matchExpressions` selector type, where terms are evaluated with an `OR` relationship. +<3> Uses the `matchExpressions` selector type, where terms are evaluated with an _*OR*_ relationship. <4> Specifies the label key to match. <5> Specifies the operator. Valid values include: `In`, `NotIn`, `Exists`, and `DoesNotExist`. <6> Because the `matchExpressions` type is used, provisions namespaces matching either `` or ``. diff --git a/modules/nw-udn-best-practices.adoc b/modules/nw-udn-best-practices.adoc index f84340030bf0..05b0288435e6 100644 --- a/modules/nw-udn-best-practices.adoc +++ b/modules/nw-udn-best-practices.adoc @@ -4,16 +4,16 @@ :_mod-docs-content-type: CONCEPT [id="considerations-for-udn_{context}"] -= Best practices for UserDefinedNetwork CRs += Best practices for UserDefinedNetwork -Before setting up a `UserDefinedNetwork` custom resource (CR), you should consider the following information: +Before setting up a `UserDefinedNetwork` (UDN) resource, you should consider the following information: //These will not go live till 4.18 GA //* To eliminate errors and ensure connectivity, you should create a namespace scoped UDN CR before creating any workload in the namespace. 
//* You might want to allow access to any Kubernetes services on the cluster default network. By default, KAPI and DNS are accessible. -* `openshift-*` namespaces should not be used to set up a `UserDefinedNetwork` CR. +* `openshift-*` namespaces should not be used to set up a UDN. * `UserDefinedNetwork` CRs should not be created in the default namespace. This can result in no isolation and, as a result, could introduce security risks to the cluster. @@ -21,29 +21,29 @@ Before setting up a `UserDefinedNetwork` custom resource (CR), you should consid ** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a pod is created, the pod attaches itself to the default network. -** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `UserDefinedNetwork` CR is created that matches the namespace, a status error is reported and the network is not created. +** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary UDN CR is created that matches the namespace, the UDN reports an error status and the network is not created. -** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `UserDefinedNetwork` CR already exists, a pod in the namespace is created and attached to the default network. +** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary UDN already exists, a pod in the namespace is created and attached to the default network. -** If the namespace _has_ the label, and a primary `UserDefinedNetwork` CR does not exist, a pod in the namespace is not created until the `UserDefinedNetwork` CR is created. +** If the namespace _has_ the label, and a primary UDN does not exist, a pod in the namespace is not created until the UDN is created. * 2 masquerade IP addresses are required for user defined networks. You must reconfigure your masquerade subnet to be large enough to hold the required number of networks. + [IMPORTANT] ==== * For {product-title} 4.17 and later, clusters use `169.254.0.0/17` for IPv4 and `fd69::/112` for IPv6 as the default masquerade subnet. These ranges should be avoided by users. For updated clusters, there is no change to the default masquerade subnet. -* Changing the cluster's masquerade subnet is unsupported after a user-defined network has been configured for a project. Attempting to modify the masquerade subnet after a `UserDefinedNetwork` CR has been set up can disrupt the network connectivity and cause configuration issues. +* Changing the cluster's masquerade subnet is unsupported after a user-defined network has been configured for a project. Attempting to modify the masquerade subnet after a UDN has been set up can disrupt the network connectivity and cause configuration issues. ==== // May be something that is downstream only. //* No active primary UDN managed pod can also be a candidate for `v1.multus-cni.io/default-network` -* Ensure tenants are using the `UserDefinedNetwork` resource and not the `NetworkAttachmentDefinition` (NAD) CR. This can create security risks between tenants. +* Ensure tenants are using the `UserDefinedNetwork` resource and not the `NetworkAttachmentDefinition` (NAD) resource. This can create security risks between tenants. -* When creating network segmentation, you should only use the `NetworkAttachmentDefinition` CR if user-defined network segmentation cannot be completed using the `UserDefinedNetwork` CR. 
+* When creating network segmentation, you should only use the NAD resource if user-defined network segmentation cannot be completed using the UDN resource. -* The cluster subnet and services CIDR for a `UserDefinedNetwork` CR cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default network's join subnet, you must not use that value to configure a `UserDefinedNetwork` CR `joinSubnets` field. If the default address values are used anywhere in the network for the cluster, you must override it by setting the `joinSubnets` field. For more information, see "Additional configuration details for a UserDefinedNetworks CR". +* The cluster subnet and services CIDR for a UDN cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default network's join subnet, you must not use that value to configure a UDN `joinSubnets` field. If the default address values are used anywhere in the network for the cluster, you must override it by setting the `joinSubnets` field. For more information, see "Additional configuration details for a UserDefinedNetworks CR". -* The cluster subnet and services CIDR for a `UserDefinedNetwork` CR cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default join subnet for the network, you must not use that value to configure a `UserDefinedNetwork` CR `joinSubnets` field. If the default address values are used anywhere in the network for the cluster you must override the default values by setting the `joinSubnets` field. For more information, see "Additional configuration details for user-defined networks". +* The cluster subnet and services CIDR for a UDN cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default join subnet for the network, you must not use that value to configure a UDN `joinSubnets` field. If the default address values are used anywhere in the network for the cluster you must override the default values by setting the `joinSubnets` field. For more information, see "Additional configuration details for a UserDefinedNetworks CR". * A layer 2 topology creates a virtual switch that is distributed across all nodes in a cluster. Virtual machines and pods connect to this virtual switch so that all these components can communicate with each other within the same subnet. If you decide not to specify a layer 2 subnet, then you must manually configure IP addresses for each pod in your cluster. When not specifying a layer 2 subnet, port security is limited to preventing Media Access Control (MAC) spoofing only, and does not include IP spoofing. A layer 2 topology creates a single broadcast domain that can be challenging in large network environments, whereby the topology might cause a broadcast storm that can degrade network performance. diff --git a/modules/nw-udn-cr.adoc b/modules/nw-udn-cr.adoc index c92d3692509a..474a40762a83 100644 --- a/modules/nw-udn-cr.adoc +++ b/modules/nw-udn-cr.adoc @@ -6,7 +6,7 @@ [id="nw-udn-cr_{context}"] = Creating a UserDefinedNetwork custom resource -The following procedure creates a `UserDefinedNetwork` CR that is namespace scoped. Based upon your use case, create your request by using either the `my-layer-two-udn.yaml` example for a `Layer2` topology type or the `my-layer-three-udn.yaml` example for a `Layer3` topology type. +The following procedure creates a user-defined network that is namespace scoped. 
Based upon your use case, create your request by using either the `my-layer-two-udn.yaml` example for a `Layer2` topology type or the `my-layer-three-udn.yaml` example for a `Layer3` topology type. //We won't have these pieces till GA in 4.18. //[NOTE] diff --git a/modules/nw-udn-limitations.adoc b/modules/nw-udn-limitations.adoc index 2b77982653d4..17d4408f0786 100644 --- a/modules/nw-udn-limitations.adoc +++ b/modules/nw-udn-limitations.adoc @@ -4,9 +4,9 @@ :_mod-docs-content-type: CONCEPT [id="limitations-for-udn_{context}"] -= Limitations of a user-defined network += Limitations for UserDefinedNetwork custom resource -While user-defined networks (UDN) offer highly customizable network configuration options, there are limitations that cluster administrators and developers should be aware of when implementing and managing these networks. Consider the following limitations before implementing a UDN. +While user-defined networks (UDN) offer highly customizable network configuration options, there are limitations that cluster administrators and developers should be aware of when implementing and managing these networks. Consider the following limitations before implementing a user-defined network. //Check on the removal of the DNS limitation for 4.18 or 4.17.z. * *DNS limitations*: diff --git a/modules/opening-default-network-ports-udn.adoc b/modules/opening-default-network-ports-udn.adoc index 7170f649a152..7dd93e783fc2 100644 --- a/modules/opening-default-network-ports-udn.adoc +++ b/modules/opening-default-network-ports-udn.adoc @@ -6,7 +6,7 @@ [id="opening-default-network-ports-udn_{context}"] = Opening default network ports on user-defined network pods -By default, pods on a user-defined network (UDN) are isolated from the default network. This means that default network pods, such as those running monitoring services (Prometheus or Alertmanager) or the {product-title} image registry, cannot initiate connections to UDN pods. +By default, pods on a user-defined network are isolated from the default network. This means that default network pods, such as those running monitoring services (Prometheus or Alertmanager) or the {product-title} image registry, cannot initiate connections to UDN pods. To allow default network pods to connect to a user-defined network pod, you can use the `k8s.ovn.org/open-default-ports` annotation. This annotation opens specific ports on the user-defined network pod for access from the default network. diff --git a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc index 3069e166250c..128b1658acf8 100644 --- a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc +++ b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc @@ -15,13 +15,18 @@ The following diagram shows four cluster namespaces, where each namespace has a image::527-OpenShift-UDN-isolation-012025.png[The namespace isolation concept in a user-defined network (UDN)] +[NOTE] +==== +Support for the Localnet topology on both primary and secondary networks will be added in a future version of {product-title}. +==== + A cluster administrator can use a user-defined network to create and define additional networks that span multiple namespaces at the cluster level by leveraging the `ClusterUserDefinedNetwork` custom resource (CR). 
Additionally, a cluster administrator or a cluster user can use a user-defined network to define additional networks at the namespace level with the `UserDefinedNetwork` CR. -The following diagram shows tenant isolation that a cluster administrator created by defining a `ClusterUserDefinedNetwork` CR for each tenant. This network configuration allows a network to span across many namespaces. In the diagram, the `udn-1` disconnected network selects `namespace-1` and `namespace-2`, while the `udn-2` disconnected network selects `namespace-3` and `namespace-4`. A tenant acts as a disconnected network that is isolated from other tenants' networks. Pods from a namespace can communicate with pods in another namespace only if those namespaces exist in the same tenant network. +The following diagram shows tenant isolation that a cluster administrator created by defining a `ClusterUserDefinedNetwork` (CR) for each tenant. This network configuration allows a network to span across many namespaces. In the diagram, the `udn-1` disconnected network selects `namespace-1` and `namespace-2`, while the `udn-2` disconnected network selects `namespace-3` and `namespace-4`. A tenant acts as a disconnected network that is isolated from other tenants' networks. Pods from a namespace can communicate with pods in another namespace only if those namespaces exist in the same tenant network. image::528-OpenShift-multitenant-0225.png[The tenant isolation concept in a user-defined network (UDN)] -The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a `ClusterUserDefinedNetwork` or `UserDefinedNetwork` CR, how to create the CR, and additional configuration details that might be relevant to your deployment. +The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a `ClusterUserDefinedNetwork` or `UserDefinedNetwork` custom resource, how to create the custom resource, and additional configuration details that might be relevant to your deployment. 
//benefits of UDN include::modules/nw-udn-benefits.adoc[leveloffset=+1] From f94cc8039a766043ca62042ba6350c1b86edb5b5 Mon Sep 17 00:00:00 2001 From: ashiot Date: Thu, 20 Feb 2025 21:47:51 +0530 Subject: [PATCH 316/669] OBSDOCS-1693: Remove Logging 6.0 docs from OpenShift 4.18 --- _topic_maps/_topic_map.yml | 17 ----------------- .../core/telco-core-ref-design-components.adoc | 3 ++- .../ran/telco-ran-ref-du-components.adoc | 3 ++- 3 files changed, 4 insertions(+), 19 deletions(-) diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index e96331026de7..0e46fc4e302a 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -3005,23 +3005,6 @@ Topics: # File: logging-5-8-release-notes # - Name: Logging 5.7 # File: logging-5-7-release-notes - - Name: Logging 6.0 - Dir: logging-6.0 - Topics: - - Name: Release notes - File: log6x-release-notes - - Name: About logging 6.0 - File: log6x-about - - Name: Upgrading to Logging 6.0 - File: log6x-upgrading-to-6 - - Name: Configuring log forwarding - File: log6x-clf - - Name: Configuring LokiStack storage - File: log6x-loki - - Name: Visualization for logging - File: log6x-visual -# - Name: API reference 6.0 -# File: log6x-api-reference - Name: Logging 6.1 Dir: logging-6.1 Topics: diff --git a/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc b/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc index 370320853849..00e79e8752a8 100644 --- a/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc +++ b/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc @@ -69,7 +69,8 @@ include::modules/telco-core-logging.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../../observability/logging/logging-6.0/log6x-about.adoc#log6x-about[About logging] +//* xref:../../../observability/logging/logging-6.0/log6x-about.adoc#log6x-about[About logging] +* link:https://docs.openshift.com/container-platform/4.17/observability/logging/logging-6.0/log6x-about.html[About logging] include::modules/telco-core-power-management.adoc[leveloffset=+1] diff --git a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc b/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc index 49126d9665f3..d391ef0388fc 100644 --- a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc +++ b/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc @@ -48,7 +48,8 @@ include::modules/telco-ran-logging.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* xref:../../../observability/logging/logging-6.0/log6x-about.adoc#log6x-about[About logging] +//* xref:../../../observability/logging/logging-6.0/log6x-about.adoc#log6x-about[About logging] +* link:https://docs.openshift.com/container-platform/4.17/observability/logging/logging-6.0/log6x-about.html[About logging] include::modules/telco-ran-sriov-fec-operator.adoc[leveloffset=+1] From 663a0a7c7d6e4959cc19c2a89492d14e7fbd82cd Mon Sep 17 00:00:00 2001 From: Michael Burke Date: Thu, 20 Feb 2025 12:45:55 -0500 Subject: [PATCH 317/669] MCO removing extensions from OCL docs --- machine_configuration/mco-coreos-layering.adoc | 3 +++ 1 file changed, 3 insertions(+) diff --git a/machine_configuration/mco-coreos-layering.adoc b/machine_configuration/mco-coreos-layering.adoc index 
1fdaf2ffbcf3..23ea4d47ead1 100644 --- a/machine_configuration/mco-coreos-layering.adoc +++ b/machine_configuration/mco-coreos-layering.adoc @@ -191,11 +191,14 @@ include::modules/coreos-layering-configuring-on-modifying.adoc[leveloffset=+2] .Additional resources * xref:../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-pause_update-using-custom-machine-config-pools[Pausing the machine config pools] +//// +Hiding extensions, not in 4.18. Maybe 4.19? include::modules/coreos-layering-configuring-on-extensions.adoc[leveloffset=+2] .Additional resources * xref:../machine_configuration/machine-configs-configure.html#rhcos-add-extensions_machine-configs-configure[Adding extensions to RHCOS] * xref:../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-pause_update-using-custom-machine-config-pools[Pausing the machine config pools] +//// // Not in 4.18; maybe 4.19 // include::modules/coreos-layering-configuring-on-rebuild.adoc[leveloffset=+2] From b03deeab375c26714ae5154761621eabb2439e51 Mon Sep 17 00:00:00 2001 From: Steven Smith Date: Thu, 20 Feb 2025 12:10:33 -0500 Subject: [PATCH 318/669] Updates a few minor things for 418 GA in UDN docs --- modules/nw-cudn-best-practices.adoc | 4 ++-- modules/nw-cudn-cr.adoc | 14 ++++++------ modules/nw-udn-best-practices.adoc | 22 +++++++++---------- modules/nw-udn-cr.adoc | 2 +- modules/nw-udn-limitations.adoc | 4 ++-- .../opening-default-network-ports-udn.adoc | 2 +- .../about-user-defined-networks.adoc | 9 ++------ 7 files changed, 25 insertions(+), 32 deletions(-) diff --git a/modules/nw-cudn-best-practices.adoc b/modules/nw-cudn-best-practices.adoc index b9bb5f9b2cde..09f7ce158f14 100644 --- a/modules/nw-cudn-best-practices.adoc +++ b/modules/nw-cudn-best-practices.adoc @@ -24,8 +24,8 @@ Before setting up a `ClusterUserDefinedNetwork` custom resource (CR), users shou ** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a pod is created, the pod attaches itself to the default network. -** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `ClusterUserDefinedNetwork` CR is created that matches the namespace, the CUDN reports an error status and the network is not created. +** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `ClusterUserDefinedNetwork` CR is created that matches the namespace, an error is reported and the network is not created. -** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `CluserUserDefinedNetwork` CR already exists, a pod in the namespace is created and attached to the default network. +** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `ClusterUserDefinedNetwork` CR already exists, a pod in the namespace is created and attached to the default network. ** If the namespace _has_ the label, and a primary `ClusterUserDefinedNetwork` CR does not exist, a pod in the namespace is not created until the `ClusterUserDefinedNetwork` CR is created. 
\ No newline at end of file diff --git a/modules/nw-cudn-cr.adoc b/modules/nw-cudn-cr.adoc index fdad6c0fd93a..f618d007ec0f 100644 --- a/modules/nw-cudn-cr.adoc +++ b/modules/nw-cudn-cr.adoc @@ -4,13 +4,13 @@ :_mod-docs-content-type: PROCEDURE [id="nw-cudn-cr_{context}"] -= Creating a ClusterUserDefinedNetwork custom resource += Creating a ClusterUserDefinedNetwork CR -The following procedure creates a `ClusterUserDefinedNetwork` custom resource definition (CRD). Based upon your use case, create your request using either the `cluster-layer-two-udn.yaml` example for a `Layer2` topology type or the `cluster-layer-three-udn.yaml` example for a `Layer3` topology type. +The following procedure creates a `ClusterUserDefinedNetwork` custom resource (CR). Based upon your use case, create your request using either the `cluster-layer-two-udn.yaml` example for a `Layer2` topology type or the `cluster-layer-three-udn.yaml` example for a `Layer3` topology type. [IMPORTANT] ==== -* The `ClusterUserDefinedNetwork` CRD is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network. +* The `ClusterUserDefinedNetwork` CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network. * {VirtProductName} only supports the `Layer2` topology. ==== @@ -57,8 +57,8 @@ spec: - "2001:db8::/64" - "10.100.0.0/16" # <9> ---- -<1> Name of your `ClusterUserDefinedNetwork` custom resource. -<2> A label query over the set of namespaces that the cluster UDN applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces. +<1> Name of your `ClusterUserDefinedNetwork` CR. +<2> A label query over the set of namespaces that the cluster UDN CR applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces. <3> Uses the `matchLabels` selector type, where terms are evaluated with an `AND` relationship. <4> Because the `matchLabels` selector type is used, provisions namespaces matching both `` _and_ ``. <5> Describes the network configuration. @@ -94,9 +94,9 @@ spec: - cidr: 10.100.0.0/16 hostSubnet: 64 ---- -<1> Name of your `ClusterUserDefinedNetwork` custom resource. +<1> Name of your `ClusterUserDefinedNetwork` CR. <2> A label query over the set of namespaces that the cluster UDN applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces. -<3> Uses the `matchExpressions` selector type, where terms are evaluated with an _*OR*_ relationship. +<3> Uses the `matchExpressions` selector type, where terms are evaluated with an `OR` relationship. <4> Specifies the label key to match. <5> Specifies the operator. Valid values include: `In`, `NotIn`, `Exists`, and `DoesNotExist`. <6> Because the `matchExpressions` type is used, provisions namespaces matching either `` or ``. 
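The examples in the preceding module create the network. As a minimal verification sketch, assuming an illustrative CR name of `my-cudn`, you can inspect the status of the CR after you apply it:

[source,terminal]
----
$ oc get clusteruserdefinednetwork my-cudn -o yaml
----

If the controller accepted the CR, the conditions in the `status` stanza indicate whether the network was created.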
diff --git a/modules/nw-udn-best-practices.adoc b/modules/nw-udn-best-practices.adoc index 05b0288435e6..98882df02494 100644 --- a/modules/nw-udn-best-practices.adoc +++ b/modules/nw-udn-best-practices.adoc @@ -4,16 +4,16 @@ :_mod-docs-content-type: CONCEPT [id="considerations-for-udn_{context}"] -= Best practices for UserDefinedNetwork += Best practices for UserDefinedNetwork CRs -Before setting up a `UserDefinedNetwork` (UDN) resource, you should consider the following information: +Before setting up a `UserDefinedNetwork` custom resource (CR), you should consider the following information: //These will not go live till 4.18 GA //* To eliminate errors and ensure connectivity, you should create a namespace scoped UDN CR before creating any workload in the namespace. //* You might want to allow access to any Kubernetes services on the cluster default network. By default, KAPI and DNS are accessible. -* `openshift-*` namespaces should not be used to set up a UDN. +* `openshift-*` namespaces should not be used to set up a `UserDefinedNetwork` CR. * `UserDefinedNetwork` CRs should not be created in the default namespace. This can result in no isolation and, as a result, could introduce security risks to the cluster. @@ -21,29 +21,27 @@ Before setting up a `UserDefinedNetwork` (UDN) resource, you should consider the ** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a pod is created, the pod attaches itself to the default network. -** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary UDN CR is created that matches the namespace, the UDN reports an error status and the network is not created. +** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `UserDefinedNetwork` CR is created that matches the namespace, a status error is reported and the network is not created. -** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary UDN already exists, a pod in the namespace is created and attached to the default network. +** If the namespace is missing the `k8s.ovn.org/primary-user-defined-network` label and a primary `UserDefinedNetwork` CR already exists, a pod in the namespace is created and attached to the default network. -** If the namespace _has_ the label, and a primary UDN does not exist, a pod in the namespace is not created until the UDN is created. +** If the namespace _has_ the label, and a primary `UserDefinedNetwork` CR does not exist, a pod in the namespace is not created until the `UserDefinedNetwork` CR is created. * 2 masquerade IP addresses are required for user defined networks. You must reconfigure your masquerade subnet to be large enough to hold the required number of networks. + [IMPORTANT] ==== * For {product-title} 4.17 and later, clusters use `169.254.0.0/17` for IPv4 and `fd69::/112` for IPv6 as the default masquerade subnet. These ranges should be avoided by users. For updated clusters, there is no change to the default masquerade subnet. -* Changing the cluster's masquerade subnet is unsupported after a user-defined network has been configured for a project. Attempting to modify the masquerade subnet after a UDN has been set up can disrupt the network connectivity and cause configuration issues. +* Changing the cluster's masquerade subnet is unsupported after a user-defined network has been configured for a project. 
Attempting to modify the masquerade subnet after a `UserDefinedNetwork` CR has been set up can disrupt the network connectivity and cause configuration issues. ==== // May be something that is downstream only. //* No active primary UDN managed pod can also be a candidate for `v1.multus-cni.io/default-network` -* Ensure tenants are using the `UserDefinedNetwork` resource and not the `NetworkAttachmentDefinition` (NAD) resource. This can create security risks between tenants. +* Ensure tenants are using the `UserDefinedNetwork` resource and not the `NetworkAttachmentDefinition` (NAD) CR. This can create security risks between tenants. -* When creating network segmentation, you should only use the NAD resource if user-defined network segmentation cannot be completed using the UDN resource. +* When creating network segmentation, you should only use the `NetworkAttachmentDefinition` CR if user-defined network segmentation cannot be completed using the `UserDefinedNetwork` CR. -* The cluster subnet and services CIDR for a UDN cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default network's join subnet, you must not use that value to configure a UDN `joinSubnets` field. If the default address values are used anywhere in the network for the cluster, you must override it by setting the `joinSubnets` field. For more information, see "Additional configuration details for a UserDefinedNetworks CR". - -* The cluster subnet and services CIDR for a UDN cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default join subnet for the network, you must not use that value to configure a UDN `joinSubnets` field. If the default address values are used anywhere in the network for the cluster you must override the default values by setting the `joinSubnets` field. For more information, see "Additional configuration details for a UserDefinedNetworks CR". +* The cluster subnet and services CIDR for a `UserDefinedNetwork` CR cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses `100.64.0.0/16` as the default join subnet for the network. You must not use that value to configure a `UserDefinedNetwork` CR's `joinSubnets` field. If the default address values are used anywhere in the network for the cluster you must override the default values by setting the `joinSubnets` field. For more information, see "Additional configuration details for user-defined networks". * A layer 2 topology creates a virtual switch that is distributed across all nodes in a cluster. Virtual machines and pods connect to this virtual switch so that all these components can communicate with each other within the same subnet. If you decide not to specify a layer 2 subnet, then you must manually configure IP addresses for each pod in your cluster. When not specifying a layer 2 subnet, port security is limited to preventing Media Access Control (MAC) spoofing only, and does not include IP spoofing. A layer 2 topology creates a single broadcast domain that can be challenging in large network environments, whereby the topology might cause a broadcast storm that can degrade network performance. 
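A minimal sketch of the `k8s.ovn.org/primary-user-defined-network` namespace label that the preceding best practices reference, assuming an illustrative namespace name of `example-udn-ns` and an empty label value:

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: example-udn-ns
  labels:
    k8s.ovn.org/primary-user-defined-network: ""
----

Apply the label when you create the namespace, before any workloads exist in it, so that pods attach to the primary user-defined network rather than to the default network.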
diff --git a/modules/nw-udn-cr.adoc b/modules/nw-udn-cr.adoc index 474a40762a83..c92d3692509a 100644 --- a/modules/nw-udn-cr.adoc +++ b/modules/nw-udn-cr.adoc @@ -6,7 +6,7 @@ [id="nw-udn-cr_{context}"] = Creating a UserDefinedNetwork custom resource -The following procedure creates a user-defined network that is namespace scoped. Based upon your use case, create your request by using either the `my-layer-two-udn.yaml` example for a `Layer2` topology type or the `my-layer-three-udn.yaml` example for a `Layer3` topology type. +The following procedure creates a `UserDefinedNetwork` CR that is namespace scoped. Based upon your use case, create your request by using either the `my-layer-two-udn.yaml` example for a `Layer2` topology type or the `my-layer-three-udn.yaml` example for a `Layer3` topology type. //We won't have these pieces till GA in 4.18. //[NOTE] diff --git a/modules/nw-udn-limitations.adoc b/modules/nw-udn-limitations.adoc index 17d4408f0786..2b77982653d4 100644 --- a/modules/nw-udn-limitations.adoc +++ b/modules/nw-udn-limitations.adoc @@ -4,9 +4,9 @@ :_mod-docs-content-type: CONCEPT [id="limitations-for-udn_{context}"] -= Limitations for UserDefinedNetwork custom resource += Limitations of a user-defined network -While user-defined networks (UDN) offer highly customizable network configuration options, there are limitations that cluster administrators and developers should be aware of when implementing and managing these networks. Consider the following limitations before implementing a user-defined network. +While user-defined networks (UDN) offer highly customizable network configuration options, there are limitations that cluster administrators and developers should be aware of when implementing and managing these networks. Consider the following limitations before implementing a UDN. //Check on the removal of the DNS limitation for 4.18 or 4.17.z. * *DNS limitations*: diff --git a/modules/opening-default-network-ports-udn.adoc b/modules/opening-default-network-ports-udn.adoc index 7dd93e783fc2..7170f649a152 100644 --- a/modules/opening-default-network-ports-udn.adoc +++ b/modules/opening-default-network-ports-udn.adoc @@ -6,7 +6,7 @@ [id="opening-default-network-ports-udn_{context}"] = Opening default network ports on user-defined network pods -By default, pods on a user-defined network are isolated from the default network. This means that default network pods, such as those running monitoring services (Prometheus or Alertmanager) or the {product-title} image registry, cannot initiate connections to UDN pods. +By default, pods on a user-defined network (UDN) are isolated from the default network. This means that default network pods, such as those running monitoring services (Prometheus or Alertmanager) or the {product-title} image registry, cannot initiate connections to UDN pods. To allow default network pods to connect to a user-defined network pod, you can use the `k8s.ovn.org/open-default-ports` annotation. This annotation opens specific ports on the user-defined network pod for access from the default network. 
diff --git a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc index 128b1658acf8..3069e166250c 100644 --- a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc +++ b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc @@ -15,18 +15,13 @@ The following diagram shows four cluster namespaces, where each namespace has a image::527-OpenShift-UDN-isolation-012025.png[The namespace isolation concept in a user-defined network (UDN)] -[NOTE] -==== -Support for the Localnet topology on both primary and secondary networks will be added in a future version of {product-title}. -==== - A cluster administrator can use a user-defined network to create and define additional networks that span multiple namespaces at the cluster level by leveraging the `ClusterUserDefinedNetwork` custom resource (CR). Additionally, a cluster administrator or a cluster user can use a user-defined network to define additional networks at the namespace level with the `UserDefinedNetwork` CR. -The following diagram shows tenant isolation that a cluster administrator created by defining a `ClusterUserDefinedNetwork` (CR) for each tenant. This network configuration allows a network to span across many namespaces. In the diagram, the `udn-1` disconnected network selects `namespace-1` and `namespace-2`, while the `udn-2` disconnected network selects `namespace-3` and `namespace-4`. A tenant acts as a disconnected network that is isolated from other tenants' networks. Pods from a namespace can communicate with pods in another namespace only if those namespaces exist in the same tenant network. +The following diagram shows tenant isolation that a cluster administrator created by defining a `ClusterUserDefinedNetwork` CR for each tenant. This network configuration allows a network to span across many namespaces. In the diagram, the `udn-1` disconnected network selects `namespace-1` and `namespace-2`, while the `udn-2` disconnected network selects `namespace-3` and `namespace-4`. A tenant acts as a disconnected network that is isolated from other tenants' networks. Pods from a namespace can communicate with pods in another namespace only if those namespaces exist in the same tenant network. image::528-OpenShift-multitenant-0225.png[The tenant isolation concept in a user-defined network (UDN)] -The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a `ClusterUserDefinedNetwork` or `UserDefinedNetwork` custom resource, how to create the custom resource, and additional configuration details that might be relevant to your deployment. +The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a `ClusterUserDefinedNetwork` or `UserDefinedNetwork` CR, how to create the CR, and additional configuration details that might be relevant to your deployment. //benefits of UDN include::modules/nw-udn-benefits.adoc[leveloffset=+1] From 0c02026c0738355ee718b44e87b5873e40ae02b5 Mon Sep 17 00:00:00 2001 From: John Wilkins Date: Thu, 20 Feb 2025 11:37:47 -0800 Subject: [PATCH 319/669] HCIDOCS-652: Made updates and clarifying remarks for live updates.. 
--- ...stall-post-installation-configuration.adoc | 4 +- ...onents-resource-of-a-provisioned-host.adoc | 4 +- ...ttings-resource-of-a-provisioned-host.adoc | 11 +-- ...g-the-hostfirmwarecomponents-resource.adoc | 62 ---------------- ...ing-the-hostfirmwaresettings-resource.adoc | 60 --------------- ...o-the-hostfirmwarecomponents-resource.adoc | 73 +++++++++++++++++++ ...-to-the-hostfirmwaresettings-resource.adoc | 66 +++++++++++++++++ ...setting-the-hostupdatepolicy-resource.adoc | 4 + 8 files changed, 153 insertions(+), 131 deletions(-) delete mode 100644 modules/bmo-patching-the-hostfirmwarecomponents-resource.adoc delete mode 100644 modules/bmo-patching-the-hostfirmwaresettings-resource.adoc create mode 100644 modules/bmo-performing-a-live-update-to-the-hostfirmwarecomponents-resource.adoc create mode 100644 modules/bmo-performing-a-live-update-to-the-hostfirmwaresettings-resource.adoc diff --git a/installing/installing_bare_metal/ipi/ipi-install-post-installation-configuration.adoc b/installing/installing_bare_metal/ipi/ipi-install-post-installation-configuration.adoc index 29dd146b2118..ec0208c8932f 100644 --- a/installing/installing_bare_metal/ipi/ipi-install-post-installation-configuration.adoc +++ b/installing/installing_bare_metal/ipi/ipi-install-post-installation-configuration.adoc @@ -47,7 +47,7 @@ include::modules/bmo-getting-the-hostfirmwaresettings-resource.adoc[leveloffset= // Editing the HostFirmwareSettings resource include::modules/bmo-editing-the-hostfirmwaresettings-resource-of-a-provisioned-host.adoc[leveloffset=+2] // Patching the HostFirmawareSettings resource -include::modules/bmo-patching-the-hostfirmwaresettings-resource.adoc[leveloffset=+2] +include::modules/bmo-performing-a-live-update-to-the-hostfirmwaresettings-resource.adoc[leveloffset=+2] // Verifying the HostFirmware Settings resource is valid include::modules/bmo-verifying-the-hostfirmware-settings-resource-is-valid.adoc[leveloffset=+2] // About the FirmwareSchema resource @@ -61,7 +61,7 @@ include::modules/bmo-getting-the-hostfirmwarecomponents-resource.adoc[leveloffse // Editing the HostFirmwareComponents resource include::modules/bmo-editing-the-hostfirmwarecomponents-resource-of-a-provisioned-host.adoc[leveloffset=+2] // Patching the HostFirmawareComponents resource -include::modules/bmo-patching-the-hostfirmwarecomponents-resource.adoc[leveloffset=+2] +include::modules/bmo-performing-a-live-update-to-the-hostfirmwarecomponents-resource.adoc[leveloffset=+2] // About the HostUpdatePolicy resource include::modules/bmo-about-the-hostupdatepolicy-resource.adoc[leveloffset=+2] // Setting the HostUpdatePolicy resource diff --git a/modules/bmo-editing-the-hostfirmwarecomponents-resource-of-a-provisioned-host.adoc b/modules/bmo-editing-the-hostfirmwarecomponents-resource-of-a-provisioned-host.adoc index b8216de10e7b..6c4d406d90dc 100644 --- a/modules/bmo-editing-the-hostfirmwarecomponents-resource-of-a-provisioned-host.adoc +++ b/modules/bmo-editing-the-hostfirmwarecomponents-resource-of-a-provisioned-host.adoc @@ -20,9 +20,9 @@ $ oc get hostfirmwarecomponents -n openshift-machine-api -o yaml + [source,terminal] ---- -$ oc edit hostfirmwarecomponents -n openshift-machine-api <1> +$ oc edit hostfirmwarecomponents -n openshift-machine-api <1> ---- -<1> Where `` is the name of the host. The `HostFirmwareComponents` resource will open in the default editor for your terminal. +<1> Where `` is the name of the host. The `HostFirmwareComponents` resource will open in the default editor for your terminal. . 
Make the appropriate edits. + diff --git a/modules/bmo-editing-the-hostfirmwaresettings-resource-of-a-provisioned-host.adoc b/modules/bmo-editing-the-hostfirmwaresettings-resource-of-a-provisioned-host.adoc index 00532733e595..f2c0d88e046e 100644 --- a/modules/bmo-editing-the-hostfirmwaresettings-resource-of-a-provisioned-host.adoc +++ b/modules/bmo-editing-the-hostfirmwaresettings-resource-of-a-provisioned-host.adoc @@ -8,6 +8,7 @@ To make changes to the `HostFirmwareSettings` spec for a provisioned host, perform the following actions: +* Edit the host `HostFirmwareSettings` resource. * Delete the host from the machine set. * Scale down the machine set. * Scale up the machine set to make the changes take effect. @@ -30,10 +31,10 @@ $ oc get hfs -n openshift-machine-api + [source,terminal] ---- -$ oc edit hfs -n openshift-machine-api +$ oc edit hfs -n openshift-machine-api ---- + -Where `` is the name of a provisioned host. The `HostFirmwareSettings` resource will open in the default editor for your terminal. +Where `` is the name of a provisioned host. The `HostFirmwareSettings` resource will open in the default editor for your terminal. . Add name and value pairs to the `spec.settings` section by running the following command: + @@ -48,14 +49,14 @@ spec: . Save the changes and exit the editor. -. Get the host's machine name by running the following command: +. Get the host machine name by running the following command: + [source,terminal] ---- - $ oc get bmh -n openshift-machine name + $ oc get bmh -n openshift-machine name ---- + -Where `` is the name of the host. The terminal displays the machine name under the `CONSUMER` field. +Where `` is the name of the host. The terminal displays the machine name under the `CONSUMER` field. . Annotate the machine to delete it from the machine set by running the following command: + diff --git a/modules/bmo-patching-the-hostfirmwarecomponents-resource.adoc b/modules/bmo-patching-the-hostfirmwarecomponents-resource.adoc deleted file mode 100644 index 97a8b3bb7a30..000000000000 --- a/modules/bmo-patching-the-hostfirmwarecomponents-resource.adoc +++ /dev/null @@ -1,62 +0,0 @@ -// This is included in the following assemblies: -// -// * installing/installing_bare_metal/ipi/ipi-install-post-installation-configuration.adoc -:_mod-docs-content-type: PROCEDURE -[id="bmo-patching-the-hostfirmwarecomponents-resource_{context}"] -= Patching the HostFirmwareComponents resource - -To patch the `HostFirmwareComponents` resource for a provisioned host, perform the following: - -* Cordon and drain the host. -* Annotate the host to reboot. -* Uncordon the host to make the changes take effect. - -.Prerequisites - -* The `HostUpdatePolicy` resource must have the `firmwareUpdates` parameter set to `onReboot`. - -.Procedure - -. Patch the `HostFirmwareComponents` resource by running the following command: -+ -[source,terminal] ----- -$ oc patch hostfirmwarecomponents --type merge -p \ <1> - '{"spec": {"updates": [{"component": "", \ <2> - "url": ""}]}}' <3> ----- -<1> Replace `` with the name of the host. -<2> Replace `` with the type of component. Specify `bios` or `bmc`. -<3> Replace `` with the URL for the component. - -. Cordon the host by running the following command: -+ -[source,terminal] ----- -$ oc cordon ----- -+ -The Bare Metal Operator (BMO) updates the `operationalStatus` parameter to `servicing`. - -. 
Annotate the host to reboot by running the following command: -+ -[source,terminal] ----- -$ oc annotate bmh reboot.metal3.io="" ----- -+ -Once Ironic completes the patch, the BMO updates the `operationalStatus` parameter to `OK`. -+ -[NOTE] -==== -Depending on the host hardware, the change might require more than one reboot. -==== -+ -If an error occurs, the BMO updates the `operationalStatus` parameter to `error` and retries the operation. - -. Once Ironic completes the patch, uncordon the host by running the following command: -+ -[source,terminal] ----- -$ oc uncordon ----- diff --git a/modules/bmo-patching-the-hostfirmwaresettings-resource.adoc b/modules/bmo-patching-the-hostfirmwaresettings-resource.adoc deleted file mode 100644 index 4e5056f03c46..000000000000 --- a/modules/bmo-patching-the-hostfirmwaresettings-resource.adoc +++ /dev/null @@ -1,60 +0,0 @@ -// This is included in the following assemblies: -// -// * installing/installing_bare_metal/ipi/ipi-install-post-installation-configuration.adoc -:_mod-docs-content-type: PROCEDURE -[id="bmo-patching-the-hostfirmwaresettings-resource_{context}"] -= Patching the HostFirmwareSettings resource - -To patch the `HostFirmwareSettings` for a provisioned host, perform the following actions: - -* Cordon and drain the host. -* Annotate the host to reboot. -* Uncordon the host to make the changes take effect. - -.Prerequisites - -* The `HostUpdatePolicy` resource must the have `firmwareSettings` parameter set to `onReboot`. - -.Procedure - -. Patch the `HostFirmwareSettings` resource by running the following command: -+ -[source,terminal] ----- -$ oc patch hostfirmwaresettings --type merge -p \ <1> - '{"spec": {"settings": {"": ""}}}' <2> ----- -<1> Replace `` with the name of the host. -<2> Replace `` with the name of the setting. Replace `` with the value of the setting. You can set multiple name/value pairs. - -. Cordon the host by running the following command: -+ -[source,terminal] ----- -$ oc cordon ----- -+ -The Bare Metal Operator (BMO) updates the `operationalStatus` parameter to `servicing`. - -. Annotate the host to reboot by running the following command: -+ -[source,terminal] ----- -$ oc annotate bmh reboot.metal3.io="" ----- -+ -Once Ironic completes the patch, the BMO updates the `operationalStatus` parameter to `OK`. -+ -[NOTE] -==== -Depending on the host hardware, the change might require more than one reboot. -==== -+ -If an error occurs, the BMO updates the `operationalStatus` parameter to `error` and retries the operation. - -. Once Ironic completes the patch, uncordon the host by running the following command: -+ -[source,terminal] ----- -$ oc uncordon ----- diff --git a/modules/bmo-performing-a-live-update-to-the-hostfirmwarecomponents-resource.adoc b/modules/bmo-performing-a-live-update-to-the-hostfirmwarecomponents-resource.adoc new file mode 100644 index 000000000000..506b02a0cc42 --- /dev/null +++ b/modules/bmo-performing-a-live-update-to-the-hostfirmwarecomponents-resource.adoc @@ -0,0 +1,73 @@ +// This is included in the following assemblies: +// +// * installing/installing_bare_metal/ipi/ipi-install-post-installation-configuration.adoc +:_mod-docs-content-type: PROCEDURE +[id="bmo-performing-a-live-update-to-the-hostfirmwarecomponents-resource_{context}"] += Performing a live update to the HostFirmwareComponents resource + +You can perform a live update to the `HostFirmwareComponents` resource on an already provisioned host. Live updates do not trigger deprovisioning and reprovisioning the host. 
+
+:FeatureName: Live updating a host
+include::snippets/technology-preview.adoc[]
+:!FeatureName:
+
+[IMPORTANT]
+====
+Do not perform live updates on production hosts. You can perform live updates to the BIOS for testing purposes. We do not recommend that you perform live updates to the BMC on {product-title} {product-version} for test purposes, especially on earlier generation hardware.
+====
+
+.Prerequisites
+
+* The `HostUpdatePolicy` resource must have the `firmwareUpdates` parameter set to `onReboot`.
+
+.Procedure
+
+. Update the `HostFirmwareComponents` resource by running the following command:
++
+[source,terminal]
+----
+$ oc patch hostfirmwarecomponents --type merge -p \ <1>
+  '{"spec": {"updates": [{"component": "", \ <2>
+  "url": ""}]}}' <3>
+----
+<1> Replace `` with the name of the host.
+<2> Replace `` with the type of component. Specify `bios` or `bmc`.
+<3> Replace `` with the URL for the component.
++
+[NOTE]
+====
+You can also use the `oc edit hostfirmwarecomponents -n openshift-machine-api` command to update the resource.
+====
+
+. Cordon and drain the node by running the following command:
++
+[source,terminal]
+----
+$ oc drain --force <1>
+----
+<1> Replace `` with the name of the node.
+
+. Power off the host for a period of 5 minutes by running the following command:
++
+[source,terminal]
+----
+$ oc patch bmh --type merge -p '{"spec": {"online": false}}'
+----
++
+This step ensures that daemonsets or controllers mark any infrastructure pods that might be running on the node as offline, while the remaining nodes handle incoming requests.
+
+. After 5 minutes, power on the host by running the following command:
++
+[source,terminal]
+----
+$ oc patch bmh --type merge -p '{"spec": {"online": true}}'
+----
++
+The servicing operation commences and the Bare Metal Operator (BMO) sets the `operationalStatus` parameter of the `BareMetalHost` to `servicing`. The BMO updates the `operationalStatus` parameter to `OK` after updating the resource. If an error occurs, the BMO updates the `operationalStatus` parameter to `error` and retries the operation.
+
+. Uncordon the node by running the following command:
++
+[source,terminal]
+----
+$ oc uncordon
+----
diff --git a/modules/bmo-performing-a-live-update-to-the-hostfirmwaresettings-resource.adoc b/modules/bmo-performing-a-live-update-to-the-hostfirmwaresettings-resource.adoc
new file mode 100644
index 000000000000..92740526985b
--- /dev/null
+++ b/modules/bmo-performing-a-live-update-to-the-hostfirmwaresettings-resource.adoc
@@ -0,0 +1,66 @@
+// This is included in the following assemblies:
+//
+// * installing/installing_bare_metal/ipi/ipi-install-post-installation-configuration.adoc
+:_mod-docs-content-type: PROCEDURE
+[id="bmo-performing-a-live-update-to-the-hostfirmwaresettings-resource_{context}"]
+= Performing a live update to the HostFirmwareSettings resource
+
+You can perform a live update to the `HostFirmwareSettings` resource after it has begun running workloads. Live updates do not trigger deprovisioning and reprovisioning the host.
+
+:FeatureName: Live updating a host
+include::snippets/technology-preview.adoc[]
+:!FeatureName:
+
+.Prerequisites
+
+* The `HostUpdatePolicy` resource must have the `firmwareSettings` parameter set to `onReboot`.
+
+.Procedure
+
+. Update the `HostFirmwareSettings` resource by running the following command:
++
+[source,terminal]
+----
+$ oc patch hostfirmwaresettings --type merge -p \ <1>
+  '{"spec": {"settings": {"": ""}}}' <2>
+----
+<1> Replace `` with the name of the host.
+<2> Replace `` with the name of the setting. Replace `` with the value of the setting. You can set multiple name-value pairs.
++
+[NOTE]
+====
+Get the `FirmwareSchema` resource to determine which settings the hardware supports and what settings and values you can update. You cannot update read-only values and you cannot update the `FirmwareSchema` resource. You can also use the `oc edit hostfirmwaresettings -n openshift-machine-api` command to update the `HostFirmwareSettings` resource.
+====
+
+. Cordon and drain the node by running the following command:
++
+[source,terminal]
+----
+$ oc drain --force <1>
+----
+<1> Replace `` with the name of the node.
+
+. Power off the host for a period of 5 minutes by running the following command:
++
+[source,terminal]
+----
+$ oc patch bmh --type merge -p '{"spec": {"online": false}}'
+----
++
+This step ensures that daemonsets or controllers can mark any infrastructure pods that might be running on the host as offline, while the remaining hosts handle incoming requests.
+
+. After 5 minutes, power on the host by running the following command:
++
+[source,terminal]
+----
+$ oc patch bmh --type merge -p '{"spec": {"online": true}}'
+----
++
+The servicing operation commences and the Bare Metal Operator (BMO) sets the `operationalStatus` parameter of the `BareMetalHost` to `servicing`. The BMO updates the `operationalStatus` parameter to `OK` after updating the resource. If an error occurs, the BMO updates the `operationalStatus` parameter to `error` and retries the operation.
+
+. Once Ironic completes the update and the host powers up, uncordon the node by running the following command:
++
+[source,terminal]
+----
+$ oc uncordon
+----
diff --git a/modules/bmo-setting-the-hostupdatepolicy-resource.adoc b/modules/bmo-setting-the-hostupdatepolicy-resource.adoc
index 108f60774f97..7c2c1a2ff89c 100644
--- a/modules/bmo-setting-the-hostupdatepolicy-resource.adoc
+++ b/modules/bmo-setting-the-hostupdatepolicy-resource.adoc
@@ -8,6 +8,10 @@
 
 By default, the `HostUpdatePolicy` disables live updates. To enable live updates, use the following procedure.
 
+:FeatureName: Setting the `HostUpdatePolicy` resource
+include::snippets/technology-preview.adoc[]
+:!FeatureName:
+
 .Procedure
 
 . Create the `HostUpdatePolicy` resource by running the following command:
From 34d18a12e9ada469bea0aa65c223fdd06a3b5bda Mon Sep 17 00:00:00 2001
From: Pan Ousley
Date: Thu, 20 Feb 2025 01:25:55 -0500
Subject: [PATCH 320/669] CNV#52365: s390x compatibility TP notes

---
 virt/install/preparing-cluster-for-virt.adoc | 65 ++++++++++++++++++++
 1 file changed, 65 insertions(+)

diff --git a/virt/install/preparing-cluster-for-virt.adoc b/virt/install/preparing-cluster-for-virt.adoc
index 23dbd103cd44..aa4267ec20a8 100644
--- a/virt/install/preparing-cluster-for-virt.adoc
+++ b/virt/install/preparing-cluster-for-virt.adoc
@@ -52,6 +52,16 @@ include::snippets/technology-preview.adoc[]
 :!FeatureName:
 endif::[]
 --
+
+* {ibm-z-name} or {ibm-linuxone-name} (s390x architecture) systems where an {product-title} cluster is installed in a logical partition (LPAR).
See xref:../../installing/installing_ibm_z/preparing-to-install-on-ibm-z.adoc#preparing-to-install-on-ibm-z_preparing-to-install-on-ibm-z[Preparing to install on {ibm-z-title} and {ibm-linuxone-title}]. ++ +-- +ifdef::openshift-enterprise[] +:FeatureName: Using {VirtProductName} in a cluster deployed on s390x architecture +include::snippets/technology-preview.adoc[] +:!FeatureName: +endif::[] +-- endif::openshift-rosa,openshift-dedicated[] ifdef::openshift-rosa,openshift-dedicated[] @@ -70,6 +80,61 @@ include::modules/virt-aws-bm.adoc[leveloffset=+2] * xref:../../virt/vm_networking/virt-connecting-vm-to-ovn-secondary-network.adoc#virt-connecting-vm-to-ovn-secondary-network[Connecting a virtual machine to an OVN-Kubernetes secondary network] * xref:../../virt/vm_networking/virt-exposing-vm-with-service.adoc#virt-exposing-vm-with-service[Exposing a virtual machine by using a service] +// Hiding in ROSA/OSD - todo: double check this +ifndef::openshift-rosa,openshift-dedicated[] +[id="ibm-z-linuxone-compatibility_{context}"] +=== {ibm-z-title} and {ibm-linuxone-title} compatibility + +You can use {VirtProductName} in an {product-title} cluster that is installed in a logical partition (LPAR) on an {ibm-z-name} or {ibm-linuxone-name} (s390x architecture) system. + +ifdef::openshift-enterprise[] +:FeatureName: Using {VirtProductName} in a cluster deployed on s390x architecture +include::snippets/technology-preview.adoc[] +:!FeatureName: +endif::[] + +Some features are not currently available on s390x architecture, while others require workarounds or procedural changes. These lists are subject to change. + +[discrete] +[id="currently-unavailable-ibm-z_{context}"] +==== Currently unavailable features + +The following features are not available or do not function on s390x architecture: + +* Memory hot plugging and hot unplugging +* Watchdog devices +* Node Health Check Operator +* SR-IOV Operator +* virtual Trusted Platform Module (vTPM) devices +* {pipelines-title} tasks +* UEFI mode for VMs +* PCI passthrough +* USB host passthrough +* Configuring virtual GPUs +* {VirtProductName} cluster checkup framework +* Creating and managing Windows VMs + +[discrete] +[id="functionality-differences_{context}"] +==== Functionality differences + +The following features are available for use on s390x architecture but function differently or require procedural changes: + +* When xref:../../virt/managing_vms/virt-delete-vms.adoc#virt-delete-vm-web_virt-delete-vms[deleting a virtual machine by using the web console], the *grace period* option is ignored. + +* When xref:../../virt/managing_vms/advanced_vm_management/virt-configuring-default-cpu-model.adoc#virt-configuring-default-cpu-model_virt-configuring-default-cpu-model[configuring the default CPU model], the `spec.defaultCPUModel` value is `"gen15b"` for an {ibm-z-title} cluster. + +* When xref:../../virt/vm_networking/virt-hot-plugging-network-interfaces.adoc#virt-hot-plugging-network-interfaces[hot plugging secondary network interfaces], the `virtctl migrate ` command does not migrate the VM. As a workaround, restart the VM by running the following command: ++ +[source,terminal] +---- +$ virtctl restart +---- + +* When xref:../../virt/monitoring/virt-exposing-downward-metrics.adoc#virt-configuring-downward-metrics_virt-using-downward-metrics_virt-exposing-downward-metrics[configuring a downward metrics device], if you use a VM preference, the `spec.preference.name` value must be set to `rhel.9.s390x` or another available preference with the format `*.s390x`. 
+ +endif::openshift-rosa,openshift-dedicated[] + // Section is in assembly so that we can use xrefs [id="virt-hardware-os-requirements_preparing-cluster-for-virt"] == Hardware and operating system requirements From fbdea9ef4fbad44d25a72513610a2b33e7068fd2 Mon Sep 17 00:00:00 2001 From: Pan Ousley Date: Thu, 13 Feb 2025 18:34:33 -0500 Subject: [PATCH 321/669] CNV#44000: update docs with candidate channel + rework --- ...virt-about-control-plane-only-updates.adoc | 6 +-- modules/virt-about-upgrading-virt.adoc | 29 +++++++++--- modules/virt-about-workload-updates.adoc | 2 +- modules/virt-changing-update-settings.adoc | 28 +++++++++++ modules/virt-manual-approval-strategy.adoc | 11 +++++ modules/virt-monitoring-upgrade-status.adoc | 4 +- modules/virt-rhel-9.adoc | 2 +- modules/virt-viewing-outdated-workloads.adoc | 4 +- virt/updating/upgrading-virt.adoc | 47 +++++++++++-------- 9 files changed, 98 insertions(+), 35 deletions(-) create mode 100644 modules/virt-changing-update-settings.adoc create mode 100644 modules/virt-manual-approval-strategy.adoc diff --git a/modules/virt-about-control-plane-only-updates.adoc b/modules/virt-about-control-plane-only-updates.adoc index 3c8347805b00..4dcc786f1349 100644 --- a/modules/virt-about-control-plane-only-updates.adoc +++ b/modules/virt-about-control-plane-only-updates.adoc @@ -4,7 +4,7 @@ :_mod-docs-content-type: CONCEPT [id="virt-about-control-plane-only-updates_{context}"] -= About Control Plane Only updates += Control Plane Only updates Every even-numbered minor version of {product-title}, including 4.10 and 4.12, is an Extended Update Support (EUS) version. However, because Kubernetes design mandates serial minor version updates, you cannot directly update from one EUS version to the next. @@ -14,8 +14,8 @@ When the {product-title} update succeeds, the corresponding update for {VirtProd For more information about EUS versions, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat OpenShift Container Platform Life Cycle Policy]. -[id="preparing-to-update_{context}"] -== Preparing to update +[id="prerequisites_{context}"] +== Prerequisites Before beginning a Control Plane Only update, you must: diff --git a/modules/virt-about-upgrading-virt.adoc b/modules/virt-about-upgrading-virt.adoc index fa6e6c0fce7e..41eae5c7365f 100644 --- a/modules/virt-about-upgrading-virt.adoc +++ b/modules/virt-about-upgrading-virt.adoc @@ -6,22 +6,32 @@ [id="virt-about-upgrading-virt_{context}"] = About updating {VirtProductName} -* Operator Lifecycle Manager (OLM) manages the lifecycle of the {VirtProductName} Operator. The Marketplace Operator, which is deployed during {product-title} installation, makes external Operators available to your cluster. +When you install {VirtProductName}, you select an update channel and an approval strategy. The update channel determines the versions that {VirtProductName} will be updated to. The approval strategy setting determines whether updates occur automatically or require manual approval. Both settings can impact supportability. + +[id="recommended-settings_{context}"] +== Recommended settings + +To maintain a supportable environment, use the following settings: -* OLM provides z-stream and minor version updates for {VirtProductName}. Minor version updates become available when you update {product-title} to the next minor version. You cannot update {VirtProductName} to the next minor version without first updating {product-title}. 
+* Update channel: *stable* +* Approval strategy: *Automatic* -* {VirtProductName} subscriptions use a single update channel that is named *stable*. The *stable* channel ensures that your {VirtProductName} and {product-title} versions are compatible. +With these settings, the update process automatically starts when a new version of the Operator is available in the *stable* channel. This ensures that your {VirtProductName} and {product-title} versions remain compatible, and that your version of {VirtProductName} is suitable for production environments. -* If your subscription's approval strategy is set to *Automatic*, the update process starts as soon as a new version of the Operator is available in the *stable* channel. It is highly recommended to use the *Automatic* approval strategy to maintain a supportable environment. Each minor version of {VirtProductName} is only supported if you run the corresponding {product-title} version. For example, you must run {VirtProductName} {VirtVersion} on {product-title} {VirtVersion}. +[NOTE] +==== +Each minor version of {VirtProductName} is supported only if you run the corresponding {product-title} version. For example, you must run {VirtProductName} {VirtVersion} on {product-title} {VirtVersion}. +==== -** Though it is possible to select the *Manual* approval strategy, this is not recommended because it risks the supportability and functionality of your cluster. With the *Manual* approval strategy, you must manually approve every pending update. If {product-title} and {VirtProductName} updates are out of sync, your cluster becomes unsupported. +[id="what-to-expect_{context}"] +== What to expect * The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes. * Updating {VirtProductName} does not interrupt network connections. -* Data volumes and their associated persistent volume claims are preserved during update. +* Data volumes and their associated persistent volume claims are preserved during an update. ifndef::openshift-rosa,openshift-dedicated[] [IMPORTANT] @@ -39,3 +49,10 @@ If you have virtual machines running that use AWS Elastic Block Store (EBS) stor As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster update. Set the `evictionStrategy` field to `None` and the `runStrategy` field to `Always`. ==== endif::openshift-rosa,openshift-dedicated[] + +[id="how-updates-work_{context}"] +== How updates work + +* Operator Lifecycle Manager (OLM) manages the lifecycle of the {VirtProductName} Operator. The Marketplace Operator, which is deployed during {product-title} installation, makes external Operators available to your cluster. + +* OLM provides z-stream and minor version updates for {VirtProductName}. Minor version updates become available when you update {product-title} to the next minor version. You cannot update {VirtProductName} to the next minor version without first updating {product-title}. 
\ No newline at end of file diff --git a/modules/virt-about-workload-updates.adoc b/modules/virt-about-workload-updates.adoc index 770dceefacb5..470b3abd8fa9 100644 --- a/modules/virt-about-workload-updates.adoc +++ b/modules/virt-about-workload-updates.adoc @@ -4,7 +4,7 @@ :_mod-docs-content-type: CONCEPT [id="virt-about-workload-updates_{context}"] -= About workload updates += VM workload updates When you update {VirtProductName}, virtual machine workloads, including `libvirt`, `virt-launcher`, and `qemu`, update automatically if they support live migration. diff --git a/modules/virt-changing-update-settings.adoc b/modules/virt-changing-update-settings.adoc new file mode 100644 index 000000000000..9bbb01d4105b --- /dev/null +++ b/modules/virt-changing-update-settings.adoc @@ -0,0 +1,28 @@ +// Module included in the following assemblies: +// +// * virt/updating/upgrading-virt.adoc + +:_mod-docs-content-type: PROCEDURE +[id="virt-changing-update-settings_{context}"] += Changing update settings + +You can change the update channel and approval strategy for your {VirtProductName} Operator subscription by using the web console. + +.Prerequisites + +* You have installed the {VirtProductName} Operator. +* You have administrator permissions. + +.Procedure + +. Click *Operators* -> *Installed Operators*. + +. Select *{VirtProductName}* from the list. + +. Click the *Subscription* tab. + +. In the *Subscription details* section, click the setting that you want to change. For example, to change the approval strategy from *Manual* to *Automatic*, click *Manual*. + +. In the window that opens, select the new update channel or approval strategy. + +. Click *Save*. diff --git a/modules/virt-manual-approval-strategy.adoc b/modules/virt-manual-approval-strategy.adoc new file mode 100644 index 000000000000..c871e55e6b05 --- /dev/null +++ b/modules/virt-manual-approval-strategy.adoc @@ -0,0 +1,11 @@ +// Module included in the following assemblies: +// +// * virt/updating/upgrading-virt.adoc + +:_mod-docs-content-type: CONCEPT +[id="virt-manual-approval-strategy_{context}"] += Manual approval strategy + +If you use the *Manual* approval strategy, you must manually approve every pending update. If {product-title} and {VirtProductName} updates are out of sync, your cluster becomes unsupported. To avoid risking the supportability and functionality of your cluster, use the *Automatic* approval strategy. + +If you must use the *Manual* approval strategy, maintain a supportable cluster by approving pending Operator updates as soon as they become available. \ No newline at end of file diff --git a/modules/virt-monitoring-upgrade-status.adoc b/modules/virt-monitoring-upgrade-status.adoc index d31f6050144e..fa9ed0eafb1e 100644 --- a/modules/virt-monitoring-upgrade-status.adoc +++ b/modules/virt-monitoring-upgrade-status.adoc @@ -4,9 +4,9 @@ :_mod-docs-content-type: PROCEDURE [id="virt-monitoring-upgrade-status_{context}"] -= Monitoring {VirtProductName} upgrade status += Monitoring update status -To monitor the status of a {VirtProductName} Operator upgrade, watch the cluster service version (CSV) `PHASE`. You can also monitor the CSV conditions in the web console or by running the command provided here. +To monitor the status of a {VirtProductName} Operator update, watch the cluster service version (CSV) `PHASE`. You can also monitor the CSV conditions in the web console or by running the command provided here. 
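+
+For reference, a typical way to watch the CSV `PHASE` from the command line is sketched below. The `openshift-cnv` namespace is the usual {VirtProductName} installation namespace, but verify the namespace on your cluster:
+
+[source,terminal]
+----
+$ oc get csv -n openshift-cnv
+----
+
+The update has finished when the `PHASE` value of the new CSV is `Succeeded`.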
[NOTE] ==== diff --git a/modules/virt-rhel-9.adoc b/modules/virt-rhel-9.adoc index 4c0fedfdb150..ab6fc6b3001f 100644 --- a/modules/virt-rhel-9.adoc +++ b/modules/virt-rhel-9.adoc @@ -4,7 +4,7 @@ :_mod-docs-content-type: CONCEPT [id="virt-rhel-9_{context}"] -= {VirtProductName} on {op-system-base} 9 += {op-system-base} 9 compatibility {VirtProductName} {VirtVersion} is based on {op-system-base-full} 9. You can update to {VirtProductName} {VirtVersion} from a version that was based on {op-system-base} 8 by following the standard {VirtProductName} update procedure. No additional steps are required. diff --git a/modules/virt-viewing-outdated-workloads.adoc b/modules/virt-viewing-outdated-workloads.adoc index b2276ca52641..ab6aafd5a70d 100644 --- a/modules/virt-viewing-outdated-workloads.adoc +++ b/modules/virt-viewing-outdated-workloads.adoc @@ -4,9 +4,9 @@ :_mod-docs-content-type: PROCEDURE [id="virt-viewing-outdated-workloads_{context}"] -= Viewing outdated {VirtProductName} workloads += Viewing outdated VM workloads -You can view a list of outdated workloads by using the CLI. +You can view a list of outdated virtual machine (VM) workloads by using the CLI. [NOTE] ==== diff --git a/virt/updating/upgrading-virt.adoc b/virt/updating/upgrading-virt.adoc index 13c7efa97319..7b38fbb305d6 100644 --- a/virt/updating/upgrading-virt.adoc +++ b/virt/updating/upgrading-virt.adoc @@ -6,40 +6,47 @@ include::_attributes/common-attributes.adoc[] toc::[] -Learn how Operator Lifecycle Manager (OLM) delivers z-stream and minor version updates for {VirtProductName}. - -include::modules/virt-rhel-9.adoc[leveloffset=+1] +Learn how to keep {VirtProductName} updated and compatible with {product-title}. include::modules/virt-about-upgrading-virt.adoc[leveloffset=+1] -include::modules/virt-about-workload-updates.adoc[leveloffset=+2] +include::modules/virt-rhel-9.adoc[leveloffset=+2] + +include::modules/virt-monitoring-upgrade-status.adoc[leveloffset=+1] + +// workload updates + +include::modules/virt-about-workload-updates.adoc[leveloffset=+1] + +include::modules/virt-configuring-workload-update-methods.adoc[leveloffset=+2] + +include::modules/virt-viewing-outdated-workloads.adoc[leveloffset=+2] + +[NOTE] +==== +To ensure that VMIs update automatically, configure workload updates. +==== + +// control plane updates ifndef::openshift-rosa,openshift-dedicated,openshift-origin[] -include::modules/virt-about-control-plane-only-updates.adoc[leveloffset=+2] +include::modules/virt-about-control-plane-only-updates.adoc[leveloffset=+1] Learn more about xref:../../updating/updating_a_cluster/control-plane-only-update.adoc#control-plane-only-update[Performing a Control Plane Only update]. -include::modules/virt-preventing-workload-updates-during-control-plane-only-update.adoc[leveloffset=+1] +include::modules/virt-preventing-workload-updates-during-control-plane-only-update.adoc[leveloffset=+2] endif::openshift-rosa,openshift-dedicated,openshift-origin[] -include::modules/virt-configuring-workload-update-methods.adoc[leveloffset=+1] - -[id="approving-operator-upgrades_upgrading-virt"] -== Approving pending Operator updates - -include::modules/olm-approving-pending-upgrade.adoc[leveloffset=+2] +[id="advanced-options_upgrading-virt"] +== Advanced options -[id="monitoring-upgrade-status_upgrading-virt"] -== Monitoring update status +The *stable* release channel and the *Automatic* approval strategy are recommended for most {VirtProductName} installations. Use other settings only if you understand the risks. 
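+
+As a command-line alternative to the web console steps that follow, you can inspect and change these settings on the Operator subscription directly. This is a sketch only; the `hco-operatorhub` subscription name and the `openshift-cnv` namespace are common defaults that you must verify on your cluster:
+
+[source,terminal]
+----
+$ oc get subscription hco-operatorhub -n openshift-cnv \
+  -o jsonpath='{.spec.channel}{"\n"}{.spec.installPlanApproval}{"\n"}'
+----
+
+[source,terminal]
+----
+$ oc patch subscription hco-operatorhub -n openshift-cnv \
+  --type merge -p '{"spec":{"installPlanApproval":"Automatic"}}'
+----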
-include::modules/virt-monitoring-upgrade-status.adoc[leveloffset=+2] +include::modules/virt-changing-update-settings.adoc[leveloffset=+2] -include::modules/virt-viewing-outdated-workloads.adoc[leveloffset=+2] +include::modules/virt-manual-approval-strategy.adoc[leveloffset=+2] -[NOTE] -==== -Configure workload updates to ensure that VMIs update automatically. -==== +include::modules/olm-approving-pending-upgrade.adoc[leveloffset=+2] [id="additional-resources_upgrading-virt"] [role="_additional-resources"] From ea8d54974e09d655f73577dec3eab8a31c2a5df1 Mon Sep 17 00:00:00 2001 From: Israel Blancas Date: Thu, 19 Dec 2024 12:34:55 +0100 Subject: [PATCH 322/669] OBSDOCS-1356: Document how to verify if Instrumentation was injected correctly Signed-off-by: Israel Blancas --- modules/otel-autoinstrumentation.adoc | 21 +++++ modules/otel-config-instrumentation.adoc | 4 +- ...l-troubleshoot-debug-exporter-stdout.adoc} | 4 +- ...entation-injection-into-your-workload.adoc | 93 +++++++++++++++++++ ...tion-by-the-instrumentation-libraries.adoc | 31 +++++++ ...otel-configuration-of-instrumentation.adoc | 3 +- observability/otel/otel-troubleshooting.adoc | 13 ++- 7 files changed, 162 insertions(+), 7 deletions(-) create mode 100644 modules/otel-autoinstrumentation.adoc rename modules/{otel-troubleshoot-logging-exporter-stdout.adoc => otel-troubleshoot-debug-exporter-stdout.adoc} (88%) create mode 100644 modules/otel-troubleshooting-instrumentation-injection-into-your-workload.adoc create mode 100644 modules/otel-troubleshooting-telemetry-data-generation-by-the-instrumentation-libraries.adoc diff --git a/modules/otel-autoinstrumentation.adoc b/modules/otel-autoinstrumentation.adoc new file mode 100644 index 000000000000..4fe89f6e4e53 --- /dev/null +++ b/modules/otel-autoinstrumentation.adoc @@ -0,0 +1,21 @@ +// Module included in the following assemblies: +// +// * observability/otel/otel-configuration-of-instrumentation.adoc + +:_mod-docs-content-type: CONCEPT +[id="otel-autoinstrumentation_{context}"] += Auto-instrumentation in the {OTELOperator} + +Auto-instrumentation in the {OTELOperator} can automatically instrument an application without manual code changes. Developers and administrators can monitor applications with minimal effort and changes to the existing codebase. + +Auto-instrumentation runs as follows: + +. The {OTELOperator} injects an init-container, or a sidecar container for Go, to add the instrumentation libraries for the programming language of the instrumented application. + +. The {OTELOperator} sets the required environment variables in the application's runtime environment. These variables configure the auto-instrumentation libraries to collect traces, metrics, and logs and send them to the appropriate OpenTelemetry Collector or another telemetry backend. + +. The injected libraries automatically instrument your application by connecting to known frameworks and libraries, such as web servers or database clients, to collect telemetry data. The source code of the instrumented application is not modified. + +. Once the application is running with the injected instrumentation, the application automatically generates telemetry data, which is sent to a designated OpenTelemetry Collector or an external OTLP endpoint for further processing. + +Auto-instrumentation enables you to start collecting telemetry data quickly without having to manually integrate the OpenTelemetry SDK into your application code. 
However, some applications might require specific configurations or custom manual instrumentation.
diff --git a/modules/otel-config-instrumentation.adoc b/modules/otel-config-instrumentation.adoc
index 6485ddf6a561..447f2f868c04 100644
--- a/modules/otel-config-instrumentation.adoc
+++ b/modules/otel-config-instrumentation.adoc
@@ -8,11 +8,9 @@
 
 The {OTELName} can inject and configure the OpenTelemetry auto-instrumentation libraries into your workloads. Currently, the project supports injection of the instrumentation libraries from Go, Java, Node.js, Python, .NET, and the Apache HTTP Server (`httpd`).
 
-Auto-instrumentation in OpenTelemetry refers to the capability where the framework automatically instruments an application without manual code changes. This enables developers and administrators to get observability into their applications with minimal effort and changes to the existing codebase.
-
 [IMPORTANT]
 ====
-The {OTELName} Operator only supports the injection mechanism of the instrumentation libraries but does not support instrumentation libraries or upstream images. Customers can build their own instrumentation images or use community images.
+The {OTELOperator} only supports the injection mechanism of the instrumentation libraries but does not support instrumentation libraries or upstream images. Customers can build their own instrumentation images or use community images.
 ====
 
 [id="otel-instrumentation-options_{context}"]
diff --git a/modules/otel-troubleshoot-logging-exporter-stdout.adoc b/modules/otel-troubleshoot-debug-exporter-stdout.adoc
similarity index 88%
rename from modules/otel-troubleshoot-logging-exporter-stdout.adoc
rename to modules/otel-troubleshoot-debug-exporter-stdout.adoc
index f7d81994392f..c5efdffbb26a 100644
--- a/modules/otel-troubleshoot-logging-exporter-stdout.adoc
+++ b/modules/otel-troubleshoot-debug-exporter-stdout.adoc
@@ -4,9 +4,9 @@
 
 :_mod-docs-content-type: PROCEDURE
 [id="debug-exporter-to-stdout_{context}"]
-= Debug exporter
+= Debug Exporter
 
-You can configure the debug exporter to export the collected data to the standard output.
+You can configure the Debug Exporter to export the collected data to the standard output.
 
 .Procedure
 
diff --git a/modules/otel-troubleshooting-instrumentation-injection-into-your-workload.adoc b/modules/otel-troubleshooting-instrumentation-injection-into-your-workload.adoc
new file mode 100644
index 000000000000..cb26973abba4
--- /dev/null
+++ b/modules/otel-troubleshooting-instrumentation-injection-into-your-workload.adoc
@@ -0,0 +1,93 @@
+// Module included in the following assemblies:
+//
+// * observability/otel/otel-troubleshooting.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="otel-troubleshooting-instrumentation-injection-into-your-workload_{context}"]
+= Troubleshooting instrumentation injection into your workload
+
+To troubleshoot instrumentation injection, you can perform the following activities:
+
+* Checking if the `Instrumentation` object was created
+* Checking if the init-container started
+* Checking if the resources were deployed in the correct order
+* Searching for errors in the Operator logs
+* Double-checking the pod annotations
+
+.Procedure
+
+. Run the following command to verify that the `Instrumentation` object was successfully created:
++
+[source,terminal]
+----
+$ oc get instrumentation -n <namespace> # <1>
+----
+<1> The namespace where the instrumentation was created.
+
+. 
Run the following command to verify that the `opentelemetry-auto-instrumentation` init-container successfully started, which is a prerequisite for instrumentation injection into workloads:
++
+[source,terminal]
+----
+$ oc get events -n <namespace> # <1>
+----
+<1> The namespace where the instrumentation is injected for workloads.
++
+.Example output
+[source,terminal]
+----
+... Created container opentelemetry-auto-instrumentation
+... Started container opentelemetry-auto-instrumentation
+----
+
+. Verify that the resources were deployed in the correct order for the auto-instrumentation to work correctly. The correct order is to deploy the `Instrumentation` custom resource (CR) before the application. For information about the `Instrumentation` CR, see the section "Configuring the instrumentation".
++
+[NOTE]
+====
+When the pod starts, the {OTELOperator} checks the `Instrumentation` CR for annotations containing instructions for injecting auto-instrumentation. Generally, the Operator then adds an init-container to the application’s pod that injects the auto-instrumentation and environment variables into the application's container. If the `Instrumentation` CR is not available to the Operator when the application is deployed, the Operator is unable to inject the auto-instrumentation.
+====
++
+Fixing the order of deployment requires the following steps:
+
+.. Update the instrumentation settings.
+.. Delete the instrumentation object.
+.. Redeploy the application.
+
+. Run the following command to inspect the Operator logs for instrumentation errors:
++
+[source,terminal]
+----
+$ oc logs -l app.kubernetes.io/name=opentelemetry-operator --container manager -n openshift-opentelemetry-operator --follow
+----
+
+. Troubleshoot pod annotations for the instrumentations for a specific programming language. See the required annotation fields and values in "Configuring the instrumentation".
+
+.. Verify that the application pods that you are instrumenting are labeled with correct annotations and the appropriate auto-instrumentation settings have been applied.
++
+.Example
+----
+instrumentation.opentelemetry.io/inject-python="true"
+----
++
+.Example command to get pod annotations for an instrumented Python application
+[source,terminal]
+----
+$ oc get pods -n <namespace> -o jsonpath='{range .items[?(@.metadata.annotations["instrumentation.opentelemetry.io/inject-python"]=="true")]}{.metadata.name}{"\n"}{end}'
+----
+
+.. Verify that the annotation applied to the instrumentation object is correct for the programming language that you are instrumenting.
+
+.. If there are multiple instrumentations in the same namespace, specify the name of the `Instrumentation` object in their annotations.
++
+.Example
+----
+instrumentation.opentelemetry.io/inject-nodejs: "<instrumentation_object>"
+----
+
+.. If the `Instrumentation` object is in a different namespace, specify the namespace in the annotation.
++
+.Example
+----
+instrumentation.opentelemetry.io/inject-nodejs: "<namespace>/<instrumentation_object>"
+----
+
+.. Verify that the `OpenTelemetryCollector` custom resource specifies the auto-instrumentation annotations under `spec.template.metadata.annotations`. If the auto-instrumentation annotations are in `spec.metadata.annotations` instead, move them into `spec.template.metadata.annotations`.
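+
+The following sketch shows where the annotation belongs in a workload manifest; the Deployment name, labels, and image are hypothetical:
+
+[source,yaml]
+----
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: example-app # hypothetical
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: example-app
+  template:
+    metadata:
+      labels:
+        app: example-app
+      annotations:
+        instrumentation.opentelemetry.io/inject-python: "true" # under spec.template.metadata.annotations, not spec.metadata.annotations
+    spec:
+      containers:
+      - name: app
+        image: registry.example.com/example-app:latest # hypothetical image
+----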
diff --git a/modules/otel-troubleshooting-telemetry-data-generation-by-the-instrumentation-libraries.adoc b/modules/otel-troubleshooting-telemetry-data-generation-by-the-instrumentation-libraries.adoc
new file mode 100644
index 000000000000..70669d230041
--- /dev/null
+++ b/modules/otel-troubleshooting-telemetry-data-generation-by-the-instrumentation-libraries.adoc
@@ -0,0 +1,31 @@
+// Module included in the following assemblies:
+//
+// * observability/otel/otel-troubleshooting.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="otel-troubleshooting-telemetry-data-generation-by-the-instrumentation-libraries_{context}"]
+= Troubleshooting telemetry data generation by the instrumentation libraries
+
+You can troubleshoot telemetry data generation by the instrumentation libraries by checking the endpoint, looking for errors in your application logs, and verifying that the Collector is receiving the telemetry data.
+
+.Procedure
+
+. Verify that the instrumentation is transmitting data to the correct endpoint:
++
+[source,terminal]
+----
+$ oc get instrumentation <instrumentation_name> -n <namespace> -o jsonpath='{.spec.endpoint}'
+----
++
+The default endpoint `+http://localhost:4317+` for the `Instrumentation` object is only applicable to a Collector instance that is deployed as a sidecar in your application pod. If you are using an incorrect endpoint, correct it by editing the `Instrumentation` object and redeploying your application.
+
+. Inspect your application logs for error messages that might indicate that the instrumentation is malfunctioning:
++
+[source,terminal]
+----
+$ oc logs <pod_name> -n <namespace>
+----
+
+. If the application logs contain error messages that indicate that the instrumentation might be malfunctioning, install the OpenTelemetry SDK and libraries locally. Then run your application locally and troubleshoot for issues between the instrumentation libraries and your application without {product-title}.
+
+. Use the Debug Exporter to verify that the telemetry data is reaching the destination OpenTelemetry Collector instance. For more information, see "Debug Exporter".
diff --git a/observability/otel/otel-configuration-of-instrumentation.adoc b/observability/otel/otel-configuration-of-instrumentation.adoc
index 6109eb97fba6..a534d46cc578 100644
--- a/observability/otel/otel-configuration-of-instrumentation.adoc
+++ b/observability/otel/otel-configuration-of-instrumentation.adoc
@@ -7,6 +7,7 @@ include::_attributes/common-attributes.adoc[]
 
 toc::[]
 
-The {OTELName} Operator uses a custom resource definition (CRD) file that defines the configuration of the instrumentation.
+The {OTELName} Operator uses an `Instrumentation` custom resource that defines the configuration of the instrumentation.
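+
+A minimal `Instrumentation` resource, sketched here with a hypothetical name, namespace, and Collector endpoint, typically sets at least the exporter endpoint:
+
+[source,yaml]
+----
+apiVersion: opentelemetry.io/v1alpha1
+kind: Instrumentation
+metadata:
+  name: example-instrumentation # hypothetical
+  namespace: example-namespace # hypothetical
+spec:
+  exporter:
+    endpoint: http://otel-collector:4317 # hypothetical Collector service endpoint
+----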
+include::modules/otel-autoinstrumentation.adoc[leveloffset=+1] include::modules/otel-config-instrumentation.adoc[leveloffset=+1] diff --git a/observability/otel/otel-troubleshooting.adoc b/observability/otel/otel-troubleshooting.adoc index c2ebb838a65f..c0998a5af474 100644 --- a/observability/otel/otel-troubleshooting.adoc +++ b/observability/otel/otel-troubleshooting.adoc @@ -11,7 +11,7 @@ The OpenTelemetry Collector offers multiple ways to measure its health as well a include::modules/otel-troubleshoot-collecting-diagnostic-data-from-command-line.adoc[leveloffset=+1] include::modules/otel-troubleshoot-collector-logs.adoc[leveloffset=+1] include::modules/otel-troubleshoot-metrics.adoc[leveloffset=+1] -include::modules/otel-troubleshoot-logging-exporter-stdout.adoc[leveloffset=+1] +include::modules/otel-troubleshoot-debug-exporter-stdout.adoc[leveloffset=+1] include::modules/otel-troubleshoot-network-traffic.adoc[leveloffset=+1] [role="_additional-resources"] @@ -20,3 +20,14 @@ include::modules/otel-troubleshoot-network-traffic.adoc[leveloffset=+1] * xref:../../observability/network_observability/installing-operators.adoc#installing-network-observability-operators[Installing the Network Observability Operator] * xref:../../observability/network_observability/observing-network-traffic.adoc#nw-observe-network-traffic[Observing the network traffic from the Topology view] + +[id="troubleshooting_instrumentation_{context}"] +== Troubleshooting the instrumentation + +To troubleshoot the instrumentation, look for any of the following issues: + +* Issues with instrumentation injection into your workload +* Issues with data generation by the instrumentation libraries + +include::modules/otel-troubleshooting-instrumentation-injection-into-your-workload.adoc[leveloffset=+2] +include::modules/otel-troubleshooting-telemetry-data-generation-by-the-instrumentation-libraries.adoc[leveloffset=+2] From b5cbf9c779d33965d13d8c43c684b3104b4a4ed3 Mon Sep 17 00:00:00 2001 From: JoeAldinger Date: Wed, 19 Feb 2025 14:37:31 -0500 Subject: [PATCH 323/669] OCPBUGS-49933:UDN rhel worker nodes update --- .../primary_networks/about-user-defined-networks.adoc | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc index 3069e166250c..7105e5b5de79 100644 --- a/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc +++ b/networking/multiple_networks/primary_networks/about-user-defined-networks.adoc @@ -15,6 +15,11 @@ The following diagram shows four cluster namespaces, where each namespace has a image::527-OpenShift-UDN-isolation-012025.png[The namespace isolation concept in a user-defined network (UDN)] +[NOTE] +==== +Nodes that use `cgroupv1` Linux Control Groups (cgroup) must be reconfigured from `cgroupv1` to `cgroupv2` before creating a user-defined network. For more information, see xref:../../../nodes/clusters/nodes-cluster-cgroups-2.adoc#nodes-cluster-cgroups-2[Configuring Linux cgroup]. +==== + A cluster administrator can use a user-defined network to create and define additional networks that span multiple namespaces at the cluster level by leveraging the `ClusterUserDefinedNetwork` custom resource (CR). Additionally, a cluster administrator or a cluster user can use a user-defined network to define additional networks at the namespace level with the `UserDefinedNetwork` CR. 
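+
+For reference, a namespace-scoped `UserDefinedNetwork` object follows this general shape. This is a sketch; the name, namespace, and subnet are hypothetical values:
+
+[source,yaml]
+----
+apiVersion: k8s.ovn.org/v1
+kind: UserDefinedNetwork
+metadata:
+  name: example-udn # hypothetical
+  namespace: example-namespace # hypothetical
+spec:
+  topology: Layer2
+  layer2:
+    role: Primary
+    subnets:
+    - "10.100.0.0/16" # hypothetical subnet
+----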
The following diagram shows tenant isolation that a cluster administrator created by defining a `ClusterUserDefinedNetwork` CR for each tenant. This network configuration allows a network to span across many namespaces. In the diagram, the `udn-1` disconnected network selects `namespace-1` and `namespace-2`, while the `udn-2` disconnected network selects `namespace-3` and `namespace-4`. A tenant acts as a disconnected network that is isolated from other tenants' networks. Pods from a namespace can communicate with pods in another namespace only if those namespaces exist in the same tenant network. From fa1fdc006907ac5ff3180ea3cb627d0744d7d4c1 Mon Sep 17 00:00:00 2001 From: Alex Dellapenta Date: Thu, 20 Feb 2025 14:01:29 -0700 Subject: [PATCH 324/669] OLMv1 capability and xref fixes --- _attributes/common-attributes.adoc | 10 +++++----- extensions/ce/user-access-resources.adoc | 2 +- modules/olm-overview.adoc | 13 ++++++++----- modules/olmv1-clusteroperator.adoc | 19 ++++++++++++++++++- operators/operator-reference.adoc | 4 ++-- 5 files changed, 34 insertions(+), 14 deletions(-) diff --git a/_attributes/common-attributes.adoc b/_attributes/common-attributes.adoc index 828ede7e0d2d..ea746372c9bd 100644 --- a/_attributes/common-attributes.adoc +++ b/_attributes/common-attributes.adoc @@ -229,11 +229,11 @@ endif::[] //Version-agnostic OLM :olm-first: Operator Lifecycle Manager (OLM) :olm: OLM -//Initial version of OLM that shipped with OCP 4, aka "v0" -:olmv0: existing OLM -:olmv0-caps: Existing OLM -:olmv0-first: existing Operator Lifecycle Manager (OLM) -:olmv0-first-caps: Existing Operator Lifecycle Manager (OLM) +//Initial version of OLM that shipped with OCP 4, aka "v0" and f/k/a "existing" during OLM v1's pre-4.18 TP phase +:olmv0: OLM (Classic) +:olmv0-caps: OLM (Classic) +:olmv0-first: Operator Lifecycle Manager (OLM) Classic +:olmv0-first-caps: Operator Lifecycle Manager (OLM) Classic //Next-gen (OCP 4.14+) Operator Lifecycle Manager, f/k/a "1.0" :olmv1: OLM v1 :olmv1-first: Operator Lifecycle Manager (OLM) v1 diff --git a/extensions/ce/user-access-resources.adoc b/extensions/ce/user-access-resources.adoc index 0ec88cda86ff..22efc68738ad 100644 --- a/extensions/ce/user-access-resources.adoc +++ b/extensions/ce/user-access-resources.adoc @@ -17,7 +17,7 @@ The RBAC permissions described for user access to extension resources are differ [role="_additional-resources"] .Additional resources -* xref:../../extensions/ce/managing-ce.adoc#managing-ce["Managing extensions" -> "Cluster extension permissions"] +* xref:../../extensions/ce/managing-ce.adoc#olmv1-cluster-extension-permissions_managing-ce["Managing extensions" -> "Cluster extension permissions"] include::modules/olmv1-default-cluster-roles-users.adoc[leveloffset=+1] diff --git a/modules/olm-overview.adoc b/modules/olm-overview.adoc index a1e302a91f65..189f3ab0858d 100644 --- a/modules/olm-overview.adoc +++ b/modules/olm-overview.adoc @@ -11,28 +11,31 @@ ifeval::["{context}" == "cluster-capabilities"] :cluster-caps: endif::[] - :_mod-docs-content-type: CONCEPT [id="olm-overview_{context}"] ifndef::operators[] ifndef::cluster-caps[] -= What is Operator Lifecycle Manager? += What is {olmv0-first}? 
endif::[] endif::[] ifdef::operators[] = Purpose endif::[] ifdef::cluster-caps[] -= Operator Lifecycle Manager capability += {olmv0-first} capability [discrete] == Purpose endif::[] -_Operator Lifecycle Manager_ (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their {product-title} clusters. It is part of the link:https://operatorframework.io/[Operator Framework], an open source toolkit designed to manage Operators in an effective, automated, and scalable way. +ifdef::cluster-caps[] +{olmv0} provides the features for the `OperatorLifecycleManager` capability. +endif::[] + +{olmv0-first} helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their {product-title} clusters. It is part of the link:https://operatorframework.io/[Operator Framework], an open source toolkit designed to manage Operators in an effective, automated, and scalable way. ifndef::cluster-caps[] -.Operator Lifecycle Manager workflow +.{olmv0} workflow image::olm-workflow.png[] OLM runs by default in {product-title} {product-version}, which aids diff --git a/modules/olmv1-clusteroperator.adoc b/modules/olmv1-clusteroperator.adoc index a24c554e6ab7..35d1c516bd49 100644 --- a/modules/olmv1-clusteroperator.adoc +++ b/modules/olmv1-clusteroperator.adoc @@ -3,14 +3,31 @@ // * operators/operator-reference.adoc // * installing/overview/cluster-capabilities.adoc +ifeval::["{context}" == "cluster-operators-ref"] +:operators: +endif::[] +ifeval::["{context}" == "cluster-capabilities"] +:cluster-caps: +endif::[] + :_mod-docs-content-type: CONCEPT + [id="cluster-operators-ref-olmv1_{context}"] +ifdef::operators[] = {olmv1-first} Operator +endif::[] +ifdef::cluster-caps[] += {olmv1-first} capability +endif::[] [discrete] == Purpose -Starting in {product-title} 4.18, {olmv1-first} is enabled by default alongside the {olmv0}. This next-generation iteration provides an updated framework that evolves many of the {olmv0} concepts that enable cluster administrators to extend capabilities for their users. +ifdef::cluster-caps[] +{olmv1} provides the features for the `OperatorLifecycleManagerV1` capability. +endif::[] + +Starting in {product-title} 4.18, {olmv1} is enabled by default alongside {olmv0}. This next-generation iteration provides an updated framework that evolves many of {olmv0} concepts that enable cluster administrators to extend capabilities for their users. {olmv1} manages the lifecycle of the new `ClusterExtension` object, which includes Operators via the `registry+v1` bundle format, and controls installation, upgrade, and role-based access control (RBAC) of extensions within a cluster. diff --git a/operators/operator-reference.adoc b/operators/operator-reference.adoc index 2072519460ee..004a94293fe9 100644 --- a/operators/operator-reference.adoc +++ b/operators/operator-reference.adoc @@ -135,11 +135,11 @@ include::modules/openshift-apiserver-operator.adoc[leveloffset=+1] include::modules/cluster-openshift-controller-manager-operators.adoc[leveloffset=+1] [id="cluster-operators-ref-olm"] -== Operator Lifecycle Manager (OLM) Operators +== {olmv0-first} Operators [NOTE] ==== -The following sections pertain to the {olmv0-first} that has been included with {product-title} 4 since its initial release. For {olmv1}, see xref:../operators/operator-reference.adoc#cluster-operators-ref-olmv1_cluster-operators-ref[{olmv1-first} Operators]. 
+The following sections pertain to {olmv0-first} that has been included with {product-title} 4 since its initial release. For {olmv1}, see xref:../operators/operator-reference.adoc#cluster-operators-ref-olmv1_cluster-operators-ref[{olmv1-first} Operators].
 ====
 
 [discrete]

From cbcd5f90f6c277ae8f17300bd9f593037ae834d6 Mon Sep 17 00:00:00 2001
From: Jaromir Hradilek <jhradilek@redhat.com>
Date: Mon, 13 Jan 2025 18:57:56 +0100
Subject: [PATCH 325/669] CNV-44934: Documented storage class migration

---
 _topic_maps/_topic_map.yml                   |  2 +
 _topic_maps/_topic_map_rosa.yml              |  2 +
 modules/virt-migrating-storage-class-ui.adoc | 42 +++++++++++++++++++
 .../virt-migrating-storage-class.adoc        | 13 ++++++
 4 files changed, 59 insertions(+)
 create mode 100644 modules/virt-migrating-storage-class-ui.adoc
 create mode 100644 virt/managing_vms/virtual_disks/virt-migrating-storage-class.adoc

diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml
index 0e46fc4e302a..bd6360a64857 100644
--- a/_topic_maps/_topic_map.yml
+++ b/_topic_maps/_topic_map.yml
@@ -4685,6 +4685,8 @@ Topics:
         File: virt-expanding-vm-disks
       - Name: Configuring shared volumes
         File: virt-configuring-shared-volumes-for-vms
+      - Name: Migrating VM disks to a different storage class
+        File: virt-migrating-storage-class
   - Name: Networking
     Dir: vm_networking
     Topics:
diff --git a/_topic_maps/_topic_map_rosa.yml b/_topic_maps/_topic_map_rosa.yml
index 8cc0aaeef5ec..c916a96b868a 100644
--- a/_topic_maps/_topic_map_rosa.yml
+++ b/_topic_maps/_topic_map_rosa.yml
@@ -1986,6 +1986,8 @@ Topics:
 # Need to check if supported:
 #      - Name: Configuring shared volumes
 #        File: virt-configuring-shared-volumes-for-vms
+      - Name: Migrating VM disks to a different storage class
+        File: virt-migrating-storage-class
   - Name: Networking
     Dir: vm_networking
     Topics:
diff --git a/modules/virt-migrating-storage-class-ui.adoc b/modules/virt-migrating-storage-class-ui.adoc
new file mode 100644
index 000000000000..10af5ab23864
--- /dev/null
+++ b/modules/virt-migrating-storage-class-ui.adoc
@@ -0,0 +1,42 @@
+// Module included in the following assemblies:
+//
+// * virt/managing_vms/virtual_disks/virt-migrating-storage-class.adoc
+
+:_mod-docs-content-type: PROCEDURE
+
+[id="virt-migrating-storage-class-ui_{context}"]
+= Migrating VM disks to a different storage class by using the web console
+
+You can migrate one or more disks attached to a virtual machine (VM) to a different storage class by using the {product-title} web console. When performing this action on a running VM, the operation of the VM is not interrupted and the data on the migrated disks remains accessible.
+
+[NOTE]
+====
+With the {VirtProductName} Operator, you can only start storage class migration for one VM at a time and the VM must be running. If you need to migrate more VMs at once or migrate a mix of running and stopped VMs, consider using the link:https://docs.redhat.com/en/documentation/migration_toolkit_for_containers/{mtc-version}/html/migration_toolkit_for_containers/index[{mtc-first}].
+
+{mtc-full} is not part of {VirtProductName} and requires separate installation.
+====
+
+:FeatureName: Storage class migration
+include::snippets/technology-preview.adoc[]
+
+.Prerequisites
+
+* You must have a data volume or a persistent volume claim (PVC) available for storage class migration.
+* The cluster must have a node available for live migration. As part of the storage class migration, the VM is live migrated to a different node.
+* The VM must be running.
+
+.Procedure
+
+. 
Navigate to *Virtualization* -> *VirtualMachines* in the web console. +. Click the Options menu {kebab} beside the virtual machine and select *Migration* -> *Storage*. ++ +You can also access this option from the *VirtualMachine details* page by selecting *Actions* -> *Migration* -> *Storage*. +. On the *Migration details* page, choose whether to migrate the entire VM storage or selected volumes only. If you click *Selected volumes*, select any disks that you intend to migrate. Click *Next* to proceed. +. From the list of available options on the *Destination StorageClass* page, select the storage class to migrate to. Click *Next* to proceed. +. On the *Review* page, review the list of affected disks and the target storage class. To start the migration, click *Migrate VirtualMachine storage*. +. Stay on the *Migrate VirtualMachine storage* page to watch the progress and wait for the confirmation that the migration completed successfully. + +.Verification + +. From the *VirtualMachine details* page, navigate to *Configuration* -> *Storage*. +. Verify that all disks have the expected storage class listed in the *Storage class* column. diff --git a/virt/managing_vms/virtual_disks/virt-migrating-storage-class.adoc b/virt/managing_vms/virtual_disks/virt-migrating-storage-class.adoc new file mode 100644 index 000000000000..7f766f2300b5 --- /dev/null +++ b/virt/managing_vms/virtual_disks/virt-migrating-storage-class.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="virt-migrating-storage-class"] += Migrating VM disks to a different storage class + +include::_attributes/common-attributes.adoc[] +:context: virt-migrating-storage-class + +toc::[] + +You can migrate one or more virtual disks to a different storage class without stopping your virtual machine (VM) or virtual machine instance (VMI). 
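+
+You can also confirm the resulting storage class assignment from the command line. This is a sketch; replace `<namespace>` with the namespace of the VM:
+
+[source,terminal]
+----
+$ oc get pvc -n <namespace> -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName
+----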
+
+include::modules/virt-migrating-storage-class-ui.adoc[leveloffset=+1]

From 94e906ab8fb3d2643a4911274de4514e2b4c4217 Mon Sep 17 00:00:00 2001
From: Michael Ryan Peter <mpeter@redhat.com>
Date: Thu, 20 Feb 2025 12:42:35 -0500
Subject: [PATCH 326/669] OCPBUGS#51125: late breaking OLMv1 4.18 doc bugs

---
 extensions/catalogs/managing-catalogs.adoc    |  1 +
 modules/olmv1-about-target-versions.adoc      | 76 +++++++++++--------
 .../olmv1-disabling-a-default-catalog.adoc    | 42 ++++++++++
 .../olmv1-forcing-an-update-or-rollback.adoc  | 36 +++++----
 modules/olmv1-installing-an-operator.adoc     |  2 +
 modules/olmv1-version-range-comparisons.adoc  | 21 +++--
 6 files changed, 122 insertions(+), 56 deletions(-)
 create mode 100644 modules/olmv1-disabling-a-default-catalog.adoc

diff --git a/extensions/catalogs/managing-catalogs.adoc b/extensions/catalogs/managing-catalogs.adoc
index ed31693f80b4..530a8a5703c5 100644
--- a/extensions/catalogs/managing-catalogs.adoc
+++ b/extensions/catalogs/managing-catalogs.adoc
@@ -28,3 +28,4 @@ include::modules/olmv1-about-catalogs.adoc[leveloffset=+1]
 include::modules/olmv1-red-hat-catalogs.adoc[leveloffset=+1]
 include::modules/olmv1-adding-a-catalog.adoc[leveloffset=+1]
 include::modules/olmv1-deleting-catalog.adoc[leveloffset=+1]
+include::modules/olmv1-disabling-a-default-catalog.adoc[leveloffset=+1]
diff --git a/modules/olmv1-about-target-versions.adoc b/modules/olmv1-about-target-versions.adoc
index 83d51b14e1f0..aaf4bf5b5c0f 100644
--- a/modules/olmv1-about-target-versions.adoc
+++ b/modules/olmv1-about-target-versions.adoc
@@ -22,18 +22,22 @@ If you specify a channel in the CR, {olmv1} installs the latest version of the O
 .Example CR with a specified channel
 [source,yaml]
 ----
-apiVersion: olm.operatorframework.io/v1alpha1
-kind: ClusterExtension
-metadata:
-  name: pipelines-operator
-spec:
-  packageName: openshift-pipelines-operator-rh
-  installNamespace: <namespace>
-  serviceAccount:
-    name: <service_account_name>
-  channel: latest <1>
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: <clusterextension_name>
+spec:
+  namespace: <installed_namespace>
+  serviceAccount:
+    name: <service_account_installer_name>
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: <package_name>
+      channels:
+      - latest <1>
 ----
-<1> Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed.
+<1> Optional: Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed. Specify the value of the `channels` parameter as an array.
 
 If you specify the Operator or extension's target version in the CR, {olmv1} installs the specified version. When the target version is specified in the CR, {olmv1} does not change the target version when updates are published to the catalog.
 
 If you want to update the version of the Operator that is installed on the cluster, you must manually edit the Operator's CR to the desired target version.
 
 .Example CR with the target version specified
 [source,yaml]
 ----
-apiVersion: olm.operatorframework.io/v1alpha1
-kind: ClusterExtension
-metadata:
-  name: pipelines-operator
-spec:
-  packageName: openshift-pipelines-operator-rh
-  installNamespace: <namespace>
-  serviceAccount:
-    name: <service_account_name>
-  version: "1.11.1" <1>
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: <clusterextension_name>
+spec:
+  namespace: <installed_namespace>
+  serviceAccount:
+    name: <service_account_installer_name>
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: <package_name>
+      version: "1.11.1" <1>
 ----
-<1> Specifies the target version. If you want to update the version of the Operator or extension that is installed, you must manually update this field the CR to the desired target version.
+<1> Optional: Specifies the target version. If you want to update the version of the Operator or extension that is installed, you must manually update this field in the CR to the desired target version.
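+
+For example, one way to move the pinned version is a merge patch against the field shown in callout <1>. This is a sketch; the `ClusterExtension` name and the new version are placeholders:
+
+[source,terminal]
+----
+$ oc patch clusterextension <clusterextension_name> --type merge \
+  -p '{"spec":{"source":{"catalog":{"version":"1.12.1"}}}}'
+----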
+<1> Optional: Specifies the target version. If you want to update the version of the Operator or extension that is installed, you must manually update this field the CR to the desired target version. If you want to define a range of acceptable versions for an Operator or extension, you can specify a version range by using a comparison string. When you specify a version range, {olmv1} installs the latest version of an Operator or extension that can be resolved by the Operator Controller. .Example CR with a version range specified [source,yaml] ---- -apiVersion: olm.operatorframework.io/v1alpha1 -kind: ClusterExtension -metadata: - name: pipelines-operator -spec: - packageName: openshift-pipelines-operator-rh - installNamespace: - serviceAccount: - name: - version: ">1.11.1" <1> +apiVersion: olm.operatorframework.io/v1 + kind: ClusterExtension + metadata: + name: + spec: + namespace: + serviceAccount: + name: + source: + sourceType: Catalog + catalog: + packageName: + version: ">1.11.1" <1> ---- -<1> Specifies that the desired version range is greater than version `1.11.1`. For more information, see "Support for version ranges". +<1> Optional: Specifies that the desired version range is greater than version `1.11.1`. For more information, see "Support for version ranges". After you create or update a CR, apply the configuration file by running the following command: diff --git a/modules/olmv1-disabling-a-default-catalog.adoc b/modules/olmv1-disabling-a-default-catalog.adoc new file mode 100644 index 000000000000..3764240ac8f3 --- /dev/null +++ b/modules/olmv1-disabling-a-default-catalog.adoc @@ -0,0 +1,42 @@ +// Module included in the following assemblies: +// +// * operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc + +:_mod-docs-content-type: PROCEDURE + +[id="olmv1-disabling-a-default-catalog_{context}"] += Disabling a default catalog + +You can disable the Red{nbsp}Hat-provided catalogs that are included with {product-title} by default. + +.Procedure + +* Disable a default catalog by running the following command: ++ +[source,terminal] +---- +$ oc patch clustercatalog openshift-certified-operators -p \ + '{"spec": {"availabilityMode": "Unavailable"}}' --type=merge +---- ++ +.Example output +[source,text] +---- +clustercatalog.olm.operatorframework.io/openshift-certified-operators patched +---- + +.Verification + +* Verify the catalog is disabled by running the following command: ++ +[source,terminal] +---- +$ oc get clustercatalog openshift-certified-operators +---- ++ +.Example output +[source,text] +---- +NAME LASTUNPACKED SERVING AGE +openshift-certified-operators False 6h54m +---- diff --git a/modules/olmv1-forcing-an-update-or-rollback.adoc b/modules/olmv1-forcing-an-update-or-rollback.adoc index 9f4343551455..0682ac058358 100644 --- a/modules/olmv1-forcing-an-update-or-rollback.adoc +++ b/modules/olmv1-forcing-an-update-or-rollback.adoc @@ -27,22 +27,28 @@ You must verify the consequences of forcing a manual update or rollback. 
Failure .Example CR [source,yaml] ---- -apiVersion: olm.operatorframework.io/v1alpha1 -kind: Operator -metadata: - name: <1> -spec: - packageName: <2> - installNamespace: - serviceAccount: - name: - version: <3> - upgradeConstraintPolicy: Ignore <4> +apiVersion: olm.operatorframework.io/v1 + kind: ClusterExtension + metadata: + name: + spec: + namespace: <1> + serviceAccount: + name: <2> + source: + sourceType: Catalog + catalog: + packageName: + channels: + - <3> + version: <4> + upgradeConstraintPolicy: SelfCertified <5> ---- -<1> Specifies the name of the Operator or extension, such as `pipelines-operator` -<2> Specifies the package name, such as `openshift-pipelines-operator-rh`. -<3> Specifies the blocked update or rollback version. -<4> Optional: Specifies the upgrade constraint policy. To force an update or rollback, set the field to `Ignore`. If unspecified, the default setting is `Enforce`. +<1> Specifies the namespace where you want the bundle installed, such as `pipelines` or `my-extension`. Extensions are still cluster-scoped and might contain resources that are installed in different namespaces. +<2> Specifies the name of the service account you created to install, update, and manage your extension. +<3> Optional: Specifies channel names as an array, such as `pipelines-1.14` or `latest`. +<4> Optional: Specifies the version or version range, such as `1.14.0`, `1.14.x`, or `>=1.16`, of the package you want to install or update. For more information, see "Example custom resources (CRs) that specify a target version" and "Support for version ranges". +<5> Optional: Specifies the upgrade constraint policy. To force an update or rollback, set the field to `SelfCertified`. If unspecified, the default setting is `CatalogProvided`. The `CatalogProvided` setting only updates if the new version satisfies the upgrade constraints set by the package author. . Apply the changes to your Operator or extensions CR by running the following command: + diff --git a/modules/olmv1-installing-an-operator.adoc b/modules/olmv1-installing-an-operator.adoc index 87b1bdea2440..ad15092b81c4 100644 --- a/modules/olmv1-installing-an-operator.adoc +++ b/modules/olmv1-installing-an-operator.adoc @@ -34,11 +34,13 @@ apiVersion: olm.operatorframework.io/v1 channels: - <3> version: <4> + upgradeConstraintPolicy: CatalogProvided <5> ---- <1> Specifies the namespace where you want the bundle installed, such as `pipelines` or `my-extension`. Extensions are still cluster-scoped and might contain resources that are installed in different namespaces. <2> Specifies the name of the service account you created to install, update, and manage your extension. <3> Optional: Specifies channel names as an array, such as `pipelines-1.14` or `latest`. <4> Optional: Specifies the version or version range, such as `1.14.0`, `1.14.x`, or `>=1.16`, of the package you want to install or update. For more information, see "Example custom resources (CRs) that specify a target version" and "Support for version ranges". +<5> Optional: Specifies the upgrade constraint policy. If unspecified, the default setting is `CatalogProvided`. The `CatalogProvided` setting only updates if the new version satisfies the upgrade constraints set by the package author. To force an update or rollback, set the field to `SelfCertified`. For more information, see "Forcing an update or rollback". 
.Example `pipelines-operator.yaml` CR [source,yaml] diff --git a/modules/olmv1-version-range-comparisons.adoc b/modules/olmv1-version-range-comparisons.adoc index c5f2ac2fda07..2c3cc0e5123a 100644 --- a/modules/olmv1-version-range-comparisons.adoc +++ b/modules/olmv1-version-range-comparisons.adoc @@ -40,14 +40,19 @@ You can specify a version range in an Operator or extension's CR by using a rang .Example version range comparison [source,yaml] ---- -apiVersion: olm.operatorframework.io/v1alpha1 -kind: ClusterExtension -metadata: - name: pipelines-operator -spec: - packageName: openshift-pipelines-operator-rh - installNamespace: - version: ">=1.11, <1.13" +apiVersion: olm.operatorframework.io/v1 + kind: ClusterExtension + metadata: + name: + spec: + namespace: + serviceAccount: + name: + source: + sourceType: Catalog + catalog: + packageName: + version: ">=1.11, <1.13" ---- You can use wildcard characters in all types of comparison strings. {olmv1} accepts `x`, `X`, and asterisks (`*`) as wildcard characters. When you use a wildcard character with the equal sign (`=`) comparison operator, you define a comparison at the patch or minor version level. From cbcd5f90f6c277ae8f17300bd9f593037ae834d6 Mon Sep 17 00:00:00 2001 From: Michael Burke Date: Thu, 20 Feb 2025 12:09:29 -0500 Subject: [PATCH 327/669] WMCO hiding 4.18.0 docs --- _topic_maps/_topic_map.yml | 86 ++++++------ ...installing-aws-network-customizations.adoc | 6 +- ...stalling-azure-network-customizations.adoc | 6 +- ...zure-stack-hub-network-customizations.adoc | 7 +- installing/overview/installing-preparing.adoc | 6 +- modules/configuring-hybrid-ovnkubernetes.adoc | 130 ++++++++++++++++++ .../configuring-hybrid-networking.adoc | 7 +- release_notes/addtl-release-notes.adoc | 2 +- welcome/learn_more_about_openshift.adoc | 9 +- windows_containers/index.adoc | 8 +- 10 files changed, 207 insertions(+), 60 deletions(-) diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index bd6360a64857..73a457a71471 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -853,8 +853,8 @@ Topics: File: troubleshooting-s2i - Name: Troubleshooting storage issues File: troubleshooting-storage-issues - - Name: Troubleshooting Windows container workload issues - File: troubleshooting-windows-container-workload-issues +# - Name: Troubleshooting Windows container workload issues +# File: troubleshooting-windows-container-workload-issues - Name: Investigating monitoring issues File: investigating-monitoring-issues - Name: Diagnosing OpenShift CLI (oc) issues @@ -2835,47 +2835,47 @@ Distros: openshift-origin,openshift-enterprise Topics: - Name: Red Hat OpenShift support for Windows Containers overview File: index -- Name: Release notes - Dir: wmco_rn - Topics: - - Name: Red Hat OpenShift support for Windows Containers release notes - File: windows-containers-release-notes-10-17-x - - Name: Past releases - File: windows-containers-release-notes-10-17-x-past - - Name: Windows Machine Config Operator prerequisites - File: windows-containers-release-notes-10-17-x-prereqs - - Name: Windows Machine Config Operator known limitations - File: windows-containers-release-notes-10-17-x-limitations -- Name: Getting support - File: windows-containers-support - Distros: openshift-enterprise -- Name: Understanding Windows container workloads - File: understanding-windows-container-workloads -- Name: Enabling Windows container workloads - File: enabling-windows-container-workloads -- Name: Creating Windows machine sets - Dir: 
creating_windows_machinesets - Topics: - - Name: Creating a Windows machine set on AWS - File: creating-windows-machineset-aws - - Name: Creating a Windows machine set on Azure - File: creating-windows-machineset-azure - - Name: Creating a Windows machine set on GCP - File: creating-windows-machineset-gcp - - Name: Creating a Windows machine set on Nutanix - File: creating-windows-machineset-nutanix - - Name: Creating a Windows machine set on vSphere - File: creating-windows-machineset-vsphere -- Name: Scheduling Windows container workloads - File: scheduling-windows-workloads -- Name: Windows node updates - File: windows-node-upgrades -- Name: Using Bring-Your-Own-Host Windows instances as nodes - File: byoh-windows-instance -- Name: Removing Windows nodes - File: removing-windows-nodes -- Name: Disabling Windows container workloads - File: disabling-windows-container-workloads +#- Name: Release notes +# Dir: wmco_rn +# Topics: +# - Name: Red Hat OpenShift support for Windows Containers release notes +# File: windows-containers-release-notes-10-17-x +# - Name: Past releases +# File: windows-containers-release-notes-10-17-x-past +# - Name: Windows Machine Config Operator prerequisites +# File: windows-containers-release-notes-10-17-x-prereqs +# - Name: Windows Machine Config Operator known limitations +# File: windows-containers-release-notes-10-17-x-limitations +#- Name: Getting support +# File: windows-containers-support +# Distros: openshift-enterprise +#- Name: Understanding Windows container workloads +# File: understanding-windows-container-workloads +#- Name: Enabling Windows container workloads +# File: enabling-windows-container-workloads +#- Name: Creating Windows machine sets +# Dir: creating_windows_machinesets +# Topics: +# - Name: Creating a Windows machine set on AWS +# File: creating-windows-machineset-aws +# - Name: Creating a Windows machine set on Azure +# File: creating-windows-machineset-azure +# - Name: Creating a Windows machine set on GCP +# File: creating-windows-machineset-gcp +# - Name: Creating a Windows machine set on Nutanix +# File: creating-windows-machineset-nutanix +# - Name: Creating a Windows machine set on vSphere +# File: creating-windows-machineset-vsphere +#- Name: Scheduling Windows container workloads +# File: scheduling-windows-workloads +#- Name: Windows node updates +# File: windows-node-upgrades +#- Name: Using Bring-Your-Own-Host Windows instances as nodes +# File: byoh-windows-instance +#- Name: Removing Windows nodes +# File: removing-windows-nodes +#- Name: Disabling Windows container workloads +# File: disabling-windows-container-workloads --- Name: OpenShift sandboxed containers Dir: sandboxed_containers diff --git a/installing/installing_aws/ipi/installing-aws-network-customizations.adoc b/installing/installing_aws/ipi/installing-aws-network-customizations.adoc index 8220facac7cb..0e579a1df1ca 100644 --- a/installing/installing_aws/ipi/installing-aws-network-customizations.adoc +++ b/installing/installing_aws/ipi/installing-aws-network-customizations.adoc @@ -104,11 +104,13 @@ include::modules/nw-aws-nlb-new-cluster.adoc[leveloffset=+1] include::modules/configuring-hybrid-ovnkubernetes.adoc[leveloffset=+1] - +//// +Hiding until WMCO 10.18.0 GAs [NOTE] ==== -For more information about using Linux and Windows nodes in the same cluster, see xref:../../../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads[Understanding Windows container workloads]. 
+For more information about using Linux and Windows nodes in the same cluster, see ../../../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads[Understanding Windows container workloads]. ==== +//// include::modules/installation-launching-installer.adoc[leveloffset=+1] diff --git a/installing/installing_azure/ipi/installing-azure-network-customizations.adoc b/installing/installing_azure/ipi/installing-azure-network-customizations.adoc index eac3e048a01d..6e4d03225a45 100644 --- a/installing/installing_azure/ipi/installing-azure-network-customizations.adoc +++ b/installing/installing_azure/ipi/installing-azure-network-customizations.adoc @@ -46,11 +46,13 @@ include::modules/nw-modifying-operator-install-config.adoc[leveloffset=+1] include::modules/nw-operator-cr.adoc[leveloffset=+1] include::modules/configuring-hybrid-ovnkubernetes.adoc[leveloffset=+1] - +//// +Hiding until WMCO 10.18.0 GAs [NOTE] ==== -For more information about using Linux and Windows nodes in the same cluster, see xref:../../../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads[Understanding Windows container workloads]. +For more information about using Linux and Windows nodes in the same cluster, see ../../../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads[Understanding Windows container workloads]. ==== +//// [role="_additional-resources"] .Additional resources diff --git a/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.adoc b/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.adoc index 532b5731b1b0..eb496de661b1 100644 --- a/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.adoc +++ b/installing/installing_azure_stack_hub/ipi/installing-azure-stack-hub-network-customizations.adoc @@ -50,12 +50,13 @@ include::modules/nw-modifying-operator-install-config.adoc[leveloffset=+1] include::modules/nw-operator-cr.adoc[leveloffset=+1] include::modules/configuring-hybrid-ovnkubernetes.adoc[leveloffset=+1] - +//// +Hiding until WMCO 10.18.0 GAs [NOTE] ==== -For more information about using Linux and Windows nodes in the same cluster, see xref:../../../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads[Understanding Windows container workloads]. +For more information about using Linux and Windows nodes in the same cluster, see ../../../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads[Understanding Windows container workloads]. ==== - +//// include::modules/installation-launching-installer.adoc[leveloffset=+1] diff --git a/installing/overview/installing-preparing.adoc b/installing/overview/installing-preparing.adoc index e2f89ec81f7a..b34abb4255cf 100644 --- a/installing/overview/installing-preparing.adoc +++ b/installing/overview/installing-preparing.adoc @@ -116,7 +116,11 @@ For a production cluster, you must configure the following integrations: == Preparing your cluster for workloads Depending on your workload needs, you might need to take extra steps before you begin deploying applications. 
For example, after you prepare infrastructure to support your application xref:../../cicd/builds/build-strategies.adoc#build-strategies[build strategy], you might need to make provisions for xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-low-latency-perf-profile[low-latency] workloads or to xref:../../nodes/pods/nodes-pods-secrets.adoc#nodes-pods-secrets[protect sensitive workloads]. You can also configure xref:../../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#enabling-monitoring-for-user-defined-projects-uwm_preparing-to-configure-the-monitoring-stack-uwm[monitoring] for application workloads.
-If you plan to run xref:../../windows_containers/enabling-windows-container-workloads.adoc#enabling-windows-container-workloads[Windows workloads], you must enable xref:../../networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc#configuring-hybrid-networking[hybrid networking with OVN-Kubernetes] during the installation process; hybrid networking cannot be enabled after your cluster is installed.
+
+////
+Hiding until WMCO 10.18.0 GAs
+If you plan to run ../../windows_containers/enabling-windows-container-workloads.adoc#enabling-windows-container-workloads[Windows workloads], you must enable xref:../../networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc#configuring-hybrid-networking[hybrid networking with OVN-Kubernetes] during the installation process; hybrid networking cannot be enabled after your cluster is installed.
+////
 
 [id="supported-installation-methods-for-different-platforms"]
 == Supported installation methods for different platforms
diff --git a/modules/configuring-hybrid-ovnkubernetes.adoc b/modules/configuring-hybrid-ovnkubernetes.adoc
index 24078be8c200..9af2588a56d7 100644
--- a/modules/configuring-hybrid-ovnkubernetes.adoc
+++ b/modules/configuring-hybrid-ovnkubernetes.adoc
@@ -15,14 +15,142 @@ endif::[]
 
 You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations.
 
+////
+Hiding until WMCO 10.18.0 GAs
 [NOTE]
 ====
 This configuration is necessary to run both Linux and Windows nodes in the same cluster.
 ====
+////
 
 ifndef::post-install[]
 .Prerequisites
 
+// Made changes to hide Windows-related material until WMCO 10.18.0 releases. Below is the full procedure, commented out.
+
+* You defined `OVNKubernetes` for the `networking.networkType` parameter in the `install-config.yaml` file. See the installation documentation for configuring {product-title} network customizations on your chosen cloud provider for more information.
+
+.Procedure
+
+. Change to the directory that contains the installation program and create the manifests:
++
+[source,terminal]
+----
+$ ./openshift-install create manifests --dir <installation_directory>
+----
++
+--
+where:
+
+`<installation_directory>`:: Specifies the name of the directory that contains the `install-config.yaml` file for your cluster.
+--
+
+. Create a stub manifest file for the advanced network configuration that is named `cluster-network-03-config.yml` in the `<installation_directory>/manifests/` directory:
++
+[source,terminal]
+----
+$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
+apiVersion: operator.openshift.io/v1
+kind: Network
+metadata:
+  name: cluster
+spec:
+EOF
+----
++
+--
+where:
+
+`<installation_directory>`:: Specifies the directory name that contains the
+`manifests/` directory for your cluster.
+--
+
+. Open the `cluster-network-03-config.yml` file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example:
++
+--
+.Specify a hybrid networking configuration
+[source,yaml]
+----
+apiVersion: operator.openshift.io/v1
+kind: Network
+metadata:
+  name: cluster
+spec:
+  defaultNetwork:
+    ovnKubernetesConfig:
+      hybridOverlayConfig:
+        hybridClusterNetwork: <1>
+        - cidr: 10.132.0.0/14
+          hostPrefix: 23
+----
+<1> Specify the CIDR configuration used for nodes on the additional overlay network. The `hybridClusterNetwork` CIDR must not overlap with the `clusterNetwork` CIDR.
+--
+
+. Save the `cluster-network-03-config.yml` file and quit the text editor.
+. Optional: Back up the `manifests/cluster-network-03-config.yml` file. The
+installation program deletes the `manifests/` directory when creating the
+cluster.
+endif::post-install[]
+ifdef::post-install[]
+.Prerequisites
+
+* Install the OpenShift CLI (`oc`).
+* Log in to the cluster as a user with `cluster-admin` privileges.
+* Ensure that the cluster uses the OVN-Kubernetes network plugin.
+
+.Procedure
+
+
+. To configure the OVN-Kubernetes hybrid network overlay, enter the following command:
++
+[source,terminal]
+----
+$ oc patch networks.operator.openshift.io cluster --type=merge \
+  -p '{
+    "spec":{
+      "defaultNetwork":{
+        "ovnKubernetesConfig":{
+          "hybridOverlayConfig":{
+            "hybridClusterNetwork":[
+              {
+                "cidr": "<cidr>",
+                "hostPrefix": <hostPrefix>
+              }
+            ]
+          }
+        }
+      }
+    }
+  }'
+----
++
+--
+where:
+
+`cidr`:: Specifies the CIDR configuration used for nodes on the additional overlay network. This CIDR must not overlap with the cluster network CIDR.
+`hostPrefix`:: Specifies the subnet prefix length to assign to each individual node. For example, if `hostPrefix` is set to `23`, then each node is assigned a `/23` subnet out of the given `cidr`, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic.
+--
++
+.Example output
+[source,text]
+----
+network.operator.openshift.io/cluster patched
+----
+
+. To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply.
++
+[source,terminal]
+----
+$ oc get network.operator.openshift.io -o jsonpath="{.items[0].spec.defaultNetwork.ovnKubernetesConfig}"
+----
+
+endif::post-install[]
+
+////
+Hiding until WMCO 10.18.0 GAs
+ifndef::post-install[]
+.Prerequisites
+
 * You defined `OVNKubernetes` for the `networking.networkType` parameter in the `install-config.yaml` file. See the installation documentation for configuring {product-title} network customizations on your chosen cloud provider for more information.
 
 .Procedure
@@ -102,6 +230,7 @@ ifdef::post-install[]
 
 .Procedure
 
+
 . To configure the OVN-Kubernetes hybrid network overlay, enter the following command:
 +
 [source,terminal]
 ----
@@ -152,6 +281,7 @@ network.operator.openshift.io/cluster patched
 $ oc get network.operator.openshift.io -o jsonpath="{.items[0].spec.defaultNetwork.ovnKubernetesConfig}"
 ----
 endif::post-install[]
+////
 
 ifdef::post-install[]
 :!post-install:
diff --git a/networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc b/networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc
index 36343a345699..f49a7a7b8e4d 100644
--- a/networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc
+++ b/networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc
@@ -14,7 +14,10 @@ include::modules/configuring-hybrid-ovnkubernetes.adoc[leveloffset=+1]
 [id="configuring-hybrid-networking-additional-resources"]
 == Additional resources
 
-* xref:../../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads[Understanding Windows container workloads]
-* xref:../../windows_containers/enabling-windows-container-workloads.adoc#enabling-windows-container-workloads[Enabling Windows container workloads]
+////
+Hiding until WMCO 10.18.0 GAs
+* ../../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads[Understanding Windows container workloads]
+* ../../windows_containers/enabling-windows-container-workloads.adoc#enabling-windows-container-workloads[Enabling Windows container workloads]
+////
 * xref:../../installing/installing_aws/ipi/installing-aws-network-customizations.adoc#installing-aws-network-customizations[Installing a cluster on AWS with network customizations]
 * xref:../../installing/installing_azure/ipi/installing-azure-network-customizations.adoc#installing-azure-network-customizations[Installing a cluster on Azure with network customizations]
diff --git a/release_notes/addtl-release-notes.adoc b/release_notes/addtl-release-notes.adoc
index 37199ba2627e..f860c211b416 100644
--- a/release_notes/addtl-release-notes.adoc
+++ b/release_notes/addtl-release-notes.adoc
@@ -12,4 +12,4 @@ Purpose: A compilation of links to release notes for additional related componen
 Do not add or edit this file here on the `main` branch. Edit the `addtl-release-notes.adoc` file directly in the branch that a change is relevant for.
 
 Changes to this file should be added or edited in their own PR, per version branch as needed.
-This is because there might be different version compatabilities with OCP, or some components/products/Operators might not be available all for a given OCP version.
\ No newline at end of file
+This is because there might be different version compatibilities with OCP, or some components/products/Operators might not all be available for a given OCP version.
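A note on the verification step added to `modules/configuring-hybrid-ovnkubernetes.adoc` above: the `jsonpath` query returns the `ovnKubernetesConfig` stanza of the cluster `Network` operator configuration. The following is a sketch of the portion relevant to the hybrid overlay, assuming the example `10.132.0.0/14` CIDR and host prefix `23` from the install-time example; it is not output captured from a live cluster, and a real cluster reports additional fields (such as the MTU and the Geneve port) alongside this stanza:

[source,json]
----
{
  "hybridOverlayConfig": {
    "hybridClusterNetwork": [
      {
        "cidr": "10.132.0.0/14",
        "hostPrefix": 23
      }
    ]
  }
}
----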
diff --git a/welcome/learn_more_about_openshift.adoc b/welcome/learn_more_about_openshift.adoc
index ad96ef77f8be..b700b079570e 100644
--- a/welcome/learn_more_about_openshift.adoc
+++ b/welcome/learn_more_about_openshift.adoc
@@ -172,11 +172,14 @@ a|* xref:../networking/networking_operators/cluster-network-operator.adoc#nw-clu
 | xref:../operators/understanding/olm-understanding-operatorhub.adoc#olm-understanding-operatorhub[Manage Operators]
 | xref:../operators/user/olm-creating-apps-from-installed-operators.adoc#olm-creating-apps-from-installed-operators[Creating applications from installed Operators]
 
-| xref:../windows_containers/index.adoc#index[{productwinc} overview]
-| xref:../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads_understanding-windows-container-workloads[Understanding Windows container workloads]
-
 |===
 
+////
+Hiding until WMCO 10.18.0 releases, replace as the last row of the above table after WMCO GAs
+| xref: ../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads_understanding-windows-container-workloads[Understanding Windows container workloads]
+|
+////
+
 [discrete]
 ==== Changing cluster components
diff --git a/windows_containers/index.adoc b/windows_containers/index.adoc
index dca33f9ae4fc..f2ce1808b2cd 100644
--- a/windows_containers/index.adoc
+++ b/windows_containers/index.adoc
@@ -6,6 +6,11 @@ include::_attributes/common-attributes.adoc[]
 
 toc::[]
 
+
+Documentation for {productwinc} is planned to be available for {product-title} {product-version} in the near future.
+
+////
+Hiding until WMCO 10.18.0 GAs
 {productwinc} is a feature providing the ability to run Windows compute nodes in an {product-title} cluster. This is possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes. With a Red Hat subscription, you can get support for running Windows workloads in {product-title}. Windows instances deployed by the WMCO are configured with the containerd container runtime. For more information, see the xref:../windows_containers/wmco_rn/windows-containers-release-notes-10-17-x.adoc#windows-containers-release-notes-10-17-x[release notes].
 
 You can add Windows nodes either by creating a xref:../windows_containers/creating_windows_machinesets/creating-windows-machineset-aws.adoc#creating-windows-machineset-aws[compute machine set] or by specifying existing Bring-Your-Own-Host (BYOH) Windows instances through a xref:../windows_containers/byoh-windows-instance.adoc#byoh-windows-instance[configuration map].
 
@@ -33,7 +38,4 @@ You can xref:../windows_containers/disabling-windows-container-workloads.adoc#di
 
 * Uninstalling the Windows Machine Config Operator
 * Deleting the Windows Machine Config Operator namespace
-
-////
-Documentation for {productwinc} will be available for {product-title} {product-version} in the near future.
//// From 1c10d5e9359939184e77bf98309a9c25716c30c3 Mon Sep 17 00:00:00 2001 From: Aidan Reilly <74046732+aireilly@users.noreply.github.com> Date: Thu, 13 Feb 2025 10:37:27 +0000 Subject: [PATCH 328/669] RAN RDS 4.18 docs Jeana's comments Update modules/telco-ran-gitops-operator-and-ztp-plugins.adoc Co-authored-by: Alex Dellapenta Update modules/telco-ran-gitops-operator-and-ztp-plugins.adoc Co-authored-by: Alex Dellapenta Update modules/telco-ran-gitops-operator-and-ztp-plugins.adoc Co-authored-by: Alex Dellapenta Update modules/telco-ran-bios-tuning.adoc Co-authored-by: Alex Dellapenta Update modules/telco-ran-gitops-operator-and-ztp-plugins.adoc Co-authored-by: Alex Dellapenta Update modules/telco-ran-lvms-operator.adoc Co-authored-by: Alex Dellapenta Update modules/telco-ran-local-storage-operator.adoc Co-authored-by: Alex Dellapenta Update modules/telco-ran-ptp-operator.adoc Co-authored-by: Alex Dellapenta Update modules/telco-ran-lca-operator.adoc Co-authored-by: Alex Dellapenta Update modules/telco-ran-lvms-operator.adoc Co-authored-by: Alex Dellapenta Update modules/telco-ran-sr-iov-operator.adoc Co-authored-by: Alex Dellapenta Update modules/telco-ran-sr-iov-operator.adoc Co-authored-by: Alex Dellapenta review fixes final review comments for RAN RDS 418 Sharat's ClusterInstance updates Ian's comments --- _topic_maps/_topic_map.yml | 18 +- .../observability/telco-observability.adoc | 2 +- ...lco-ran-du-reference-design-components.png | Bin 0 -> 184431 bytes modules/telco-core-monitoring.adoc | 2 +- .../telco-deviations-from-the-ref-design.adoc | 3 +- .../telco-ran-agent-based-installer-abi.adoc | 22 +-- modules/telco-ran-bios-tuning.adoc | 25 ++- modules/telco-ran-cluster-tuning.adoc | 30 +-- modules/telco-ran-core-ref-design-spec.adoc | 3 +- modules/telco-ran-crs-cluster-tuning.adoc | 22 +-- modules/telco-ran-crs-day-2-operators.adoc | 101 ++++++----- .../telco-ran-crs-machine-configuration.adoc | 35 ++-- .../telco-ran-du-application-workloads.adoc | 28 +-- ...co-ran-du-reference-design-components.adoc | 23 +++ ...nsiderations-for-the-ran-du-use-model.adoc | 99 ++++++++++ ...o-ran-gitops-operator-and-ztp-plugins.adoc | 70 +++---- modules/telco-ran-lca-operator.adoc | 13 +- modules/telco-ran-local-storage-operator.adoc | 6 +- modules/telco-ran-logging.adoc | 14 +- modules/telco-ran-lvms-operator.adoc | 18 +- modules/telco-ran-machine-configuration.adoc | 34 ++-- modules/telco-ran-node-tuning-operator.adoc | 69 ++++--- modules/telco-ran-ptp-operator.adoc | 38 ++-- ...hat-advanced-cluster-management-rhacm.adoc | 25 ++- modules/telco-ran-siteconfig-operator.adoc | 38 ++++ modules/telco-ran-sr-iov-operator.adoc | 43 +++-- modules/telco-ran-sriov-fec-operator.adoc | 13 +- ...topology-aware-lifecycle-manager-talm.adoc | 28 ++- modules/telco-ran-workload-partitioning.adoc | 13 +- modules/telco-ref-design-overview.adoc | 13 +- ...-application-workload-characteristics.adoc | 2 +- modules/using-cluster-compare-telco-ref.adoc | 13 +- ...p-telco-hub-cluster-software-versions.adoc | 32 ++++ modules/ztp-telco-ran-software-versions.adoc | 50 ++--- .../using-the-cluster-compare-plugin.adoc | 2 +- scalability_and_performance/index.adoc | 2 +- .../_attributes | 0 .../images | 0 .../modules | 0 .../snippets | 0 .../telco-ran-du-rds.adoc | 171 ++++++++++++++++++ .../ran/telco-ran-du-overview.adoc | 33 ---- .../ran/telco-ran-ref-design-spec.adoc | 18 -- .../ran/telco-ran-ref-du-components.adoc | 132 -------------- .../ran/telco-ran-ref-du-crs.adoc | 43 ----- .../ran/telco-ran-ref-software-artifacts.adoc 
| 14 -- 46 files changed, 713 insertions(+), 647 deletions(-) create mode 100644 images/telco-ran-du-reference-design-components.png create mode 100644 modules/telco-ran-du-reference-design-components.adoc create mode 100644 modules/telco-ran-engineering-considerations-for-the-ran-du-use-model.adoc create mode 100644 modules/telco-ran-siteconfig-operator.adoc create mode 100644 modules/ztp-telco-hub-cluster-software-versions.adoc rename scalability_and_performance/{telco_ref_design_specs/ran => telco_ran_du_ref_design_specs}/_attributes (100%) rename scalability_and_performance/{telco_ref_design_specs/ran => telco_ran_du_ref_design_specs}/images (100%) rename scalability_and_performance/{telco_ref_design_specs/ran => telco_ran_du_ref_design_specs}/modules (100%) rename scalability_and_performance/{telco_ref_design_specs/ran => telco_ran_du_ref_design_specs}/snippets (100%) create mode 100644 scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc delete mode 100644 scalability_and_performance/telco_ref_design_specs/ran/telco-ran-du-overview.adoc delete mode 100644 scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-design-spec.adoc delete mode 100644 scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc delete mode 100644 scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc delete mode 100644 scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-software-artifacts.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 73a457a71471..d329292425e2 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -3279,25 +3279,17 @@ Topics: File: recommended-infrastructure-practices - Name: Recommended etcd practices File: recommended-etcd-practices +- Name: Telco RAN DU reference design + Dir: telco_ran_du_ref_design_specs + Topics: + - Name: Telco RAN DU RDS + File: telco-ran-du-rds - Name: Reference design specifications Dir: telco_ref_design_specs Distros: openshift-origin,openshift-enterprise Topics: - Name: Telco reference design specifications File: telco-ref-design-specs-overview - - Name: Telco RAN DU reference design specification - Dir: ran - Topics: - - Name: Telco RAN DU reference design overview - File: telco-ran-ref-design-spec - - Name: Telco RAN DU use model overview - File: telco-ran-du-overview - - Name: RAN DU reference design components - File: telco-ran-ref-du-components - - Name: RAN DU reference design configuration CRs - File: telco-ran-ref-du-crs - - Name: Telco RAN DU software specifications - File: telco-ran-ref-software-artifacts - Name: Telco core reference design specification Dir: core Topics: diff --git a/edge_computing/day_2_core_cnf_clusters/observability/telco-observability.adoc b/edge_computing/day_2_core_cnf_clusters/observability/telco-observability.adoc index b9a55c776948..1abb017bdbb9 100644 --- a/edge_computing/day_2_core_cnf_clusters/observability/telco-observability.adoc +++ b/edge_computing/day_2_core_cnf_clusters/observability/telco-observability.adoc @@ -28,8 +28,8 @@ include::modules/telco-observability-key-performance-metrics.adoc[leveloffset=+1 .Additional resources * xref:../../../observability/monitoring/accessing-metrics/accessing-metrics-as-an-administrator.adoc#accessing-metrics-as-an-administrator[Accessing metrics as an administrator] + * 
xref:../../../storage/persistent_storage/persistent_storage_local/persistent-storage-local.adoc#local-storage-install_persistent-storage-local[Persistent storage using local volumes]
-* xref:../../../scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#cluster-tuning-crs_ran-ref-design-crs[Cluster tuning reference CRs]
 
 include::modules/telco-observability-monitoring-the-edge.adoc[leveloffset=+1]
 
diff --git a/images/telco-ran-du-reference-design-components.png b/images/telco-ran-du-reference-design-components.png
new file mode 100644
index 0000000000000000000000000000000000000000..bc0548147f0a5c0448cb16afef5d2e74e41561de
GIT binary patch
literal 184431
[base85-encoded binary payload omitted: 184,431 bytes of PNG data for the new images/telco-ran-du-reference-design-components.png file]
zQ;SaeuIu1uT&(+D+ux;xF!Hcx=$m5tMxVtHo@_{T-B*?a#32Or0|TJWZ}>*va!WiM z)Vib4oA(wEwe+K$9)A4VkxnD;C#L6qm?uf2`?QnSd?XZ_nsFtT)e88g z5347D0^)}PHaPk{6vKPrQ}tlT3Y5i~vR zMLS~z`z{byq<&UBoT}%`&5_eCu`{_#GL}$pl`X?eKq`s17u}Ntl(LWWggs~_Fi=^# z>5ermzrN?tp$wqVs@&dFcm1AF@f(HOJsLc`SDEm-xkE=9LS?}AVU8nzHXtl4E=~== z>u=F`aDM7;&lT(a_RZ$uxV8D5+pit69qaKMaBlQjvt^l-eDFwj8Lzi_dMS+Mz;SBc zeXIE4Ib7F(^V5L0MjK6S?g2V!ZchL+Lmc&<1L=b8q5sKOBAh`g!<=6w$8yiL(m(Mb zZ~XAI=0G_LReNzkYdtd~XW)4_9-&WkS(qw`a4n!M?MG&uX=%*dD|2B*r5tJH`bI|R z5CD~%^V5c9wlcFu)VT$c9xS7w4jiATjL-uD6H=227$+_~MG29ZDvYz3H{|57Af(PL zRw3BgI7fXE=78yMlXrjMwRMdqj(#-MU4|cOBS|k4@lut2k*p-;x0TJo{-bE$2M}?~ z3l7d>Ih_Re&a|dKqS<=AM}XhJEn4Tj1bs#RZVUyW_t+2lzSv;7%je7&Mb=EP890^llbK8u&V?`&@c;lzn- zVPWjnHoWMGdZj5|VfhVbhC{tMTq$?0WldTErx9lK88q)2cK7!;f;%j_^YKGVxa`0k zYyMA!$wojF$Xb-w4VSZPUQP6KmI!NtEfBpAMP>b5-BrBBe)`GOdMYt9Ho%vT^EOHs zdr6t~^f;3R=s>VI|1;lRx{AvX25!IvNg3aOoYLs4I`mD!Ba;v;AWx1{9q~biL5$yt z=7Ca*h?v3~%jro@=^Js!)exhxN@x5h`n92v)0wHNWeJtr*REA~TJ=nQaGpt1IQm&y zs;(<&r@Aa?OiGH4;obH8j`_frClo<~jy3|&93X8o9_JiRLmvaC^vfiFo~XxWqM7gL zB|e;!n^UEZ4M32Om2b`xJoH#IT_!!l&8MVQDgOaP)@g4IEh z#mMuevEuWja+`h{#1^k?b1`6c6;zq4n>}uD#H7I$YL`CGZO%267~I^u%ze!a0|{el z;#kPB1z-;Xs68d;Xsmfls@k9T;OHVG&+Xbp=qIm5aiJMqBlIp)E;h%<#y}T!IYdEe z!M>KpVdT(Nno^5capY*G#LzIy1B!2MQEXDHTQ z3gjRIRuZd)(_FuP2IsrnHJWS>S`~A{s{ykbT(D&@g&ef|`>|Jh6IO6uu(t7Wu#)L$ z)z;(W^uVwy)O==MfVdhOn)$I^897{hW~WwCfc4}-bs$J4Ql6IGM}}5(GrWTfHz&fz z*fsmnwQIY1qy4~)GyL}net2gF(fXtqsEB($rV9`(4Zq_X_nb7g-B^%CifGVFc||Fj zL0*TPF$N}2_6ny%!$a#j^#@QWz>iX|<2-d>+|n06o+y->C`@q3(VxP%Vb*n5fd^YaF?X&Q0-W~XQlgsl|yN?dsb zqG{;0Ni>hmZCtOQ{&lX_9~)wlrvO(^#+;|?8-#GfO{TDTkZ!DTj+QRcgXX$Lhz<0o zw=1Q?ASQSJsp=?+F@grHo35>DaJGpw)^Dz#oCZp`O5(;8BSa7<(k* zkL4u{ItO8&gX!0o;KK9EMZIq@B5_<9=6*G~DZ-$=ergA9=K`MJI#x`h zFeJn%ejT%(;YZ`p5ZHL^Kf@+b8P4|}erDd69|34C2G&`4Qjm^?1K(zqGzb7VhMnSN zKJx(utEsV}&lvf`PLeYVWk^?pxOsM8KUdH28=k#S$dMjzx8IcDJsRb3Uw5B*r;XzR z<6MMMQu}rr+AQ4r>pE0)7l~Ams{4HL~K)>d+(|zgD=N(3MuQ`v5h+ z@GCW5O9V;*G!!5UGCuFFyhwQI4PH}(&dS%+yRBb@{z7`jnU9BGI@eK;F-ig8O5!95 zcTW%^mj0d(gFrJ08!JmY<702|>5{Mek<;kV{ zn2<*rTBYwNTl{(&?EFSoG5vwhJH+_+4a;e5zsZh@irVO>ui2{UDEic{c3<9Ns_NBr z6S0W)xu1vH6-M)cK~}r5OWLENNV^NlE_qqHa#Y;Ty;6!m~v8ri|`%3W!!y2O=HM zQYH6R#*)HC9-kqqYJV9z(};1PlP}v9Wpjbu?7zYD>sm1Ti^r zIFcFYBufqxLSG=gLOVBc^l;>1VmpkSB)Vaw)ojhPXm30LgQ8R@*yDlBNX zA=B}ZewaW`c27aWc7Dl0cYlMAe(Y0N_XOdKwnyyx*&GW3Pr@~=M5NiL3aw;LH1_u~ zO}pP-xOpco9OUPwaPL9yMR;%YE5Rd6{8GA`H50X_$VYu*B*W#|47fB*z@i09wW9-Q#YT!a9w;ICJ!;jc>WUD3dAK(Bi%rXt_@&#OlAo2z&6fo5rW;}^<5 zstt4^@l{=#Lt;DeY;Gwj{iS@txb(%n5Qoo|YWb#tUb`O|{3y1RSOe9`Hw0{-~X#(9`RQb<(9&5QT>NA<#ZGWBq*0@j-lN0U^e zvKC5<#_jK41+xrSXDw0V=Ma+jm)5jUd*D}iJq-rLE zP5)Uq|KF{&`aO;P*y4ZI-YZk>GZt3=_ClW6thn*Qg0m^>8FXaV5B(!@_Iu#(xeoMHg&07=|BHxw{g7HPWYES5BJsS zP5%3>B6G5@grvIKp84O8+{e7~`(frk8^CQ23{-#wF!EoxHRO878uFI;Vmy2+yXFp(1I+a~;xXL(XeyVSf z|4RW;;|x@bkCuBE3S}kyoHE!~9};WmG0pdz0s8)f(O?o-{GY>_007&s2#|G**!zZu zhpWC86er+H68A2s5zpz*JCH5*hyYa58wlTqljvlHk?SZZx2c5Ose!@oRdCghg7OzW zgoyeO4JyhqIs)YWxp`;#r$_$T_Gu@c5YG_?7hp>?6rA&^x_b&l!7_o6+Cs4xpX+dx zWb~ud7BFB6$ZchK#NjK_z3vCs)rx>`(0o^^-NyaHkA`QVEy7e>O%0ko2Dx}%iyuM4pFD5@t;HD$zRW#@9Fv% zaaod&(C#XZ&lnAuHnL?ZLNXvp*D|koUoM$kpN{m(hPG&DW-e>=tG@M91b%7YNR$DE zoI@_Zg!!z#Yi8fI6~PKk;5);=$Y0mTXacSW4f7PmuHz!EUz>%~lr1gj_bxy6YZ&ziJs?STACZ6)`nRd zzKCh1UDtvC%hZXIB-CDjBsO^xPuuM7EF%v@K7%*!@R2~CRR}sNNy0f0$}b|SRSKqh z22>@jj$shnp978!20?y)buYvQN;sgi(E$u7DhJI}b4SkyXA5koN+ zih~^2TkwG7o9jySJXwszCd8L|eqT6vL@33uB*yex4~>+{>;`nD2;FL zCH8cH8TbII%f!n|n|`tL29yNid)H1a*&1oHx%!RiuW(E%I*u5S*{3p_kB(~`aiHs) zthe9;+a=(E5#$|f$Fe2#?B#Cw^3k?k$)cb0f>?->qV3z94~Oc1naf^`{~76q=q*@ZZqnLS)mQ5b7JCe8a_Ux|kIv~=Pgnuz642~Y 
zviv9GE$5Zaj@?K7Mj5I^ z6$M9UVa+P{ekCVd8AC^bl7-~sM~0R?0674(V`3In|0PU6T8Wc<^a;`R0ABK9t{T){ z6Rm#G(gE_+O=>~ru4CvgrsaWvp1@6v?j3NDd|nHFyEA=I0hS_ikPOP+d@g*4K_rWB z08lRVd=;#a0ZWZryO|*NfYA8id5_kw598nDcBh9VLle2_#IW$x%s*}HXjHkc(GuLb z%06R<3ONyw?V(_Z-7_+VKLLy_{V5&HqMAVP>dB>Boa37`pce#GTgy2H_WrBzuq08n z$UyhKL!5UJ4F0R#1{pj6=r#77`NVC~i)X)hO*{L_#v|ETSGwv0Lwle1c;%vhyyWEAXJ0s@mTpuKhM%Exe77&!{Ho5AU0uBzJ&)9aL(6dL+bOT9>Klw zV-=%Ac%gK~OE<2zj1O`lax{efTFd(?Xfu1i;PQiwUYK`~DB?f{RB#L7Q8;P9PQ%v> zKA`x#pr(IyRg1pg;?w2+u+DH7C^C_y1;{SS6Ue(%cb2=17H1vGfrToz{CyX&bRbux zU*GwG=$t6K=%>3R!dcijNDyhnwvR_$XT8dW=Bz-Q!x}eIAS2U_`l*Rpn(Du>UrWMN z(wX{|evmcY18sJ-q={*C5LyT{HH_gAJ}bYBO^5QLd(U}&6UQpDZIU{jc}y*^1K-OG z0_DWZl}oZD_w{I_9y%Nv>}JhyJeTFO3v1J zoU?VMMWp~D2bX;Cv+LH~y9~hOPfS&+fof)TeUB8Wqe;dQLcTWgVyi|L9{j=8M3Ep+ zF(KR4&VxE+hQ>qfK2y!EhZ@5FFMCa-K5k1r6lPAx1 z=t7yV%kn&J;6v4Ujg0_XildmzJl}I-j6TY@NG~hvYup=H18{&BfGlv7C}-bSlJWJQ zc-4gl&|?t@oZ{kjkoJ`3D>I;{Ot-IGH|;4k&svRCV%^+Y@$P%|kTY3=?o-4Ytspb%^gG{Z?H9%5d#<~VMu1p= z=i?SV_Up7J6iO-*d(ny38vym*oFCz12Sx*LBaxHDFtvWwV{0nvvTc91b{S?&!FYIv z-8*JpQ9L*r(2fg8~1wcJvN)!$*7 z%b&7X-52>ygCmc4Nh1eOgbU@N58yk4qxJUTc0hEstF0tjKWeCl((sz_Hsk+JwLkh^ zUZ5Yo`(da(d2jnnuq`qM8Dzaw{e>aSW=U_fH7>GosIEEmOgUC=A7mO`SnBVLRAMJF zY=x8|$^i5!~5r}n6c55;FWfGg*EqWS9X*MTYrtWEQAjNVW@-C9?ApyeBT za{{L-g`8TkAQ6LmqUs!jGT2B!q_$|N_WM?D@oUl`T>lTuI_smUOt0PC$S@wB16liJBJPLJ++SR=)SGe7EDo?_N?lhk}B?pRL3ElYqJ_9Zh~hT%WpL1RJ7M*0dfA z`QdXj4W{2HY~^2LB^j-8KhJXD&>VJ|QrzPAh(IADECv)bKG%-6$Q}QprsA>>0;Aww z0wE0qqXQCta|^$(K)e8!576;wC>|b_;At(ceP&5cOcWly*sV*Tlhl;W>wkj54nlwi zWVpAijs zw6QVw%Q(2)?qUS`DeZ6i^I3-N^XL0iJjWMNH=b!@!(pHPv?i@dAw}*a)!djI8-wGP zDXPfM#`lfeY|ah+mDH#FO}Z$?sJ8XQX*ERQ4gyY9W;Hm#fs&a7>Bt9< z159345`=sSaI-c~{Ngfqa&ygLc|WXgI~!szWoMEvAIlk#wQ>Zxo%w%My#-j6S@#Ee z1ZflyK{}+AZcso31eB1HM!FFxK^o}>m5>lgkq!kUq#Kb^8l^+JJMMaCzWLvKeP-ry z;BYwaj~-=A~;d&vRl;Mz76zyZ9n0FqsXF5 za3Fyn)L;0hN}m->%$IhV11y4Ur4%%zegI>DYR*Yv^C%ZV!Bf~!N$qEcb;zOFboDe6 z6`6HS?}aZyxA*Lxg?PJTB0k*Qp+?RUCl@zUppzM;TJynIqKH!BX1@8(M{y;FLn*ok zH|ga@;t=lD$S}d#V9qhJ_!0_1k{lc)DCKsscNVwXw+`;y`*eq#EU)M-GvJBiB+=jZ zo+ho;|9c8h1{z%Ml65$C%Vuy*oSd9;aVZts)E57s^!J0cN*{MnA7g?JRAL!AT$Y#4 zd!tUA5)*Zi>M>_>gqY#@1+ov2lnXHiF)8;`-M&p*Jj(sm!z|0am>*nP4HKzttQwsQ zWi~7B zDjL0}qUj*E_lEwJdx_ZK54xqB{yC%NVGtw3aNUKe$%(_>#>rQ^SUGP~aNU*CIQ4k! 
z-iBrNgfZRYDp?TCSx{qB9=37D&8?{(MfT(qVpS)`50`GGT5T)|sRKnK^^6nd$|Bfv`@S0&|#Mq(h(_}6v*;5w==UDxhc-Y^QNRBmqW2M~O7 zR{H0{i(5e{MR@J@tE)i@*>&H7ff_jZ`W&UrBuYg!c<~{ZQv%`veLzz;#x>df^(R(J zijUg~{JzB4OZKNm7!34=hXr9GbJ z;KVgDk~0)_&f;AD#TL5o!3}<2!^_4wzY-8Z`eF5R|26}A5+vbd;jDi{=*V5JQ|EJh z&Z7@Kd-d=U(6lM5oYw>h;Z)@=0zQ_g`<9znsQYjf-@+OH{;YY>Z}(j%hRtCZ0Pr4asx-|)ktr8CgCBai_w%*V3k_3O zvLTM-Xoyej2Hfay5ff5uLN_qxXY0%2bv{_LLAy^1?n5CItop;8kIzXuWlHwIjsY}P z)b-F_2TXmz$j|@=+Z2(Du7ME3k?e5CjGFlAS&p>KbuN*gG*V z%*e~1<>CFMFlQFZ?1z1*5tv<`P0bAf|=vuh(!#!LPkTQqfW51RH~=%;gT zGO&7NSqgFUt3I>DLaFBsx8)VqODQQCSbb={NcIEN;3yUiRx1~pd*K_8aV*(>_nS2tYky-v^$p>5+(1OY!|}j+CZ3SkhukNa zbN-aX>(}pmQd<$Dp^QPfv4Dk(*sxs>PV_KNYT(%4PS!x&-D2k4bus6=*`s;yoV$`Z>LTTu2cLIvg!SVe(#`H^5pQ^rd zdynZMO^JzL9Cn6ADlTF(;x{^Qej-sEJ5UlVGk)N*K|B!qOex%sO=VnlQE)ZOOarYZ zT;zONn!^SKXD6#M-V7fZF+Qx!)%s$a3WnGd_s{KeA5|HdZ?ULHDk_w^MH5u-Ql*il zdKDKG*m9e1v&S)(F+*540z)zq#^D%KFaa=nUR`a1>t(p1NCojZm)ZihYGh4R+S7L* zlyq4Zu+#3g& ziJadJ0}Ty?$Q36WES!0Yl8dIMrUCQG-u|_jUk0}F1eOiYdZeJZHH}Lgt=M3M11?eE zNMQ_V#m2i51P(&*H6G*B1bR1n_Z_;U8lWGE&3p6;7RwFT7?qCGRETqizm&zT*cJkdY-Y$?18=M&Jg=4txjk&D*a(G0uG5w?89N+%@*DmllXg-xwQm#K~iaA(J4zz3+zbOqg5P)$~85OP=yohLLv+0>VfDztatYXC^T}N z19O;n-Z7F3%n3lFKz6rY>|&1lNB7UT*<_^NLh(E<@4lv{Qsx%u%6@2*jU63YZ%%uw z-1lc{hpi2m!FdLJ6h55BL5TeV5G_8;nA@LBr8f@;wr2jCfH4X>bfH{vs8laN^YZc+ zZ8s|p5ftESPFzV8XhokqVvt~$0g4-%*9dA0j+{w`?I zu(hk=aU?XnRy4hph-fI#z@ZddtakR zD{dxqG)+T$F>o!-qm)YJ)IBV@@*t?GoFm`gRx^@&5MK80;e-8OyP+0pk&(@WJ1&7Q zzie9g@Z9$M-EVG3R4BNb*NX$W2o(wi+kXY^kXtukdmw#)Gqc7>(-0X2If95)6G@5H z`0&q+pdXb9(Xx)S|JPNUA&O6RhX&m7XejWgAr12|C7@4$J*^D_%=D6;+_=#{RWd!a zv$DEec`=l>_%-_9s1f3bTyYCBMuttXFLH2-;Vbch0af6pu~~)(;Q4;Kgru5H=l$sD z=;(7HLq2I1J)$#$0{TCh)C$=rK(PLDAv0*{>(Cmf|EC0hDL3`kJg945=HDt4$&l~| z_jZ5kth~)X&5_7cz#rC@O6`C8c@g3UuDZ|ZXpvp25xf8YX%B{1Wn74sYT?t;(u#WQ z<8N(k-BQiMu70@@#j2iXDZ-!O!y$MF{n20E|Gz?);lp(IF2!)MK`>-r{d(^!N=iyE z)+dhg+Z_(ppYE;<6R4=D$hAWZsLy}!!qJhpsi_HQ1eA@9%@XVq5yqPs{a}}Srs%9P z`bT-Z6nU^W6!0BTYmmP8n;X$cH9W-H}`Ddqdb{_b?nT^_Wbunc|CG+a@VaH3>q=_fM5bzt?hYvc7r;-aeA+V2R}6HWfrYTg?XFQ|NZ#=;s{THXQ9~Cq^=7$kT|36 zcVCRw>{L?h3tgQ`-B(O5Q(?VM#QJvigp<+qyW^T}1-8T5*lN+**yk5(8Itt!&Q;Q# z`cLGnCJ)q7c1~u8EblIs5rQyzvbX#(@u=rz9p7SX4_~k7{n|i5M~-(vk=mbpXO-KtJ0OVWDt*AI z#oXc|0~=fZnBb#FkFc*?Y2BJ_pI%r9a+q!NI&k=rDozV?YZ1TLlfqVJ3%)aGT1qs7 z5yhvyM>Q}o0JC(mZesiJ-o1NT70(hIhF>>NPc`~uxo%Efx=MXFV75J4N?tx+qHUD$ zcz-Q?X$j74DJWpa$nZhCF!(BoO&>rg=}nKErPPJ-GcCIaXTFWK`}p&Ks%KQRl(#-- zlC^+^mzCg8nMXKS-hYNmT-#i!{>=VLTT~_wRy*COZ+&09o>@?Ky%^ZqeRWr{sMVD$ zW|rwcw#6In5lXa~M7x{oFU8u5CFOJ|FoV`gO#H?K<_wR|yNKlV zx*oy|Q;2=kd7XsRFnJa?CnwKne4PQ!zfZvGcZJxH-K77W7r#u<|YUglz=wM!DQ11-^gFA{=JWBWLa|sO%Qivx(-@b`f*iA-7 zMBt;qt#`e`c035S%Uf$;BrlDeYrTd;W)!i4^$%3}VXhR!+YMXP_K5<+0P@(SY z>(dB6pXgXMfzs>m&rb|=bf}jbf285dI%Sr=uz@uxr?}YIntvny`dBC!{ zuct?Ke0&Tc_4?hH>)pGQ5G*17UP9%Y3kNqhcEzzJnu&;rSlZZpX}eBEg%243E(q~$ zpM|r&DdXegmxc-{ClA?Zu~6MVei+%>MmjC_Ople@5E2tNpP!wi7Zq{tCJkzI=rr4{ z&ss?MS|VZLb!tR=F2loPDuPd5?P+N_yMt$Cws!b#7}5`XLI}sQOXz4-2Y8*6zGY{$ z$PD6KcN3-Ac66d+8^$&^Hg=}5)7B=(-QMYYsDBAEoAVz7IDOCO`>S8hel@4x^g6l( z^1eD?zkQdCe%tD=dx|HQIM)rw$hnm=C96j`a^L0XIev({-GNnMByrDlo?1dX&JUyZ(uH{PWVCX{^m+j-*;bw)_>?TeGrtw$U_VO)zo&wH`0u-}9o_-rj~63ws@xHPo>Vhd-aJ z6Juv*w_l&QQ}@ggg^i0VsjN(hx^G~Riu1td{FEHVk3>snuHy{@**#C;{q^5?7#JAl zUY?$!HVR!`UD#Kzwl(@2oL_YI_6Ec1EYh#-ip6>J?wu?|@lw|7WA9UdRgtpBbNc3Myn^5TagyG)&%*@TQdxx+uxZ!Pv9)zOoXWOorX4a~guUV|04*%LaIW@!jee>Z%&r3ZYA0Mha zj+ZMcD(oE`eBn2%s;Obr1vcjx2$TxEE6^vh!bEsxAo<@4dkB*)%Jo6oMk>>;$M{@mC$Rm7 z!jH}#zmtBi&AsZdf^PhW4<9yVg=3g===oNxMQYpb>^Sn6OenwD5JSb7cQ;2(u5Il) zpynt9nhMi2QOJ1(RHG|?K4`RQSpn!g4i01$E3fF|hsb!$(TKG?p-zS?vjQIREg0<{ 
z-tOiB`%F~ki5?*&4Kp*d*RipFZyZ=6$oMXy2Sxh914q$LU>FqVD;ORvyS<#KI@-We zrLNT|xW*((Lom#h(me0PLUh0rW@W->ThZ`n-TSLS$kG9ku-jkKiHnKoe4i=?5jNs$ z*WNDi+20~9)FU8K7fADL`iAkd4GlG*rUoG(2&6-9ufow)Tw*@F^x{V{?i5kf_RY29 z`W?OBq30d@g)T+f#p^d@A`TRDB-e@HlZptAprbzX@x4w?`Ej=`{ikN)77FfFowgUR zrlt20mvnF-$}%(UFPEk5POu1$$2=wHuaBg~RFBLaDKR0Ks3-`8OH3-w%r@Q0Fr}Hy zJR`iL!RJA@)cBOqGYlhG4Ie|AscgkC?C3jHpJe?NgH+V!P5#U4KI`rIi@nNPuL)oX zN~nZzs1<)qS9)?XkMuN+g#Oe%iH(b&nHeSC{N{>@lWI)Q%p4StY0EwTHE>DDHK1=L zXIzr@3^V#weX$Tcy4J8iY}GT5?KvFc02w!&kS>WkMo9uT%-q;dwPCjn6=?fISp4-} zE3mV>y9Kt|*598j>9(Gpo-j=TltRz%OLj!|A8bs{?yrpt&=lw7V7s`uz(zsE#>N`9 zhLPNNLzU%;yj-6;-diE#Gmq%2%g0nuR8Y{;o>U(l9W6!c=i%WA2@8|_9)CJN0C&!4 z7drufMkxVX5GZR1iPkQ^T$4?ecMtzcwC-xfjc|J801*LG*AAKK*jFzf2-UdNG~W*I{b z&Z+-RYjO6Xp6KOgYfN;3zdg+67+b^F4{!Guv z3%Vn(!K2;3A=qwk#eB z&n=kpoz(+!!~3KyB`-@5u5ni~*MS(8hV-j?F2Ul`W_Xc8ss)z6pv=_u8w?B_B+#84 z6s6DhV%792O4(|g30oW1_2k-(sm2st?q$m`H+kxY*{!LKl}KvJ)h$>{SCgM=Hy)M6 znldtbHZcb?-C|{LS(Ro}(bJ2NJ$OC~caI~8_OaeKoGO{^*KgxxCMB9Lp$?vV(BI+l z)y7n|y(`Ss z(fee?#Qu8~XK0w+fsEA$`3y=kKYXnbgTrg0=ckZMY6yQT*G&b8D8yVrAr|9nPMxnqYSwL3? z6$10)_aQSg#_z>ENN?YUqYEs0KNww{pJ~;3Ry8E@ioc#R!eP8Vi0H;qa{78N%Cx_NGNwB}azcg7d z?$iT0%p4>AlVWlZ5rYgJD=Y5f$B(-ogt2KBJ%D2Ga-H2hpYw>8mIqq8x|wR99>cr5 z0+fkr7#boy*HH8OJt;?eG%9a5bLpbN{(vy=8UUXrBd&UTxT|a!C;Ox@uFq*vfaS zqqqHU{>${d^ZAV~$aVApV3O2vv^1RmoZW)dV1L-HuJI`JuhToa%|>dncUZ|D`;-A6 z&M>EcuiU6~p5?up__BSqb>H(WMa-KJj_GJByA#f1(=W$7CF*djw>q&9vT~!hsiTru zCMu<43_>qq&{`yq)$u!pBnLHfs(GIp-#^;{}x&92{_^x?eJ$d1K@Vd@U^(pM|2cjzqIvs#tL)KCsEZn^`c4Y0VlY*c!NsX^y9 z5K%Sq+59%#=PThs%(uJBa@hImDzo|PI2*W~&gT}PA~yZr(y!Z%#PpiTT7X?-^LrAO zep*^u1MB$SxLv)`M5Tl4;U>MZe!UPU=hdK~ppx=(jIJ-rY*-nxkraV is}2iO(L z`V^Utbk%@nN=i$O;(rbdXnhm|$l|crL!G^_8+%)=ujZv)_LG**7?x7g&TGEqNmuak zr~dv8g$1hV@dpwWoRw!G_w0eRbgA5(0^P4shsFcz7R89DR^|as|y$OhxP_H5PZ4=(?l;%p z-#@kGYB`XFvqN({8!eu2zpS!S_)DE6q{bCy`PsI}>BU7XI0<%v%**-Qy z1w>?q&xZ=g0Sj8QYwieS{0yUKxsB@_xcvRy8Sb)1W#m6Q##0lHv3RA&;8u6Pgzn9T zzR_9OQ2|$QYrWBDGjTXe_8N`@EsxVVuhuRcgN}{o76C^@7&36iWyB`K;#qv|+Z;F4gF*d&>;QYh7m!6FeJxs0(ZH3}siH{Ic z=;d`5>|nKsopss%?59{7gcS2J6C6$>0k9Pv)?4U@@OMz)XlTZj)LiAW>93C&3kVd= z-4hPz>^c?FeVCa@Iw=SRl;!UVuaskvmotQ2U2>(Jdng0u3UfZ&g3+0vCO^;j;%6+A zb#S4pR@s1xVf}K_4=sr$T>qd*2#Ab->OER@e75^&Lc-+@Pd;RN&MGTEX^L=@KRWK} z>}e|Bu`v9TNnk*zEh%};MzBWmQ58=Q74dmOMPzoXt34KY#s?q zRNugWu^Al;3k!pwpoXD=KNdj|4|e2sJp=2o+w=Ym3>%O62G7CnxzC?JTc2pCs2Fh! 
zU$0zntWs#G@GBW$O-f2qfU&!@>bx?dbW8p6$5e5j9@beR$ZC*5cnsFp*9)$(kPtuA z_?Vtf4^^Rrv@`}#A#ES!;Xs02tBGo7ygPU9^bMXlX4Yn9JurOqW*MO+SXhF!D_}4A z^^vrv@LOH`^8Vesb|D_~zJfd+Zw>;X(r07SgE_(4ds}V19Yy|TM z8QqSmszO|M-rChVnvmdAiqSO7M$3A{aD55E68naSBI9)|XgJ5mAD1Xrw-Lp0eZ`dT zZDDIuG_E?80QY@NtvHjcX!f+i8ch81mcQZzboA>CWiNK?o?UmW6-c&{l9Qtt)%IJV847f; zaDsBxc)WTZZ2b2k?j~h~jtwgsLM2LQe{prpnLkip9^KbALL^a2wwT0^W};E9-L}9M z7SH9!kF~S`YdJbO(Qy@{9B^<#Lc=pl<%l)X=(vQijOe5LGdJ!d1F(0PdJpFTaDD!= zS~wG&3~a%F(1T`>J<_=P97t^=uig^H_{yLd^28Q z`(3B*QC~99o zv9p70`;qp*7WtB|IIAQdNDn$2I9;8?!+k)EB-PUg@VuAJZ{Qgln>Qw96*J#{lzQov zz*Y*K0LCpgNBz9!OG8hGmJf`?K_EJr1h|J|(K zzTqj#1vji*{!!*^Xl(8~o!Q>pQK?qg{`T#uonW8-qT*AV8s5IWIOc|^XZ*xl#RK!{Ic)Y|nv@A31;f?>FLkl>DQqh3h= z=LI0>xXYn)bMu@!Ha?a+MrLU4tVT$<%mPc&|3U_45kh_c6J0DQ4-J{a3tlI^$A(Lo zA*4v25HsG{nmm{>4`v!!RrV6w{WbR>-Pe*_`wr}L<=3fL+Ep^n6zH(^8z&|vxL+*w zx&O6k^us6!-Gkx{Us_t))WQNUWn$eUC+Y0uXvd;Xkfi(-+b^c6^UNz0{MZl~%y?e` zp0!`njFmgS(OGSHga{6z~{mXw2n!%V~ zYO;}9UVTnNuB49MtnLtY2Pm~*%a83o`NWhvmjIwh=}QppuYAo^thsuNlT*fXwVs&H z+{`@9;bSK~Q&+K@;%ttb-0|cYX_0+^ck;3UUS#&^M+Mf{xQl{^zl@uJjHMUX0Pc_t zTg$vAuZ@4rtgIvpzrhvs;;5y~!NCDk19COF%%;y~-vjnj-oEua zJ@w4K>IiB2{rmS?wybhr$S5hL6ci$EGrdht4i~T)D&dWDa&$~fQ$A6(i4*fWjwK2B zQ|BdO{^R&-){W?Ps1F#lh)FeI!Z*z#S5{V}fTsn14E0L>9jDROOe?S+=55~Rr*7G` z5fKV0{(l1$C#J2@3KU}hDXoA!Cw{G|eBWH=_~qZ4s^{{XuA;yf8Z%!i;8Il`SQ;q2 zQT<{C0|lV5VPG`Ieqn?*;jRsRR#sO2Zv%}YBKKYL5E?H`w0VJmMFpBbKwg7)2@5k1 z>|B7wBD(vs@ZL%dJ(Mr$wT~>apUDAFsV7(z6ul_@&EpI$NQT;GRZ2$VYTh!1>yG5% zZ1G1XCS=}q?z)%vl-12lNBNzKQr%hXWIgJysO5p1#n4wIR>x+RlN9M8>nnWL)+HhT zB}#LUte^jWRiUG!O9DE=AhwD z$mV4AN$uBN8|lFLcY{VH=CS|4+M3n-WKHN68=DD~zfhK9b=f_Hn0o!^C#7H@(?HMD z_T4NsY4>(kPB{%{W@G$ExCpfbbxGA&GKEjU#$+>u2gvzWaD1(ZCdxm3^t~eklpJ7| zZi=vdA@wM^eTy~S*24ti6wdn+*}iIx!tsniZ^}cxnvh>}jCE_1=+W)7$%iFHEiEm5 zqZuqHIzEO-I}h!|sbTA>lKznBlMpzH1b1THYftnuFCVx4NRCc46N4BAkjZ`4MY`cb z7v^OLG{V=54Fc#7H)PSqL4?^ZUdx-(i%W1#N9)8F^3&7k)UuvpD% zDk>5yLxqX+46;!TO-*Q^PzJwwLqvR?G+T|u#jR*+%CJDcmI5|q%O|DOWleHaJlUu> z0ny3%Jh^jZgp8NhK(38L`}5_qlY@4I0eN*B}P&yZAynKbpI3I1}gkGf%Hl`R4A%=O z`v&^`^OA!4&JL=c`K`p4L)Bqi7#$jh&@)Sh#%)RJbr~9)!69GKPIN0=__lW(OFjQQ zX=(E`v<(mQ45wbv%%Hm%U&hg(!F%^zdiUgIx%Gu`18um)rMh_Z}uf4EZIHWZaJF0m0p5Xvb9)uE8({X^el^HgrqPeCYW7 zSlLPlNTaS#kHYPefttzsv}IBJL;qQ{%=Y%-Z|_TJr)OHUyv?Z?&3_NnJx@YHLybnq zIKUp&(sfkv@UiKzspr{Ee9mO5yjuF0xJ2K|XcbF?;JY)h54;P2hLO-e|IXL~1r1OS z65ihTUO4C1|Gi_>bos@q){dHF`NyK7t03-_*$h9nOojwI144Vj)M9UUM+dgIx3|F1 zZhTBkGw>!*ApfP{s|8reVgFVJg714#w`(xl7KNH>YR!8q!~Jz$9_e0rIXQ@Q=lmg7 z1;SuYPme+_0q48$cU%TDP+ox)_bMq#r~G7mbaZe;2E@GKqNh^tIG+Yo@CXRBmfH-c zW&PqFWL(&N(t5x7LWWsTQR5+!vA2nN*m@(bY#tOIY?Gt7ElR~;Ax*)!)BLH81Qs6x zrGLlEVyIjKNRMp|&G@d!b(JSKFpCReo!9-r-aPQv{;ID$Y#A1@`$W`xjsWV_O6t2D zyHZ#%sS@7zz7>CHuhkXmtFj>gHCD=Tz~Yqn5Ea(drB&rkX+sN?Tc3^9D;D1a9@|wz zeibz=NJ`L{gF-Q<)WErwwf19%(YcveX^@naRn&pcR0{FFxC-(UkNx42`|aOG{rR+~ zUbUr`ZNg*~LK6Cq1?F{aS9iYhK zwTq~h1o-%xGi9R@ugB)5y}dA1+4J8NP{8;*JM?W*2o}Nyvtb`{E+1&)11cX9M0FE-zG?5U| zcH+K3&1by#RE1WXkd2Ma{F+QIYYerfkPK8ULlxD(J`Mf)6#&Kn{W?z^J3BihP~N#i z4y7gB)^G=7tzkU0bo1Z_Ryu7^c^Y!ZL>dn2q zz|>UQZ?(0E2L;lZrL}biWXc3?BQ!OqYQM|i*jI-17F@`V;?U2@>hgP0Wr#nR3Y#>6+GESM_eW>j zwAkZ58#C0(Bfh$2vguyGX{R-`douO1R#rGkD-{n8?xFYF_1TB0CmCh~m3Q(~5Ghy5 z?zm($)ES%Ho625f`sPIvwOXLi0qRFo_ET<`nHT&jC)<@vpQ8JPeU6oOmxag{#E!XF z7{Iq*e4zdSZkPY|P45;Y>ln9}5h=n~5G33#|5%}+v)N@e|-Kbtc$U$Pt@CPp*%R@`w@|c5~=Rp~&CL=xi zjP?;sOjNG=qTwJf-sEGrlDC|+H2RV*D1s_noS+hZ9p=x)(XF+2$uU=o^B1=kq{)O^ zTl;Kt3FKEEbtot`d+M?wU}VZW^`1=g&VV*DOcN7SIX3)(iP8iJ*#I%J z2^5S!eNs2)@}=v%uAr3Fh6_q z-Ko|Xq1%tRopqOnjk%D=cp?kp5dbLj_`Tg-iWi~*li!?5$!rS(By2I1NRQak@ALor 
z0pz6BugLbeHJx?bgUdsaM(>b?G#mqpOg%RkXB5;FtdQBk);1-S@TJPHmib1VQ!_riizsBpRfM0014}-Y0;v zaH%g{%0CjE82`TEQL&tolB}sIBOJ!90-l$GMaL@Fqiv>Z#JTvk>fitaLzfgtHBtnD zzZwHaIZ7PU+@%OI_4`Z%Q9$50fXJj%J~%Momz!%j8LbY*Om#?%`j#61W}%NeRkw z^Q!6<<#HcTcV4WHlxmoqP*4g!yAQ8~--Egu>1)nLa!sBCaMB>?|OBkGERB)Iy!=El&_(3QJ?Ql)|H zmWd$q9U0LAtTYSu9r&2AF3)&@DkY?*z5=Kow5{_#ljxPKQ>o|!F%>$B`@qG*g+uE3 z74R;?uA6BMwwEql8mX|Ql$4Ufc&@3U(g?l~Fk>|7)LVf|GLp}t=hsM)N~R153yom* z=?AZs^>}qnjR}Zd1qSkh2HTK=vUFc=45s4)&Cuh;PXUBqj- zzov^wS`TY?6W_fHhOaP)ic-NqgE^}eEKva!lc00I#Ci1(>4uERoB|D?q?(1a|;U^A{qn)1a*7q>FKUuhb@fO)Y1wW{bG@slM`Z>oU>GXQ30zH!8)E^ zUT)x5PiE#>O&?%l3(?Z2&YAr9uUwi^LO3K9~&<$3~SCkDlBPj2%ml$Uts-d*4(ZW`@V^5n(Iw7$QJt z7KPCV)6G~4$%0p4{P$?eybji>1#QPJ3keDNj{E!hA$}#_aZypx61Sa&yVN#k$9n+t z%?gW)3+4e3gt{YdwzcYEMUpL{(hska|#LyC6FDc zL|qA&mzV8bT%I=&T?k7^NNjFx%2}RA=SQl^9GTXds4*-XrhB*ts?TU0dPZ=4h-O2 zYG4NoBg{PpAYk)IpPhvtujAt8$1gx_>OO$V@AaEEMm-k*XaHdr{A9oc{(osO|AtHy zRcAr|#*{mlC4u6pt4e+Q_8KD-Q+i1WA=tM1e*G$Vx0NpX^P8mYqzKqL9?Hx61F11L zKfmG$3reA-yDPQIE5~9*gBbGcSru)#t)@BdwR?{lEECs zj=nrK^=dF*^TL3+Q(ab04i`QT2wE>}y?J(eTGKP*@fQv>Y?snKzZld9Ns8sXC*rt$ zTYKgHO+-mA^?R+Oqk~9l%nAu85V>yMx|If6{YcGAt|+QIez0O^V1*4n#@D(5w@unR zIvNiKa&mH#3O5=7Sq8OeBD8EdMUe6MX+MVLnVM>l(vD1>=kac+c99KT()W2uMa9S? ziaU;s*REYFF>FKwGdE)QV`gTC^)X*#r5pq0Ct}|M(C!cA_g&}Z0B~TZ0oP;+{x-0o zHcs0o>_310Tv<)cKQWO!KNZSw-r1kS!&oRPG51*QcS%WIWr~j;84cN3S(P4x`qv2C zv;U*~jir^9W}rDZbfI`$|BY}FLkjgIARo3?-vdo%q~1q7A|j%%-p8jvryLgra%HFW zSC50|2p|B(PzG#oaC&SQ7<_s6K0{hCva!KKM@I*nXqxlNkk;4d)GO})mp(2a)gY=+ zS{j|G&-pivj|!lYZmy2-pF z>&3?J!4?BA>43>CypdH_hJccS&xm@YkQx`QsjAvU;W5bjk?Lx2dF}S%GI)T7D#Cit zK*|TXlbEa?o?HU*7iDj>~ z*iKE#=apGgb?0rs0TEDtaZYLX=Uc9ej}eF^z;6X2UqL*G2$P~SrLKN&5EualAvEmwjg9Q<#Gc|1N`vgoP~i?)y?Gxe2aUc^N#g+=rMrv^Jg0^FfvV`G+M zWw$52!BX$KyF_I38*kl9=4OAA_oOmUp2XfUcX1H}$3ytaa50?cJ(Jwa1*l3Y2;WHY{2iEcH8nL4kB$&F z83;m$$TQ%_(t%to_)M-IJ-fUd#-j9L3U0;oOl;Z%dKz6VTcHv7zoUv;p9>l|ZnDyC zhZ%^#{2vPYASK6h=q8vM)rS~-bM*zM#Fumbh{(uRP`&M!f88EIOj=8j<$cbNSd{f& zu4pncar&9d@mrz(jdWSlr(vDhV8vs3{JtSwDp+ft3ql0w8p;LQUohR>-3z>6IHw?U zLrmA`>wz1R|7RA-2QFj-X)Z7y9UG(g9?#*Ql0v=WBahm?p9zr=+*}=(BNP=COTplRgmT0ng96$7 zG6u#a6dD?up@~WJZmDJe&1WMezS`$2#r1}EcD?LCqndzC?EaDI%eeNb#l>J4J3|PJ zw>UVY!5!nUv!M7t3pkQbVdEU4wZe>Pm0OcQTxlp+`Caq1;ZBEn?}s0r^w+N-gCX|EO9qLjPpP5qZG(J*cqi=Xpa1#R$u91; zefh(O55o3SD2T_ewkbxTf@=7ct9BHn`{YRgd}>gAh!yZ3fkp^sqQR(lzo;=X@=YJy zASKO!G#?TbWnPZ(NgMSSSMl)h%%9rZbF*RR6cj`N;mP}*iHQl#M_XeE2&+1Sg~8jb ztdbBSzr26*<`U#c4eqDFTtU(4c>;V`{E@z@>NSAV0PuprR||$y-)*!5i2MTag$aVT z6h(Sfe(&G=Y;+)z2>33@HKz;r%StGQP=quzQ6y{{Q(zkx*k=Go1E|oY)>`4Ap<8Cu zSeGwPfz5+V=s6X%+ofe?UFLlE{XJzD`Uy$nbQmuMDD}#!s%Tvsq#ECDfIgR)aY17C zxKfXEp*NKawChpWg(b$VxK#H%wJekK1gZidxcEYV2CIiQ`3j7542u#b6gH4jq2|$Q zw1(qQVOsbW_t}VuNqpb`;yTo%{a=3PCxpTj3`Mw}lGUK)z~BRifmC9MHwx7NRrxgJ zE?_e1ydc5xSPd`&`JBYnr>zYJWp)mZMArv_uU>(?$%_G=M8X@zlO^gPZUQyY{;rCe zf&v%Fmh@2lz1xh8mw~dL6Bz>4C_lw+8TPT(H&-5BK0Y1@XYRDalRg)bP$+K2-Is7{ ztFEmTLt1>`QE?UL*x1=yz+AEkA#v!qqvN4M0{0~X+Is=M9PC8KWO_}#j>arX_ca}va4x9GsbN6@1oBNuNojB)B_(w| z+2?>A(I%9b4IVnD_keXY+QdPwg(PoX`T4(+=Nu0j8MI6iA4BTQW?T}#IBmWW5DzLK zbW?ok2Gi#1s-CUmA6T8r%E~A{KEArCx$p0|5_|igZh@+D+C%l}Rv<_+tN?$7R4EVzkrb!?Y@d=1n}d_HhZP+UrLV8=S(?@bRm0Z84@y97M1?x# z#p{5Rr*GzSwZDV$wAoBVwh0BJ~c^o_AzkXztVknqPTcegNp9ITDE z>$uq4=Llk|4PPtGa?eP_Fzeu2D3(9D$O2?(2?V9`haY3mpwgWpas#Hr_)Ha^1T2qt zVkDjl>>SvvzxVpV=sZ~Z`E%1-Ci%l5O|R>%U0q>E-T(;-Y<l7W ztw2a1<<#qVDfFditil!#BG`-NUpQbJ)~a%3f`Mtbb~-!8N^zn6&kL}pv-dG83)xgn z@Wwd%FkNI|k}@jK*CPfk&i%_2bbE30@%h7qx$iFdf{IXDp~?A9br2jFNGb#*Df#$u zRIc_@b>Ku34u&DSYWZBhb^CU@Y%~oD1=akJX$&DZ&gK7wm)92`V8sj(w?j}H`D#ZC 
zBaQeiU?)j{5DiJtVnNl>#RaUEYin!3K?VX|tgNi$F1-e2KlJe7L~@l`58^6)5cP$$ z>RE$IWVV7@1=>Tm`n@m3tGXXt4J*XVhc$~P7XUEPU>T`bwP=eSk5t}B^%izOa z3RawX2|_XWdS;+J5O!X^u1sSxR%S&E5GkOqPZ>h(G$<9W`>W(IoZnY<0o|39aHC#* z{VD{hpWAkfU*p1Kf7PfcGL1|{9f$+-R!3wq0I}YLwn;d&!1Tj0<}2~@Ln%v~CvI+T zuz{y~QbeHVV}?@G_U#{-lVDIfWEQvb1?gV& zG|6#Ql7Wu}u)bRmmanjg2pFVDVTVIqURd#hHCH65pyagZCq%WaxlUqH$e)@;dW}da zM60K!rj~6B1(?&PlI7o(5H|rZ|N1)uBeVB{v;my#4u_jYbv0rj=~TL`QP}*3cxYs9 zjtM)HU9a+WIR`T{W;CsM(Y_Xm1g!6`U)D!p`EbW9dT~g66oXqQ^Tc0@u(YpbuqhLEuKAznd*m4A%LGnh70)8zp01CfNQA=axY^!T)^st*j-BL)s zM-2iq(FrXy#V$IacHcM_Um$RNC$eJ(iJ{oBl1 zstTPlzZQz*q@?_F`#AuaPkNt3me&drW4f$Qkb^V9?3xT)8^6QM1C5&Hw{*eK{B~To zy~obVYHWP}&!0a9luS@Iw}6p9;d3DvEKOQ3cYX!`SX`N9zn@(Sbos<{KJ7XV)Z)p^ zt*W{MgOHf93hN0G2hvcfgeq@c|Wfw=gk?YQRi=ZwX93pG1O1R+J`ghJe2}rggan)JK9=P$U z2f`R@OceoTC|Ng1oAl%VxKi5G7 z@C!K=FOovQ543Ir4L3!l(qUHl=70(>1J9wgknJ1lsoM9@qkFAVY)k_^enLZA^3 zW~!mD@AXFY;5-GOay<5Vl_RlBC?J=LcU*k}2y`RqSdJz;n6mG`r~(w<(b>5*m?Gh( zVQWTGPepS5IuhA55<@^o$%JfeIb1}IR9%nI<>lqKK{!5Npc}TN&(E#@ z2K#k&UM|11En0^9NltIt->*4(HdO#Y^*A-Bh)1t5LF*v&O_?%fL;sPwE?!l-XYueZ z9Jfy<=FSf}GQl}9G4b&9Xy@gfKfTBLg?ZA%=K*a7gS|V%!>@4h@Qh4M{3ZM`r~5Ny z3v4?(JAu?OFXR&xG_QMUZ*OQ;X>MU*0&e^f@aI4+xCK-b1h|HQfwysiY^=fE4I>7Ugai=VB1H6X(IDKLv3Dt)A5)0IK!vxSgB=>)YW_U=EL`0E2c5 zngQZFuX+REG*DdZiq^(fbBou2E&pEJhX!y=vmXW?c>XJ_4+@V{`w^^rfnS#$(GC=9 z!UpsULda`V;C!8!h~wemp>dMB$4T{nIS6j4_=*eU!mN#275;<4tX!yA>-N=WSwXVf z;~ucnbfY>tIvRYOrA2Hc@V34c20gLTAF4J;kRiZv7{Sj%urNqD&Xb>zl#>fnN8=Y3 zPH>y4l^h&?qNG&(cs7^A!oq?X8$qd9SXfjdF8D|r0D_fiiP1YTVS(sIFDNK51RV|HFVuHYR+F$rni&4cxMR@R8TEqkPpNcRA92@+a_hRsLQw$~ z1?h{FbayEz(jYJ(AxJk!H>jW}slZT@(nCo%sFXBFhm6vVbi;2yzVH3M-}jIAUN1Au zJm;Lf*IsMweGG18lVof>h-hkpD;r%o5d9JQCXB$AGO(KS>C?9=odZ~X1TP`34*(5I zUP>Ao8o8%i0NNY3%nS{`dHCKYB{hHw&{-V!zYB3ql>|I3#+^K+c35%UN{0)uO%Y8K zc+h9IwxN)Mlf4pi5F%&g^yJVE`L0l~Afh02`TOXC$vaj&SjxIy3d z7!gMO0|JbIM95`Cv*ZXH`~+YYt1ttXIMroV)JxBBXTlOT!S;b?`woCparm_h*}|H` zkTbtdtply>pQvVne6TvOL$tsijh(IperS2TGGc_CiRm3AjDdx5%bX3q+nmpEpcE#6 zz%V>K>=k{_X@IT;+)4;+f#4VLP*(IL8l7=wYY9hiy2VbT15Cr zCl1o+39(%aaE_kwXm=Nn#qi&V$44*EJ4t{Qut~LvWwTTg{9>rl&n}Ue7t|p zPMo=gg#q-6cQ}A$0}4<=P&Xh;orQh&5THeK7aN*pgmuIof)CdwY9-QT|GMAn{I2 zKy2)Yz)+xVUj}}BLP9L5_G6!x&gN#M$*;7l3WyOQxG<<;_lg!IeqcTLR_h(-lwv-9 z^lc)&G+1V%vv+uC1okN}R~C9YI!@D$+ei!TTJPO|m4<20kt$~vtg#cy&klG=chyR; zBn-n}Ejii%OUUws2_Ov66@Tb!_~?MnJlda2l9X^(NZ+5PuqoXyK9tg9su_5o4D=tVu zz>?v=QIraGa^4K-z)ujEV%)Ug*F(|H1xN`9iYdsz*qIpMt|b}!0IN<1G4c6pgcfj~ z>guv-pB&v?DLRVog?0j@2j6S&&lzy=OG9C1Y+{mp2rO2tfzxL@{et7+~`$uvh>^W-^$am30po%Wb!};o+(wXTWjj znwFW#(|(Jz2`b>+I{ypPXP&i&)6b8Ub6BxD^Fj1gNE9UnC=N_M49*}DE}Viby1z(* znja}87a0U3^CP?OB#W`!f*JTTqz=GVq|jy5;QDs_!B!U8Ob^^7mLlIz zpW;Thn_N%~2@d|XwuVhMy0cy2dwK%4nn&vDH?0g6{yjtA{;yS{ZmAF`1>kKiBJ&u4 zw7|v1&CNt#kcCGC;ws=oxRLmrjg3u;w`M=3v^!3XgHR`|tj3Kfurw{b+lNK_kY-elPc4g55$a)0^Gr627+zBsfEXFqWnR(1;sG4gD7btKRjj6lzF4xJ?k zEYLGX5A~dCtK#|d=byu(78$onez^0n$FdYEVhkn>LU82bA^;N5&mwJNxlMj}$_p1l z)rb(buv?g(eEvrWmVSFrwgB7RO$eifpr7Y^G)L0Lri{l`HMPE*LG0(wo~?lH3pkhz zuB+oWnbq$$5of=Pn1#oK!7RESyVC)$^cxx;;X}L!6YvH=Ez2XQW-yKJpgW@0aW7tE zkiM(_g~i6L8@*$l<`*lbo^-v+J9pW;fAfUo>Q8UULQbCK z9!*h$lxvgi$$j?~az%Cis~)G(;@`HE2&wzN*7oHK%-{`tocpAZcP)ZNlf72JK~302 z4)5M%L}a#&@7M{BcHT~&5hz()0s@&pAj#d(&g?_xE1-(F)6X(P%Ezc@f6_LywvIQ{ zA%t(s+szd9JXbYcE`5LRokgkk#$31Z$vd3v!=um6?1 zXc|fEK!uR2$ASyfVRgLHxW#+3Qvl;0>3@y-4c(`QRehGFfUotmlx&J5{zYk$Z+9OT z8Oy#Qr0#+Mxy_smO@end^C>_rP6Z}wfjIS=?1;9>KV}&mw3-a>%-3PaSfT9NwSQ~) zOY0X{bSpR!iVK|Ia0>rG(>mx~z{}IazzdHE>dWg)6+C_HwNiV}=&rN<&+BOFm`S;f zlb?NkwLtmin*J>tkBk4GSHvHd*HkcHmVJ36F2j}L--G%0)paUwB+5HKPH_Dj`u;uv 
znLGdR;?kbOC*|=v31JgIW%-}y0xx>Yp;GoZdZBPq&m&;{zRVr%|9$D(0u`XJfySYw z6mc;I+HZj}bnhE!hAWQbTf#X=Qf_W;zkdHl_z4i97AV92=Y7Q=JI;0(02IZksH}wH zD2lZn+do8D*I#oVE$v;otWDZi@nCi<^x>s-4kR9fbKB6!=nTyLsG$3wRr4xCi0^=a zi(roZ2Eq!LQFEHB%o-aE<$xf|u3y&yE3Q@JX73Sj{zYbHrsY0}3E^*vPe~h2th~&1 zCM0k4N_x#Ar$o5 z5b}cx1Q_EM5IewZU}`&FaUY}@0Ap$Ji+C;jF9B&@yNTd@FP{!-LKO*!P zh*$7v#XPpYG&f5Nod8j_zO~f|&=Q~qG80ES^>L{C`s70ZWsx(GDh$ocT0l&;cXe$6 z8UO%%U)E#VlF`UHEC|QOFl;AY#5EL>f`NsFY_@7TVy!^J8+pC=p$m*t!LWidmDCZ- zGqt=7f133yIH~7pqe1k9HuLahq14pW5h#Mpy5fIBwQAg*m=Ji2L>33I_>tdc5wWc|YoJC)iZ>j9=nDfm z(M2mz5I~Cj2zVLv_gC-U;X%UBgrW;lW4aq59A_^v*eCA$lR_g!b1L5Kj4*mBbk4+; zYH$2pLTC{hqxxB;H;gLh?`pgGg6y4c`?CrTk<9&oz$o=HF6A{&@k{u}?DL~h<(25>#Ig$%t%y#Wo=%S=oDf=nau5Vh0*2tF2-L}ZxWlSbroi#? z>8A&HU==;{J(hpxM`oO9MlNRH{JyD|1W$)4^lj^ADA{S5`S2sJkF^e1znXGe}6r`xdr2OFO_t z6W|laYuwfN^3}6dt;fpnA?*(w0D*=`RJG^1*5CaG@S1g_4Sv6BtxJD}z0u-eUOS=0 z(LE=plA;L+{m{tr0bIc6IG_KzS7tr($u+c*StIv5{MDJ^B9kjv@7;urFaKma%(NAP z^chIPk+{f(MSt1p6Mxg&&v znsjz)#PP(6^fkx36Vpt8N*^5^Vm(^tKMoae+nU?K*k4TU(r_w=RExs?7tIeoXY~#? z$2w%kD`y@~&E_renD;*tT3h<{&nvwPXOsCj|Iu8np`NLMC?|eZDi3#TnN)mu9fYHM z@a(c5r{aRTd-SYD5nAO!kOFjNMG-`;ks^~+r-dH?rPWd-FW>;A6Y|{2EmDSgs9SZ9 zfU(!!J}!Ci?gl{Ui<3bG{MN%?KME=v ze7esTeGG{bzDj_eNuh#%eYt6-9T{T;D6FQcPz4}*xa9WEhTkCeM+-_8Oy2BL}s%4>$NbjEFa5v}?bR|R4t z5O(db=d{560W#Ng8yb;m7D^|eYwK%gr=r&CSb0G>;gcd`JXl@K=0#lGB5=fj_Wl;2 zu=8Sn3b55Nv9a`?gz&+;fRaY*%0X8R=#%(RKTJUZh-1!628bnMvG%4uuIvNf3qXlk zK&v^hv)Q3mJ;`T4xyE@0PZEe_iGg)+*)ETjD{qfVPd;#xL_c?PtC*+*6F8z_fWyH! zY-M>lS2RRE?kztruR6UcQF{aAE+J60;o4ja{t0tDbcY7;3q7scCOnBn^s-?-nzXP6({ZD+Snk-xjyEjN9?t*rQol z$zf@)4%Pd{ZN+aQ#Lpkq#kRQd=OtkyVJmB=n(NnE85K?PN z(l9S#yaii(H}$}A`ri0C<2;R^;w4!JADY(r`K-HCdlM0D?f_I)9OWLgG&cS{p?28B z$(YNxm_ab?fK^-=Db6fH!jQ*00{21rF>!})fyfGgOZz}_xao+~+}wx<1gVm?4t4nEn(D~H|+DF{bePiMdE)B$7(Y67J4ejp>sX@qo(!a=N z0E4(_elLBi-~c@9CC70OMyM7f-S4WMbtCm7fF^%B0Zv%a zJJm^|9{9yZ&`Js2b?O8B4J|DJAi`xRCv^d3_B~0W-VzHsV7i+aOksdH_#;K?A{oDR zmgV&gfORlOF`G5u^=lXrfin>B5Z;LtTx0;5)KIS8r^mZM5U086B`CFkhXXJR7n&KR zYBN@76ks_HH5rUK*2qZ*ehM)j95GSWf>=wa4AY?&w`_#7p0J>Bte7+24cWZGV2&O}2 zx&?zLW)Y?z7a$yy0PG>al5NAi+b0?trt+G?C)>a0|{`(IRad-K+lnY!SL*Qy)|H=C~J-ss5**#jy>ofdx23E z^~xQh8?=uROcMQmr*9#f|62Mi;2j_d!%_@%gEk?#Pfbs!9XX6aiUl@<7W_%bBpX0t zOouM>Jul(>FyYNNDr-aA^^-!w3lc|vq>ARA!oQ(%G+ytG*s`c+CD&*W-M%dY$qgbX zOk9-PzTymA7s5DA9_q|}{~#qa2h&oZ=uA|-l>lz00T92_Z{$d){?1JDQ7x13*t+xr zw1=KFSt0@o!DRK4bLtrNcR(X!LP>Aj_;ln6P#;QF8i04OM+0DpA~#^O+KB;3y>;T+ zc~S1ZdGx>1>oGTk?@QhwOV-XixJ}lI4W|xV>gs&(=;4|7P2a*bU!JM39(w@MPA%ce zl;bZ??8omi#~i8f9zVJvK1KT4^=Do8JzBpAmK6l%LjIim{X50$5u|_&z-vcqzNczC z_F%^S{+{NVz6vqv{w)3g%v^nDY1sxGJF0aQ>=Zwjc{~(sXxH z5`+!ZdK9p$kSky8)uY~WoQDh)a>ji+Zl#srFPzHsq!4>P> z0xKR~O9)o%tEAliu$3G^mV@d%(|U7#Jq6hMU@EaVpsV0nrg>2hEjU5Ps6^9X6gUGA z0pc3gl$VCo&d~5v$uiDemE?P5KH3%}6ch?je_9M+Zb4yyj2x9as|8gxa5!>k=IyQ) zHsBQC^Zo->)6v0F0Yok75g5NIo}m=tdAJJB6geuo`-O&0NGBIW25<)4b&P&j5M%(b z*5WpUxF(b$kGIddI5?xZrlqHchiaZS-vGzM7eMLdV3I*@ z>IU5mO(hS8W2N<)clum~pFbzV$I`8Fd#pu5M09RlnTn{|eKTXRfybWvNoJotQf?FV zLZPAB6jyRt1QxpKiJEGim-`IX&(9A{xDKAF$N9QKFa-`VzM=|Fpw`;+ppW*)1tL}T z|vfNMw%7=&64d?0m_;lN>> z;RrnLoZf;8bvB0Zfr5Pr@9Aqu}D%k?SztVvsBg*!Vuv@X~+K$U8ibti~u z%0^ziaDfY6ZZZaEe>*f9v_aK(LiW|yVJT{@_TO6$<#%$_AAMqleY|a_Q^WuWb%MH)<6cz7D?#GbJdt7YF3G^PJY-6no-8 zOv`#azu7%-JHsG&FPQ$g>1mqG2lbYg=4OnKJKa#bz(SS($pgCxr|{Iq6Pwq{FjiW< z9dGFUUUZH|&VVOereh3!dwh@t10JB7Ees;Rk*qcD5~?uZDadNC>C4S|k|@UUrhp)f znHXyHy-O+K6A$sY*m{H;5O27(gYG4o#v!ACO>Mo47Y$-N184I@a6Wr3;p$z${pGpV znZ}nHM#=C+BEKrrEt^|CS6j_Fo`s~_acBBE)BD5hvGKF{d4%V%>HzhlGpsMN*3o1fz9_wp>`3zrQ_m6z@N{Y zx~?|AyA(M(I(Ss`OrH)2-8jv?!O(RT56^T(LOzlUZXQtDd=Q6JIk`|PI>H=M9O#`O 
zg~e|4J!MtPxC#Wz*2<=qN}1dpZLh^56E#ilTDFDk0ZTYFzh>H_-z(htXq*^NFZReoX78H9drmEbZ@JZU zaz%`ywX`SEc9Y_&vSvTLso!q4fWxIKEKNA*xXQuRhJxhC6O;x5~OVEHk7#Y))6@?@j1I=S$D)H9sE zX8f6PrfrtbInk&0Uo-Q?tdHF&dNN-h>$(7pRv2~jX69o3eGN992ZiD3`#yuBmxqbr z!ZBx2uU~#CAFlH)@&AxFrjn>hqTig<)^=VUQP!e{gb_%;`DC;EM*0QmS51z-X%DNr zu2Ku!GMXw^=AJ3?CfYPrq!DFRFZEeC8i9-90&o?zYqoE{TN%AEqy&@kW8k*OaG{_J zCT*I{Q@W5)RKes>_>8&$6J(&PoJRehV_QJK=ZI+Z5O#2=$V% zLb5ny=(c^L^A|5<4-VvJo^Ap|yl#(VIzasMrMWN*=^N;b5aU4d{|@0YvfKrbTtST% zB{_LJXhe8dud37EA;JJOJ2{3CWQYs|JyV!XAhN-hzzws4j(KAhJ4;2U)XJXcy6&gcPtV8JBw^7}g{vdKudeG8 z>y-Nz+pOlcE)SRBPL2f*qqASwzq#t>p-aa*%_qX2m+~N+yTm*v@w{O6PaT0joBf42 z#_frI!uWAFa)M^&Uy7Z_tubDV`5F@wr<$tN!25ITSF$FB*tK0^RHp|%y3x9LaAm#1 zEi@}jW@lwo63X7Kodh8Zx{JH3>iOCJPqjKadF+oD=}-1-GA8!UGmh`@SQw8t_Qe)I zQNd1}k&>bfrF5Cu?_)d?HoDVH%6*scPI#QCB~IDHz`k6Rr`MXN<8n`3gCAY*N%?(gB=1K$mX&=Irl(y&eqF(=|JJQsnANL4@ca$yRq>+Yz#0Iyv@mBfrcr> zN{ES~6tJ5xhKiugb-`t2G+WyXd<2-w@lbSHj1;9Wib8oY3yY2LWMBvt6%#{YSDb|@ zfw{>5r5`g{SS~AECKW)yv}45(yS9{6l|@I`DM11tK!pt|u>{KjOuCNfWS_zO*3eVT zO-{2C@6ppoBuQZa&VbG*($$@0M#BM-21F0+B?~$GJljC)i8W#85WCN~ofj zVLGO=#BrK}F(P12uB|D<4^`Svy|Hw_2^TH()(6|CN-d_uw37^4Fci2%6*rydNT zu^8&T-NajhLh#uzGvG&11#TAD=vNv!r~_Nb>hIvtBMn91B5eIYdlB4)bP#Ff?g*!R zf~@W@V?7^JG4U=ljM}s#IT5F1p}8OU%juKuiSjb9)dUiP@`$E!}W;yMEUO2NkR85!83 zFZU%B=|x3@e~$4l(0@{{SkVmcc^!nF&b-hbxpN0BcdW6yJY7~NgChFyA_-;6Af?RI z7dF?4^PHN$aj&jro;0*<{QN?tTkG`lg2d^)pKlb=OY*so>QkKNM|HGZM#?@UWi-+= z!Re8M3;MoWqMz=^Zm*5TS!uEx7{sEq82zu^Ij;5E4+lq5FEF%kIrM6l;+xS#~Grsk+2(%69JT!C^d@U(ujR)pO{G9@d`tnXhaSge z5-1c#{qZd>pajt%Xfdq_0#CFx8W&jLH#j2pqR&-7)0JXU0xlwBEi&uQXFWVP;0E(A zLPIEYx&Hok0hfT{5@J5F=$-7$NrN;1Ce#GG`eO~W|Da_lcxP4CNak9Q!Da}J^66oa zdVv?{NNCV9@3w*83JkaMiGYI6};!ms`aF zEO#0dc)%cVyRPaYBq})Ew=wOfAs{?vAIKaY9`eAQ2w*7e24sIm(C0`H?1__5w*`~& zU51_@YisMfAjScc*j5J=t?UjWoHdR$Ha9biy=2iW$cs+19mI$YShb)Us;qrCU7PUA z&xv}H4N2x&t)0Qp@!kV=TC2{WJe;)?-4f@3o@BNX9q(p74mh*3GJz?{GXMEo(du2^ z(v-MssWuzWEc@f|9vstIQc?+T0x9Vntf5@j$^?HXI?>_Cfc5~~S-)1mWL(R>k z^J32*6wQfv^q{}IsHjhQ6O!<0e8}IE%iw^5mkgh#z)#45bAO+mf+9h^y^}H_5|2Ge z+$Sbu3D}kPsi}<)iKrt)sG3?r)2oKx+Vmp4W(N9ttwB zhATp4tOc|3JttBK8P~TTJbKgznne_hyMx<2A}6Z=U}o=-f`=DM^=Ci?f?ezP+S+OF zln*sTNX=65TA+y_qM1Pv2Gpk?4E6GU&(AcuNt-8BcNT_+m_p3lS{@;?vs*K60d|g= zC(DwT3?a=x85n_cRx}0N?02YLErvO53O@SCz`M=Cf67&VI1eQ$Fgns}B(6|@z5{U? 
zh_zO6v3_&js}zDNe16OIC;;6+@-zTfr*@S5xu{6ah5P%iiG@Wgd{tC|488NtvQ~+9 z9>&*r(pYpIHH}9|k7%o|Gp9$fKX?SW??@I%aNcus8#fE<{y7G=T0>*wj_n*f26D?p zy05YSSSUA}=8ZY7k28VRCRk$^t;$y8`bbxYJ%>Ul`)Pe4xAF$uEmplPHPhh04Jab$A^#h5D1NIR1PtSd-6NZu|SI7fa^hC zz{~}`t3WQPMr}^FrXG1h7*-#Xi(-wPEG>aL8QSBET~@SV(zr4r6lvs<0S&EKC|nKH zEU)$Odw6osXFEs}#k~VTdjxSX15RJ-&YOIdlvm9532;6KcCafYmYXAlZ~Jv#Oev~_c`$RNTUZ2Pbw&l;HY>B4E++NCISg^G+)|HsBcub z5rhTJsZx+b5|=+EneIRbyok1=8rPnY(P^*n2#Mp`ZE5v?{pR`eH6>xErG`?L=s%UN z7hgoJ2(#F5ISEr1oSnji*l4VLI1`++)UVVj>QGQDtY5_2Uep@k;juxMwp{ zaOc-V!c|fpQX(QXc-`0g_WD)DChdun;;ph3xG%O@PUW1JMZ;%UwRH^>6O9ly!UG^=Xft9&v#@aF7@xR1cA(^qt# z>hHilLjWRZ)#(OK@vzHP(&f;+VMzjZdVymgL}m*Z+IT*p7O{(Bcfl$UxBA#}mlTl>!Pj29Er{g4 z!|#^crVepkVu!s`V=398%H~!d*m`}GsrEnA-%BY^Z0`qT9`do+EP!XIKyS8{R>CA) z@|={EcG*&H@PIk*6Vu|!rs7Od_vz@i$Jj1raSd1@{xyk&$>4PyX*&ypsP_*T>)0NB z^AD31^WJz1eXkqBf^TULcY8`uA_IJ4UcWY#hTcD~=Oe@axRdw~Tm{Y)s(A4EDSWo* z`#A$e(0>mdxi0|pQ&CYtd-QcOGCAbz=vH!rH?PNYN$@3D5||Nf54}vG3S8W8*x`uB z0esH((EkKp8=8~OS>SN{YTO-bPrf%QJGcRx3kabF25Mirbmz(yj@L^CCp;b{DfISw4sw)*gKfQcWV#9L~&QYu4b{0eh_-*((_~-ypeW8 zCE!4iMlz6XLD^BM_wfduFm0=jA$DR|GEd(ohd1~@^j|lR7iW}K@rQyJY=o=WnToM9 z2ZZDS24pv4O_b?fAXNPdZ>F@XsS9XNmTr zTbSqHok;}ppZ*5C-y}DjUB+1QMTV^~{@PL^WfwSg>>+)T@w%U0&XLNmzJ&koV?nz7 z+DoT}5s49dy+kUnF`?D_n=p{je3aQ%vH@;mbacbo=IhnHU*=19!=GU(w0)?XzL#wj zU_Mki=c(UNEtkEhbcyD{!FMzZiI`{I*xU=H%swmMUP$;iv8n(u)pImLpirxI^m={y zmf^;9dG~KbD+e3@5A}U0_rn*tz^85Rq)^9!s`J|aaZ}j=Q6)9V5#L#f%~%v@oQpQ( zTzB>>sX-5nv{V2SI+Lv(C@X|F~e1^dkD?NYq)Os3k`)lV*_r!kH)BYi3tITa1+f6pmf_)Mf3hJgVz2z*6GEd;_Y z%Q9|zAL|LBS1UH(J}|81&rFTOZBiKy0mO&*mSY6=mfToiq{6<%nH{##G+_odii9f1 z*}dyZlp>7)sRex(Z{Nc7-^=X#ZNE-&(w$UjzwJQGs$W2-35LDB-_Z}RuJ*!cJ+5^m1&cP{S17b^Z4ExgFS~W+;d^B)njbV)s?*%O;l^Htgkupb{_=gGF>sw6 z(-ijGrrWGKX(jYUIjWIjx34AaHm9LiU`wUvA?{Fqqhj)tLhZcroo3q~MQpz8F(FEs zM!RJpS8sDAZlE#0`)dgiK64@B(beVNDBa3ULrc@pg6^j!annmm;5d!pp9`xvq$F

    voh>G`Rw>(>Q#dGt%2ynw!WnOgG%T8WTm7YlHGgM ztEe)x`q`xe7femhG&MCHXIxj4{O6l9r*zXxHZRdYev3^6Ax1|}^Rga0wjSQcH#8cq zW@TmdH1Y88c;)5g1laV?Y9w*<`TjDQsOrcmOFs{fZGi$dyMS{$C->!=bZf^U> z_hHThs2n1p61N7 zyubh+qrd1O6_%FX3IUM_ksW3~{KJj$TlCEs(5;3vBX>x3`jnpMZf*rRc5^f|aAe?V zc;tPSi@9uRd3M|O?fcg(|N8A)MS{AJBQIPqOE6k#DJdFQ5zryDm|9vB_@BGUH}BYp zF{FS40iL2SKSls-nv;{0Nx-uFLsGA##Kb%t|2WQgB?YcH9j=Esdqz<}3WQz-rwiI7 zv^1YAEwg~AaFU5yNdrDjEQURNrjdDpxO#7HE+X8pCF`D4^9eAH#TgyPnNCm2+O=yx zKWKlNH$N&eftpwtj5V59P^O?WTSApl>ZA?Ak?>4$~ku0$vgL1Ca zjr?M2a)8AHH2>DnptMw9t;%H~#Jc9IKsNv{)U21#Qx$$1_{pd~{NIiey!ZK8I z?^Jg^ioP0AwO>f6wZ9MN%ji=+z5Q?@POdK|v9uJ2pY}a4(9?6l782gFlGFe#=SP0H zZVM*_c6j(9J^>y1&Jg_!agFIFI*{hyii_7LWR0#PT>4Ce#Zb$Z%Uj`06u=eNd{x5X z5_g<*p`!K}eDXzjY+VM_Kbn_aV4X2Ava+zO#9iUaU$C;S4c}4~)Ro&)cSB4wkmEEwHZMbbAB_4|GKaTBKw_xCPsd{|vW18K&cefN>-sqi~nJN9%F`D(@!4>Fsc@h-F2&-8e zGaMefPz-ywBJ?7}ZdD|4F?i{whKEn4FFVL(4?gs>s3_VqMO8)RXj79Xr43FAX zr)1n__3WMGl4VO<`Tng6(ju2``hnhqdr&x{>Ly=#e*P+$L3_^J-2$tu&o0>!LOb;R`<1ZBAMZOKQzO@FCK|pC z9!^29fQ=N;1+)#olvI8Uf=(3X7nc5-h)5nzm$;(D>Dzxne-mDMRj4`JVs0@Nx8w3_ zf#Xqf@4~_ottWKDy0uQzJ$7_FWk!+Ru=lLg(mW2`oU$Ij8#V(E41olT=^KP!%>17R z@LWm%Vz>#;dA>ZQQP74nB;W8PZ9S%Z;9!0WzSXvGf*e!tRncC!aN&F}uhN}62i%whUZ^`P z?8JTW0tJ@8ne;7(Z3xR0e84^kP90kBX~DGf&@7XPhj-{QwxXY^5IJdkefxmjJO`{^T5pPG9K5LX998X19sj=t1)eob_N!d+p>TXR89Nxg9t;Kb z4;s;!Uon62h>43Zv-kfqYWSY(K_K|@nH`1UM}Qdi-5<1HUPbi)b)%EFzdqsPHN4@C zN*6@aN?KaE*=}x&K^6S#4mt^F9-;zd!c6l-?Qoz2G7jrfzr@YJv~K ze8t8!gFmtg$D1GJ<_cjBPvSPg#lGqHwPsmuhqG|e+?;#)^5q|GXN*7NVWOT;y%8C2 zt9n1$Oc~OkvU*M`ybf!nsi{d5zB`aogc@8$riBs9p+mQEUXO;yU`9fF)O}M};;8(tP9uFehCn(s`V|ByJ z0a%Okoq1U3iX1x|Qa*vQS`L9RjPuthaek0X zb-4f1>Mi@?{;rz5GI69DTz=Ia0X_7tXj3| z+4JXl(4L@at{@)rebcPhgR7ZyOK_9o!45^5r)^;>5coyrTyyN$G4kzwW8>)BQ@!dW zpP*UhAtS%}Ei}7a8IUCz6KvlUo&|YYqN_X1F+9I=J*vbn8F)4+r|Re7DCcuL5<_;H zK3wKGzM;eML(jZiiw_j| zD5(8b1_l=l0VS@zpqy9qbhNYxPGnSF^K^B@TQmmOJp1*G@m;NCNewBX=kr^RE zMr7~&o~P@3|NqzHe%$wUUsw2ezt4G{*Xy}X?rJVpBekR`%SUD3XE*KMefdggO1%XE zJ2bZC0CgPs{(5^n539B>3JOZJL9xqbL?yIbKCZ>RfB)9Y#p7#2!@}ae4z4?JQail~ zjKspwT$+WzV^Ngx$N;p{q+!A5=Kz#~^tS_;Y^lS!c{Pe$J4-mZxzWo}gT)~;BN~Wx zTZSlKGS5kNKmu`f>QqW0GSTH}ca&d93ncT_7h97bRlrK3? 
z(S4!qL!X52MXfsa+3)UMv3YHEb(80vdxK?e-nzBgC;M*bkX0Tl#NI$tgn$^Y#gax7 z#rmXOvq^<>%Myf%tJGe4gJojEM_vQ$3!dPudD5R@Sf-*dLoWb85!|IQ8dwty!`uTW zk~H11*~laeh5i-5r^~wYb*wBbEQFK6xTaX^R8%D@a5tvvusUg$U^2o6U&wy>QhW)T#5qXYryqO8PaO@!=mp5c2MUT`l{m^* zA0eYoqGqbx=4_pG`+?6p!9kxY11hOo;wFJ9V_N&q#mO2w4RF09ImZ+goi+kZ!~;>l zsfeN2n=E&B_9H7RgWtDjo-A~m=EHj+Q)qX$2_lsIntqvWhDSmGL}*BO_$|P)toPd5 z+6Y?(KR&DZ=V<@u&qe&={h2 zNsvseobRDT*bIV6xNK<)5#a!_onDR>XI+ntH?01K7JgHYGiO}X5Avth-D-(h~@BIyu8sS!E{6xn%e>vO>&TbKE< zKk$g<8As)#i#Pr}ds1>#8bpWO(x0QP?d^Lp9OI?77TE8_ z4D<(DL!H-#?8^Xe84#@Dt(-uLhId2a3`_y59>q=9;D3F9GXZ26Y^TG__xAJW4N#J& zA6bkOyGSDoK!vPtC{s{#F3WFIO6aP3-A)zhFgvhkVLMS%GzH$)PyIWwU2>22O7ofD z5=HCAzcV6hmkp=+hcl;Teeu3`bUfgvrN*!IV{~*wcxTXGe6Kgalf9_(tZnros&z= z5Etd?Uq^kRc(=CdP7O6Nklq3jm#YJn6NESMjg7uGLlF+b%-w^y>(-Z+VH%nxfL7$W zB7Q;cu!dr%sUM_1x;Mw~_E~Ke#YSd{_lXNR{4{g>3pb4{DD;2%b%2j?H5D`r0Dj*> z!S9Ekev~_mor_D!I0En%HW3|WeV-Op&MgpxuO<@&kX(?k_K*-ged%o7X3L8&XO?p> z_XTc=j7d>BcEpv)!X~Z%5sEv zcim397Z|u6E!jBwa}1;%-TOa&SGl@&?OIY~X&D%fLICDAJ8Jtp2;vkOSy}tR1}?LU z7vnJCt?30|xXSyw(H<+iYU5~=*QYr+CecN#x66LZwRricH|4qJlTS~v`4PBdab+SOQqI^mqWNWZuIZ0i>O~#pZia-dq0$Kq1}^kDmdv@JRXxDW5hn+C}JJ zdj1Bec|2+*_N$ z4OIfAnqH*RCS@fDDSD~{P)n@`$Ot4)Tsb(p_bxJy|I-5OGW=m4$*+?k5BA2pp?dMn zn>TzP z&8_d}_vvj##j_(^O~T2%o3Hc6Za-aRgM&*}OiZ)u@*gAAp*>+$-~^vGSxr|>OhAqN z*4PT2_0yLV2_r{K6**G>Ok;;@6@mqT4^&Ph<+v~pkN2tMqktih^KpJEiqv~_VaL#+ z6?)M%rT-s|03{=uDXd5H?b*>qN$JBLNZ}{GhQ)Ls1HUyQZ zU-#Ig8Z1oPUN1u}4XWvY)B)>e0Z1K2zU47c{y+%KQX5ibhDE!L6mZ!7nK_$IBCL;< z=od*8hlS0;)L?6&<33c5n;7#@IoZL1kk(2}Oibir^|qInyGrQ~`}QBD5WQrR@9k?X zgB8YN><{VH?R_-G!qz;|%HAL>EbNnh5(kbv)HWI$r%*X zPU`7Rgsvh}#+s|zf#6YcAiB7foSa-giGnU7WxQ#%PSX|*pG}xP0Z;aV;g)CAFVkw2 zrllBpEngk!DBKD>AI;{S(v6<3n4t(qv4QQNcSeT5vgej(X@i_w=-`N1MSxRf=f4NR zoS`2e(-jefxMKj29~7L+E}3B^rDN`lh1|WvW1?*eie?)w?l6kd!(A`g|D*?D(M^@y~v&z_52^vj61MmrB79!mAMqRc}xlaLli-$s6-8 zcWtSzZ^&cfpH0Y;ja_W=+Ve@AcC*RNZz&y0`}iNKvcqTqbqS5*8P<&GaG zZ4)MN?ZNTK`;YLv(n$Syi(zw!lz5^Cz3?-pz1ng4PRxz(#Um4oP=>G|Z(#c?Wa=En z{Dr391*-lpkh`Ezcvv%s8;M#&0$9qIg-kT4_>Mw=^HN_BJpzHVV9qPY^S}!MyRuNL zMtD)b6ia$qT6z4H}>)uIAF_o(n7kDBF|6@?*X&4Lq39Smtr!dtduven3 zoX4RiK@Qv_DMxNajmB3usJy{MQ@nN3efQ9IG6N2xs3l=gEcHvP0a^P_-=deGG>alrCvz4p?IpYaL)mkudL zd-`GC8S zQP(w}s%hzFlh7e37crSM>`P11v<&>a1yMePMI!_T(T5WfSxQO@1gLj#@J8tI@1gLc=!HE84lrTVR|G5oZ0-x#4Wu$LLj-mdQY-bSMCU74QXFgyi=2*}KOgsZ zxvFo%s_Zaf%Ard=;dk%eEzML#bW-<1hY0F{0(L*-y!$&^3bNbCpO96;z+9^|ud3P^ zp{A&HJ+CeCC04?ZA3x@X(zE%ZiAXua@HG=5dk@&OdZWA2Iw&c4I8rFc*!|pW@mcY2ag( z4K#Yys#Q5an{urXci!RXmo4@0Ki80XEV=>-*=_8Q!7Kdg5_|~bE%;I z5e8F4o*g;sA05p`Bp0 zDtuF~Om7|OeL7~cvMv0A!EuQ!?7G&n$t%Ccmagv_bGdNiZSUf$k?{{FY}q4dzxcm- zW*XT#WzD4N*ho9`{B7}yTX$SgkcFji`6%!2%`dw-Cc>v3HR1{_MgFn)R&JoSZUf{_ zC@5eCgHaCP)AmK+2|bB`J+zpu4_@h}bb2n@0ll>z|6fwj#XTZ9hA@vY50APAMgB zq1r9e%aHmrHg>o%+pfFZSZuoYm1EOEw%Fio?lmTAr_Z_^tjV#swa?r=Of$1rz3D$O zz*P=W(ao@F6UwjX+V8u-&$U~(mt^#vgLeH3#Fv09gmsIYc;VFx0L{|u_*1b_J+;Xy zp)}1*%6ZJVNjIX4+y`-@_2Y;jQ*e@B{01)8uT9~KT2ko5yRL7fh{v<)8}_cu<{T)Z znL6hEdACmJUCCL2F<1#2U%8{-L8TxJ-jNGN93g~gxm-HWO_)Qps~kc?8~7sVfrXX> zUkltaFX^cD#bS8i=HPDw2@bDu%>&rX9v+b;)VN>mud?43cU-V7;cBAlR;?&|9G`D5e)-$<_!lq9 zi-wM0ri`?BJnCY1*1db|(_ymrp~>-mm)xE8&o}GRijI16t8lva1M9>7CJce-i&_B#O1Opt?z8T`zXGU2q{n7q|_bVa{z8+0lzdBhs* z{oslH-~930F6dSrXPSfVA5V+XI<2bO@X8r9fNqj=!2?P6xho&d#4N;F4GK)$Uxmb{ z{5>!@+LjdV}BJ?a$cLWU>|Sc_=n6-+!Jf@AjL1O6^TBK`JW# zN#z#mO9rDpiYW_YUdk0^nQE{rq+f;1h@ZF!p!JB1tdOm645)pq<@_?lWM-4I&wx!P z=dUjntigAR-gN56A091=<+rOBdzQ>!F(v*)Yo+U%qPs`6NU9yz*6s_7b-s5G`ObQt zhA2^|>b2CwN`BEl73bzAKSC>pbpiH#kLZy5AY7ocwc={5d6ae>z4)U;&O$`C8-388 z8v3vd-F;RSIP3nYfkjfZz@^;{*fHDxfO@q|>nI%&Vg>4Yddkt$)2E5#&RhKMHq4xN 
zHV#KKnCX^tAuKsr^@;p0fx(oauU*;d7ZUdJq~QQ-k7vk>h*-|q5P$g zz(bXPq|->=v<|}@t^cbYp?$A!>@K` z>E=-5{M#w*B(J5V^+YS=fp&&4xIolS0+Qsoj9&m96}7C3Ig6~_Eh^=f9*yrhi|PKh z7g{;pEw-3)x65HFvf75a{GhwLi*eRKvaez%qyC}l?p`Zjd;qilv&EaO&h9?s(-JcO zpB_d+Qztb`i|+?!WlZ+FB<(u9^)r<*Z>Q(IRo4#J&N!x|^aIW)h^?u%4*&bn>(y*S zhfs?S!kUn06c!bw|3jDI5X)++O|^|wO8c6}`jed($1e!E&9xobJNwzZyWSh5651L;`-;+$*@Gm4n&fB9~K`>E>Xpc$1on?tARCO zLJ6*FA4#8ZEVS=kBV<(U_vMSaiivXaM=a;+WGK@;T{VKkO1gL`^3*wYL0s`v*xr;Fi3kbZ& z)EcMGvxk-%zD)A?SU(j)CWWJ(0j(Pn6#!&^=E*6A^x7Q}t`u*mkcd-j|AO-<93)Bx z25b})P^pBySyur&>D#)#TFI{WZ*Htw-){hgHfX!s`kA*ezJY~g5Ed>EY{6i72l`}c z3Q|6Atlx1!p1b$haSXTyN3B*G@*^0{G&l3rYA@?{tq)}srNkm>$ofT*PZq+TFJuewOQ)?QdHqS`y3-10Bsf|4n9MQ;X3?j zO&xPj3+o42S0ao!(h32if4Wea4?P)5p1?JC4lCv?qms1KY^&bv_F6h0u5}o8g}>2B z0RIQ-Vk?0snu!d6*_YJazkmPNr|~6;iHfo6!iP@HC%Qp>hJ~rFKmQkk3}&vT(_lsp z2V+{PXv@767&-*o?47@hAgek`J$B&eTNuDVigkQ`)Ajkp1qfBvc?w|=S5}!mys~u0 z>o2ynJvD&rHYGV9Chc0CTh#4d3*aIwQyWIFD^pDPbmQF^kr_pEqUy>$9Ww9!td zksCJ$XdUP8p1biH3ja)MAGhG_`O)1CQq9ZGdIKGuXw+8ls~EbelLQmhewS~;{{x^) zO_7n2A+4d0JtjMlc2sAx3{w9bq@tpFo|}6V-JGwZ^wptTScFHnDWO1z30_c8P`Cnc zkRt#1fNghwzqyD|gz9F%!l)KHE^Ocd0e^5DK3w)r3UV&!Wa4=g;7|k*vY({}PxD5@ z>#*v5-*GOhrP^tJudCbm_1VevFkJ;1+xqZos<2#b=I3I&a=spVQ?3^j<9Y4Z;DELG zn3*Q~ZQBB?7qaUj*M<#grDC`M2~R8iBuKl8RK>=p1ayJJgFFf%U_j%OmhT|Tv}SX*H12kyyEP93^O8f7FHJ;*oj|)>XOV6KOG9IbfLB!$EMd= za2)I?+z-zB%HJhd!Uh92hb^1FI0^?9KJNS zu%P1-;ukSw7Rqy%Kow8xEgnIsZPgE7Fzy{9($MGHEa;6PRZp+tP6CSwLfS1e7*EOe zMc%iMS)UJv-d}rh>rkeVawem_XkKf>dGQl!X<;g$j$n&CU$dQqSa=>~B)|ZA5r>Fjmi+EA4<(fQGskvvIiRCMuN1@U;JE zMWQr|244gbcW;GLpf)-^ogcEili zNIJ?6$8rXD#L(5+uS!9CoOdjFJhn}*a>wzz7x4B{H>6HD+%O^s;vsXSH6rae=ThA+ zJfaK0mn^ZmQ@Q~lz6Dqm450NK z=P494Hk^ry8X3(ig#<^y`H(KN0vrg`X4K!DJ+#DohxAd+AJ*I(Eqqf-UZuOQi@y&0 z$~xHC1L+8{!eOYM@OS7{^`2LZ;4{%V`dt7;{TdZi0{AT z+nys?;H1cznCyWt)tVT~U_E^cm=nw!T2Jhh*GWeK^sLFs8l?|_FCMy!=ayf&JrtIa zcl93P^sz{F;jMD-cUlRym)&;-3o-?_lDdI`4Ow3JE6oNDas1N7m<>Wf`R3U7D*v-N z1p4ivZL(X@WYO5RdFxgm9Qz)ht05KxD)b2HUwLi?YYW@E{q2NW9p%>nVP~_-zeW)M#T0>F30ZzXgW0|u-p%`6|jN4@)_a8 zhoPa6W074&{%Z0o=;(GR*$vLrfPvRCkYDw-2N7AFSw8l0J3;D5wA9j%C#rcaL-((v6k98 ztmTN8M%g1N+0wwyY>6F;+jhL6eLp9??cZ{F3fcJctHU~MQVxn`_IfA5wEY0Na?r7V z9ekm-4xe$N-@^ewh&M>`m3tu_P(ZhXjZQ=b3HF&f0a!M)C>>xH&jgxg3di&G;G?y| z?g=(XVCM#q&eVA-X;3Yb4ypC|`BkV?6cYqj9dw@)A?oT1o*T2VGe>kUZP%@O#F`X! z{`6I93K~N30?#36(eI#U*mafKUU_}`DkN*Ke^JuioA;irp|HMp1M`D;p*7RL$Q z07WCszi!c{qoDqmEL2pH<%FUJDn<4$jwdO^I0d26*0J8-kQS=C+Fe$P;&`p&-B7DI zh1w~v;Na<6j7F=&b|!-fxJ8g^oFSCk_(oYmgP=(f{S5X)t1Xrq~P52{b3j_I-S1FpvZ7_p_aF1?N2wy zXQ>z5!d39BdA8lv|1NaS5ZE@MLn#!re3FX`<#$#NU!UFCsnsTTW$AI^^|RFitqMu< z7xzAX5y!KmdK1qCUpF-sjmCqI!DlzCo6-Aq*9d8CI?|?dBi`HQeo4(QA*X$o6N}|D zt>TuJCr&VQAeBU{*7dWL^M%LJ9fh%rgYle=y}i9sIQB7;Va_M!Q9KVcOvkcYa$%!U zI9#?#ftLLij~l4eB*;n8NyKZWT3}Py->^B9njqtQHl!+&i39iI?!yxnM|RVj%E51W{7BBg zp1ui@I9eFP6nlF1TQ1VFpW|!aH5R(=DkeXoY{*8=|K8^N;( z%w)83$1fUVN0X{`xiMukqz;}x9oxTslbaB|^^spCjp5_@w39~g^FV8`7FLzpm?|BL zK`xTi6wJEeIY_Q#0_D8F4Y@gg5dyZGq>+#*IoAm9(w>v5?nOPJ6#KhAumsipwosJ|ruTx%#mI20y+kOkjdqAEl z2Pj?!wpcL!;N>KT1N(Y=%W+K;NumiYn4Ow>4=s$CocvGj{i&l6K0{j=Tg@ag$SWlO zBRlj7ub9kE+Y6mc>r`5H)IW$cMQ>s#cWCvon0`e?;@?bO6mC#h7e zO+6algPU?)hHOR5muw$iSt(Y2FfzNtHZ5doOH3nQs#{syv-Ul&dNw2|V!7-BEkuw@ zQ`4guFakyl?##-W%szXqbj=;BWnLr6?t|t3v;bmRf(@FgYCHh zHJKhEJH3{DIlBi3D}`f(PU;Wx8fFzFZ0e~>Zeb~Ol)Y^HlwQLC<^@~YB?S_cr*9w?lE2tZG}kKt0lmLplx zN;eh^Lvqp_`Y)(opB=pj{HZ$MV{s;~$qJW@?gqLqgM~_%+Um4&y>ss%FYJ$^1P2FS z0u-SdJOcsRbGz@-VEIzthg8*%chPXYnANNk;_B;4O@q~U-fiT++A_xysEh=<50GhM zc)A2)$;H)`(OLJy=kMR2psSUgu!oHC9dWk8-J%AJ02SZKhFEcZGCe&mE%kn3Ja}UlIvs24ua3sWE+J^RD1f^w(WDZ!vZ&JRl-?`5oxlJnIDdVi=50&-op8+6+TBQ? 
zNH4^7yl%{_KJSpgI-O>|jEob?*}c)*X1lH#eVo)i7!y5^W42-#*07_*!{K2YqnLT+>v=k543ie8%t2}wi@lM8?rZ^zrL{=q{trv8CRW$Y59ijc>RNL8e zNvl?-8U@d0bsQ}#=Gkx{pHEzeYmsB5s_>Cun`l~-iRq>Jl{{-+*MP0@A_cCxqUO!L zPcEeA@XhajCc(;h;2PJW#DpcRdR7y;VPRoaS8bYgynE^|3L0soH<=5Xu;yKjQ$9L> zXp@3W`<5*)Ik_K<3_qx>>dfgLV?fRT2ID8Fp^>(Z&d&SrQ4@K@czONd#v$w)LOs>H zC(R!|vg$UvxncWj_l$gfYAmE4%{~ulWc@KV&nUC)7CR-jfm)1dRg)D%zBtY;X)qw4 zuYO7p(2ClPUC9MidZ2PSypnIwxI3~vZacs&C>WTMl2W;)tGC~PYoOAY*=+;stvh#) zg~S~7yXj#6@YO+fdLQM4cO_G%LiSrYf+34SfTnY6zU>aOCD7Um|H%>C{(4{gy^0Z2 zxKJTA(LMKg#n9|~U4v)_a-1y06O)qc+)mxKR8?vXJu$K z*G>L94gr27XX7erYBEh9%{8wLpB+TMM!bzs`F=t-K}Ax^QemdqYq&PL^vSzomD2WifbqqL`O^QKJ@Q@q)`I~gMpV#9DT$|_Kb zR`#EeIP)1gaJ1}6AKKgiX7mD`uKqoOBOIHIOI#c~u}H?@lTvMYxeroA7@e8PX}f3s z_s#p3(?d;Y(^rdqn+Mn#3kotSFkLr?gOG&SP-eK-k5H5pG)cbDccI2kUhSOw>D0+k zS~YakUN`;7V<8Wx*sJ-jpB|jhG?cGsY+IV49CDrDgfQvWy^Z@(3?z1hJSCIX$Gjo~1sVxeLxIra;)!wnO*xqYs zO2}oy!=lp8qiU!Z3~b4y5QH*vrrL=xgHOqtm?TaV(YTlhwDDNGNmyinr2H^yASwFu z+THH=2PTbz)Do6k9u~~X2h>|AK90Hb`8&d!7%T8F#}dt|9b8uprwG^x98`N zVzPOfQ8d!iqX3cCJn$)jtoPA*cU;kd@!*aS9BAAOyBQl{-kV2?1{D%Wsj|!dN^7TS zy=`h@gdPWG>r=R%o^sH>XEl_7?mr#Tkp1gEQ9zdt!M5c*+HvDvq$!-tuHWh^DsBTr zRhyT}wUQOiQcA8jt8g8CZ>jQCL-^?}w}wK3`Mryb7FNdvecTA|9yV|wa6pmB)kDR|Dp1m_0qJkdC7&g7b(v}(?< zJkXCzgfOg+hSYKW@59lu)9a*qjAWOJlE~G8z%$YOu zT`PZ&85l%YEz|hF4aFEub`|!p+Jy`8Rrye`BY1dUDK6Hb7XHll??b8HhUj*(>8vrJ zO*OZoPwhf#IQ=1-L$CWDrjxse*zjmid*9@2f(9nM9WSA3_Al(R_e0Ly4GQx7HTNUk z?#kGW4Ns;{Zx5E|L%++}#Gsn|HOIF70JK)8u$Q52=SG=BOp!s@Pb{)t#4KgMI1MBQ zb)tIaj^N#?=bjPznGc`{dk!fJHUtc9TykBZ{soYo1| zN3Wd*x1gy7@_&e2#a&RoSwSWa<2CVVK{zHR_YtRPSwXp*73hShlaCKjA!(k9m~4D} zj@sYBJL%yKyE;DhX?AMqqrQlxCHr{BwP=qw6{hxy+fSWLcDb#hX-ENR2m3BMiJ~`j z127^&P2_*?oE011!?w| z+?S_YC~%iNL1*>T#pcPWV~}U~-@O};{sX$02jCqVAPuA7@{_kX`W-vEDl6~cjbVF! z!h|w8T>uy${OZ1EYLV~Id48&JM=U?dd=b-kuf-izU4rk)2mrzb9fy~BDDjEIzHEwc znT|Nx@y3GdvsqPl#?Wi@9VE=k#wQ#83RKIri>ITBsCP3PcbM3^NNu-9)I&^ zI979V2U6(7QU7WtOV{`u(L0OXw8q(aHoB7a$8QPA`$59-BIreU__WjDM9ENl%&6dJ zfWrWnK4_7<`Ro^wsYCJ(kpa0L8GpbLx5CJ{h8V}twt#7`lSuuGYRk9Z8BfM6e;GJN zA_U=5I;Ko!f%l4|BrnH=sw0S!PhDYI&#|wX5kSN{;4VbEg_|XKF-y8wX(2ZsAK?n5 zc~AfmU1yLfvUOW3S=+xp#S+1bLdKz`U6iaN0+Ymdq_fJpvV7<^o@jVG6s z=Aes=m=OiIq@4M=@auGcuYE_*SN2*4#q;-Lh7^7b4(eIR*7|Phv6Mh02v?jDf}NqA zr~*zSg<6EVoP-~CKj)7Fyd4pFd!Z+f2>5`iO)husl_F+RdBomhO7!_43iif6XxTP^ zuWA|06t8w4IU6u8hpN~4z{lx`v&uD?F`gD#S)#}-{EJ&Na6QSET;WWZq~9QOHfUwcpJ&{X9P5(m4*ZZHc&_Ywzhrz_udz$bAFQ2lY$;&He2h zasa2p@uVVkc$~U_{!|3tVSk6?)+Vk76x(W?p;Rt|Ne^{j**&R>+Q!JIMINL3eBVP7 zL<+g!69mTp0dpp=oVS4H41Me+w9`aaG{OeY`bcNV4sXb!^)kL7D+lgtf8=im;Jro$ z^4?XPh3k~aMky$*e8!&hNt|c#gNXvUA)oIxzVns2Kj$O;BiF*DD{k5Fe&jD$W24e& z)cBA~$_Y90vBz-P$KIPaZ&HX=ZjH|ke9ceY98VB7kiJ#J0>+-;X9?s=YL{16lH#e` z$;XGcYgllm3aOFS09aMc+KueDo_E&>m#xN}^6%#Ef{&9p&fXv+6?RBTxG~KkctQ#K zl5}D_nlpJcHqdT7f26en8!p6D@G&BV)?$1Pk#}-vW=O^sc{&hIRheBszpn|i)Ui&l z6%Q13X95iMIl%g zg_#*jponFtB!kfbP>4Dgh5)Vn->o@5Kx`cF$Pu__*-;=$@B*9++ye(#10gDpgcpb~ z*vbiUkf=i`76Q5!@qKgmR}vIt4jC92gtI7;d5Oo3#xPlU&mM0)8vfjCNTmG;Z9idX zE%05*4L=eR5vWox08I2bZe=NH1nLz#%#l=69X{W+5y$hap&O+?`cVZ-dhtBbR8s{37ilKA;3zfTfd%5Oe|FFV)dhuhpwC3 zfRCnWrCu0s1wV+17KU+p&};IrZdk(g7$@7Dfq2*|+%9ke{WsDoexQEk>&BIOt$5;q z!XlSwpTUCNvTN5V6#3eGL`|ZDkgA|N;qION-Cl_c&flWTj#xz6 zM-?6tDQz%7D#G5Z+#Z2_#~a)W4nm`|3y4vKqamc8AXGts+rGFmLrNiZ-%xqJ>+Cct zTmg?#j2{9=1ph34dH}v0D|!QQ(J3bY%GQ+fdIs2^_tkz=5$1I`FK)wP5_wM+S#<2| z?68G@GRt`e-87G&AI>%`L>OhlO0Li18UR(eJ^6EK|1}PZIXo7cJ5As!tIja;#;sN& z@X;8|5?4G_7UEc2m7;6*E;&uex$&Oa?t5I$o^GpEUKHRIemXiFFe^l~;BW(2^w&E< zm(go!@`e<$%CeMvq?&S*#V6rjT*CNER2B}pPsPLPEq)^RFbPdPkO(NNAD$-lw@7iFxe$m|p zzbLBgPB61eT`SAGG0&N!hamd}<}D)D#2-zTSqn#jARZtLb6a=q`V3BpoTvzbGJ!N5 
zO=2BeLj6b7y$InnfqJ|&1mu($#r8fz3jv7TQTD8v87C%C-z@>Bl4~R$kUoI*gqBw0 zQBx9$S!EMBf(Hc9M4fM$o;83J+A0%~Z7%&8P(t8N16d$FpT3UN04RI`76M&L$Z)~Y zme7LO&`AmJO*HHgL+|cEc#QTg%<3wD7!GS~hf}u3U>05wu#Kc@cvv^3DWR2+0`LaT z91-P*s`EebT?G<~0u|bu6R5lwjx^ZzZ(%IL(=SK+Uk6{+o;x$bfE;98{=DfKN9K~s zj!-q=H`;b<*Q{Zx+MWd)02!!MPlV&1Y=6*kUyH6j5Y%c@&-4Lmbj(lrED99h6d2v?r z7#}nTn_q==NJaQF01grBw7QX_*iKoua5VdDV0z%rhgW<5p%xsp;VpwWMG38#^yrV!Zp7JrFJglbImCD!>CQqV>hI_A$?TJUa!TdG>%ziF zn;i$Q>;Y)l18fgTad&~ACwv_Yub#$M<*(8|A;#aoytMQPc`y(f^`Lf}wmkmZ1flGwHk2FM)E4j7IJpcPx$*rJDDR^KF^i4%4axIPKNTWBngSy-ev ztYD$h%L5Q-D|Ae*l8dYaAxJ)ic~t^Nk$%==Gpfu8_kw4T46_h8FqyXISwF0jgHIj( z1r(j6lHlvF|LQ&Lxl}N*_;Y0B6EeolUSM1vBFhJ>h%6t%TRA%J#ngmw4L(Q2d z&~~Jr&5rpbzO^nIJApsj7`PjpX-xvDrI@sm;unFlfic`|ZmeM2S*(}Z+=;UUjcB5R zFMahjAnypwnOJxYLN_;Y0YL=hQ<`Nkj-wVway9Kg=S78I<_-?8ufQF{MB(>m7ZhCY z4UEJd7MzRbC!V31j(Q&|o9g-dn*^Lu)bcPO;E#!3H|%(IIaZ7T4(^*MFyAv8goh_5 zP*{m~NAIN;ExTeCJ%6xPnielYOFv-m?CCARH5vKxxni$kOO1cgP1{}Vc%D~X$YGo= z=`eO9sP|Q9{gB|RC*w}J&mx1vKG;h6yF4}R;+oX6xM-L$?)8?gO99Ird4jSxV`E_%P1K?Azx<; zhEy?#^yH%zngTHMYz*WY_QWye8kI_>=QbU!50QE;2dg(GY!*^XWOS41TH~rwG5^D| zsg+YUU{-0rZTrrK7hEkn+z!1sU(k2;rNfKGj#bj09xC&NeaXk!jnI}Gm{vs*Mjkx^5pry0Wc+~p z-Y(```h&Ke++n2r^B7#D zWOTvib_-f`yMBGU>beI7uU|XPjv6k^{3ciVIj}t}NfX}(Xgb>A(7}VPKL;{iaL+9- zFH3<4hT*FZU)zIbWC@5<$7|X?!`T4p8XD=FaBMcDXza}uC6+x5QP5}Lyf7CHkp(%88#~sscn94G28xrYoS>+En1sk)CV?lKEF`vNcf%K{mFw zmfwpcV8uI<{8iZsgv$rbwBUB8jjUMXr$nVHOH*Z#9tMlkAFmfN-2mW zNQg+O>BqlD`X~_i)ltxY;3caX&)xw@E^gI$T;(I6tS^ZwTXEdiz{-x1G=>*CliWv} zJ|mEUB(^%r-bNe@h>%ZL??J?edQyE=4eQy_@RL4vM)K+1ICjH&nPF7wL^z~D-5KoS zJ2R5#qS+x&{Uq@K(_1}S6ShjI1;sAB_cN&5i>BS|;qrpqVf64dIE4$G$94e=!5PPE z%LwR@1iPuAVb<<@44>`b8EZS%62XCdjMc&Fa=$+&^LCl`{vtLv3BIU6rmSdCwK@$N16 zN~XQy+p1hahdv+`ugL?lins)P|c)4_v3qdGo@=V{8G!Bod0?euG}+`H&JFy=U`;_EHoTjRuT z{`1F6m==i)?55+zpX5q`aKbbcd9OwYrzM77M<9zIQ>0mBTtrYk6p}PWVa>QWf?bR= z{RbL<*valadc@o(gNOR>wZ}ewH-%&Z6Km@lEJOTU9zdC@>o82fHp^lzdhpO8{#P4! zO0Q;N2p>Y|iJ>8ARO)WTyqIFOsP-(Q@4#&FZ6qKRiSD@&-6N1bViEA=akZu7R~Jt0 z=lM5f-Ev{0*9^B5VP?kbAr%LEzzpmv>5~x2or9T-0y?q^FH|k-Ri3^e)Tw00zjW!y zbw2ZDo5qd1Zq$7te0MY;OXTYC&Wjy8YGt3l%8x#G{eGyO!@cOg zJ6}g;j7%IZd}@jlSkjA&H9bFm z#7za%eSZ7)29iWr#Vt%4R<@e}uz~qgVZIP^BD^k-(MQIDRRSuqi%%=%4b*xOf2Wa0 zskG2{6JQ{=DQ9TgtF4g3}`Hyj|_ogejxA_tBsS%+@?<(c#*Uf1Oe41`Sy_I48T2h9#I^C0nVG6mXi<=pyd!G1hmZqL=8sT3qj@pijaxe z8*#8=Knub;-$C9{ZCW)rT@8@}2GFopF}P=c^!F1POw}?;-ma;^cbj|1c^i31H^_94 zA@9+>SNZwo`1S-Cz+SwMY>&Xez=={zT<~%HI4yx}kwR zBo2v?W(c^5(4&Ihj7TqlTz<69{kH~qbRgC-#(e`0?Ok25Sc*X9%aLk?AEEv2C8kh+ zA_axqT_?Q{*vKU`l{lRMSeQM~DvON{k}H`D9ZHp6HG8Ikw{NcQ($TZm3w6BoQ*c>< z5l@*OMriJ@_kA~=mW)u1Fy&D73w=s5Usd@A9_plX;r&DSU;R)xI3d9_BsMn@Nj%8Z z$dO;J2vjzW344cl3H)B&Jw5T9b-Q1EVvc_DL~y31PKcZPHWp6x1Hs3=59=dMfkcN5 zJ3`I`as#DQ1EW31qQklD7DrTQ4ypW2^L_pOI*9=npCu-e3^ad@~QrYZk`fCSLJxU%ort}cV~HU4u7*KHy>%v<8_ zO&<22ShFH~so?=vdfG*s!a1*%Qs=`f^_F%YmiO1>A2XCZQ1*M(oBAD9#Z!83FR>Y~ zWH1>e*|^+~3K~w?N+TCUImFNWwIfRM@#_hsHw_fXd20d6ytOI#O+|&Wo9?Bshd*C% z%^SWh$A&_iUn)PqComt#KD^fZ7|YR$v6$rKOEGnVw_FY)c8Z9uaFY!5i_ z1e%OY|102L2!23nKdzu?(0*L7uV`q`9)Pazwuk z>5ni%A&gESKl_p z#>_tJ*A~g0dA62!{<2b-vGLRdr_g*?WrpSvCCME_S})Erai5fAp0_yT5>&}EaG`dN zGFiFddB1uF!cm_tMcg6)CqDc8Ga2ZDp`Ld@gRxS>QzYR zxb*+80|BG!za?}sbl;*^oWm`v;YukG0M){61H5`2kj3kXjwGJj8%G7-3T~v}!!%bV zn%W5&JYfw1CAtt{OR?*I*1-M{zPXsIK`V>Q!;W6tAm8xY_sG_Mtva=yqy?4s31!V? zsml1%9@oyfNLBi(PO8S*iCePoRh6?`0ZI06Vd~)k)kxP71Sk()%s6}zM+bX?>i6oj z?%fTSI||-nGx46yV#6W6n&feS$jbqRfhJF2^pZF$iHd>BbO~M^)pnfE+wZd*Zo!7q zSbdK!$Am}s=0S8i01U#sita@sF#NO4x#r&w3JzJQ*R&_{P-~_N{om(kTgNKyN7_iZ zN6>2d;B+9%3~em5+sCA5(7=ka25M#MMcPAGTR)#0^VkOHXHu&km+whsSNZ=IdFXasFX! 
[base85-encoded GIT binary patch data for preceding image omitted]

diff --git a/images/openshift-telco-core-rds-networking.png b/images/openshift-telco-core-rds-networking.png
new file mode 100644
index 0000000000000000000000000000000000000000..c01e38f34898832e1545c03b3cf11c41781dd019
GIT binary patch
literal 83846

[base85-encoded PNG image data omitted]
zpi04Fs~bUGY}|nr0722o>lm?fTS*4-!+#x`GZdBz*h{2G_sgm( zhyiq;zHVyTh(I7M`)zhm(7MO!(Kiw4Ev4WOHfI@R%zRv8HT>bgPDAKjl+U|P971SP zrP9jZ-P6N?&wGwDM}kS*>k7*^QCCt^`4?01=^hWI>Wjb|vXLj-!SPe=4*$UB9iH;0 zTfMph)ZjR}9Df8$T7{t#>$IOqpquiUN0Bms{%by;MYHLJ2HWcXPoJ)Xdn#>1tauu5 z4AOKZkhGY`;Eh241rP^-ID4=iuOX=%e)tFs2T%!3Lql5N+bDgCp;A}j?wU7G!*Q27 zj@go>%2{hGkg$fcyndTw|BnwRbHOb$Zzn{+gUv0ZJg!A>mAqmQ(QtKlcYh1mq4i@M zBkelFO=8!9b|6x58h$Srlhb-`L~cyLTFZl89~+Xxuf8T`w{414M-r z|nqF(eWW_Lr}Y%&+@chURGhaLTowOHGtHhMJeT4U_E=cZ=s-A(Fr>Z zCxdlzM|iC{QJzIbM^Bw`*DKA3zHHU%)eW78g~EyUs^;};A2g~5+z~TuZ(@u<{MJ_4 zTbN#fFB?8{FFq%%4#Aq!>`!|%IpxH~{Q2|8eHIfgKH?NAtE>BB&!D^7jo#kg*}1n} zCm|rN&6m>N3pOk_MYzt~1d#{xEJyaN;qpJf9~fQms(+EMFJ9qdyA@=tRYYgCo5=ev#FSa(nT;4%9XtUUoflTxRE*QNk| zAFb~4afw?Av+=`pTPXqLQPRTPXj=^l2Qfn~Y*TR>VP+G@a)b4vR_7_{1V~d8ZPk5T zhWAfFR`vH~p2$e+lX#rEcXKwQW01-Ec1~(In~4srrECHpN+nCPrRox`4^sBQY>t*q@{j9kxQi6X2+8y z6hjKV953(j9e=+P8(I|NQ=roK)L#g=|Dsc% zTadpKUhA}TCx=Mw&TWjV`s}{0(DOh&lO9I-@#=m!o0Pm^&5nEO6?T7tByNb?L?i1P zt?66T4jsaBSm~l6r9ku&gLmBq5Em(uF-CqEO9k;Z|IYn;U2I$Fx;v(Da9o*>b6i6( z4!yL1cv8OXUpun_S$A9<7tv52rFgL*;nj4-6#{Qi*cL9#ox^EI(a{xzJb9e2>uZ$ z#Th~uMZSx?257JlmfNChZvHPHt)s0im#%x_fBES9$sQhKXTNM`&}>4F5K|!J)O-KA z!W>GW(fS}IGjiq3|NXf))R88Fisn21`$N3>je=Bg7}@&&ylR#pSpr@n5WRw-Wyyc8 zotZ1Nb}h3XPuyd_|GA)CYl3Ud?Zy<$m;d);>2VEzS8qV(zdw(G)HnaaHk*(C%MTGd zs3plMPFMU-t|fRmFC#62ymPuznj4`O{AY7B1p7eu8Jm_ywiH;rd+J~U{=TS*O;%zk zn>i>aM^WTrK=We3XsrOqa>#C@`OkB?PeITDZo>epDta~D_}>q}|Bw7<86YKG_V2&C z&pRSLy`1?K)Rz7=9oZ!N{`Y16-xX3+;4fBE?XeOn7tDBEP5_QO zcj&K05LFD+FRDF=HZ0^Q8D2vm_Gh1xDI&IEM20s={EdURS+9f2Rbnah4+6#fwyA}cTe+z0O{BPf`MqjNJd4~4%q%PN})SYRsLsvO6 zWm+GF)AT1IN8ky>bz`EHG#2Ha5K^geee&YQ$$m0`}?4 z7q*wA_5iV>Jmhe?8Adl?Sgu8oIG2!CCK}boiML@yy5qLV{j4D*l6V1#u$QD-- zRs((vGPJFvwB=d^I=7O>REl`gL~;fi?;6xzh>0U%z(lzKj9`Y5w~-J6?Q9n*?BV9; zq!pqA%4C7b7&#VWmt4O5Z1l3GW^e=7#VKbjGUQ!nL?4%wYyoMgVLl*glkM@3r3HV25D{}3fYPGGYrwww#e5K$%rZ|8f6P*;%R$#x z$BE4-Q9J%<0tgtoR;fS!IQ1D1N9t1QikU7Wyei4cp-~&!b1)WJm+_lbO z^}UdgC8$KjC4>_87b&QC6-WbcxzDXm7puCvOJRW$#Qb$;;DVv!h($Gg~hk+1;h0%=#I&|8ZEWTx#=?GiCW72?ItZ9a;aE5q zsIB-t9jQMiC$A?ZCDpP5MkpvQhUmuC)h9JIwU(7^v)u92bI22mH28^+%(Jr_QP3U) zZ?TJ?-;CjZICcL?CnYJ(&vC|R@5B-S*hbMj%v^L6M7x5~zBcj3K?YDv>Q$W88jP0o@rDL|bOQm!cuCm?bS7??td&5qJp zKy4UsOm`!Su!(eDjVA}eyqjp?$DoEn?k|8)Zxdn9N#o?5n}>G+^2O%$(dNr@ z;cGsE(kg>#16!1)=dNnvw6RleVk(9s9pPC@fhw!ip2(?C(|`%;!Y8Yc z^z4Sw4EUr%zV7H6q6EjSbv^TZfHN1z_6^;s4o$j~Ixd)gh7B7`2eqk@I0hhfD3lwD zOR#OKe*wTZ#q4A{$^FA;0ezRCzEA0C_iPbLJXhXjBF{v8HDb zH#pFs?{#UUCd$NPL>v;BI3&kzXt&trkPu8rbFfXmkc+^->;`DAmP9U4@#r#3l*!zC z`0!1>kfLJV698Op7cUw}1@Vjo?d_4nc-q9Km@~V$xtGB!+*6Nl2h;HaK)xVxQMe3w z$oB*dv40l##NtOPnr|SVsRI=%YzJIA6=gcTBO~jAm9OIm5UR5Zr2LOGNAWRClc4VD z)6huC=Nc(yVzCMF2k>i>11)68ijD)2R7lh^5Riq{%85c)OmKa~?Lhi{b8~a$j}ZF` z@rIX|*8%j+2CTF~1Sb0>%E%=*H%9Xpl-6Zd}f=w!S!jt>q{6)7jt z2f?FM#wa2F@Zm#NF)Ml;ft8U*e1Z5!1a94Zit_yV^FW!sNZz3*-vDb;3B%7BUbk7~ z96myn1u-G6Mj=_7s2A25+D(E21kJvLPuB7PMvbfxLD!(A1m{JU{&+->0~n#wa~>=% z1S}hHE5x(;`|tksjc^^`+On5YVj8T#lRS6-I>(9DW&Oarh@=C2nLzaV#kFS;NLY~d zm}ckDiTvX=w`b2Fy~3{8WgBd~-vtEU;-aPcUv&^zb)KOvu5NUusikGCqmGt#z);tC zIpZKrFdv^XW!ti43c{dM)wxEPd@`JxFH&=~UPsSZC&T~pc(43Od+uRWAknU0{~liA)5u6V8W;_b z@@$k055M%F#Eb&<%EB{B*>+Go_NBtVS^!@vp7(k)GHnM9l^5I6$;o+l|3u7zqWjv` zRz3XQ)$Q~+`n?&V?GYCJG0K6A@13Ku38jM1{HZv7($v&|Fxh}c%|#<8 z0RkmMniW1$;@v?sG5OU9e_qwsyR?NYZ}|a&w4G-Y+>AA#7&6a2wJGJ^8n2G@`DrsW zuxM6S(%VTXz>uO_smH0Bh{!O_?-4h7^ZGTPa7#02A!^S;4{gDAI=YM`8tNuItbBtc zbW#B^l@p+%uAVOqrM9!AdV4GX&3zX}qqG}GKSE5<@`jx{@~Wq=A;&^Z+^+3i;_qfX z0-;Dtu&$+Dx6aj?Omd-0Q)4Ow(cm8A^Yr-g;me0Gc2v7@cn@lYq3deKS!gM_c#esg 
zc?e7;Y<4J|B>$FbE3p(b7a3~Jd^Ez`clRzMgochYz$feS^hMZ$(8t$fN3xOQ8yIm+ zRjd4Y0GuqMsk_{0U{rVB0rXt*?%PnGlQ~U5BEvH?GfD3u#01(yrMRwz(BoENg(2*X z==z@b{>(+NF;hTElcDI@xKsUZL(Qd#9MY!{&yCE|;|WcE)lb?SVok^t6w-4P1TI_c zSlx&3cf%ISe0lDWtNq(W)P#0W>!5N_Zg{`TC2h;=CR}&=ype zCY!G?2h$HKPhW^fQc@oQ%{$cVP;ve{gBVy@sThT*fwH#qX6P$+9DG;{telC%y-R~4 zb+oHi?V6r}s$ho-f~WkCbAY<;-oHOI83h(D301agAvg$1+t~ATpS7o4edT2&EMpR^ zv%FHgvyr!q`f*F9gI!prV-Uys06Oxor&z2E| zPQ|@9FRK=aJvwbW}pn6@hgANj7)Rg3`FN4GddI z0wDJell>RJe>(Iw-frz0E90WX<*mGLhjv`6h)MLg^x3QA<9-IA$JDlzuIFq=kG1>g zGM%u=Sfy`@7hB*D3=i~&NA^YzvT8$P30x}re>0NhK0W*=>FMkRrwZ5Nk4un8!ENhL5EC6XC zsvT)}nH%TvW%DhN4~l#JnLB%E4j_vs+H1{pe)z)!mHRqOVTYL&*c)Feie9sJw;>o3 zVmm2O)mh9o9a${s9<*e%7n?V~yCM`*>JWA`7%rvy{P|Ta3n2X&0j|+ih4X`9+i`xb zm&d=JOX=CZpgSicz-1&;Bg7U|HNUyVCv2@>9XJaW@FO7AV9xi{yN_Pa?* zhzw@7n#?VP224n7`qXc5uJWKGU2SzdPxYRfBwsJKKVuO;_vubcqKG1Y&37nuXjiXJ z?(v>d1|=izIWx(&sSeY8et-Wg1@}cJa6vNFQd4lw;Ya4A!nC;k;MnqxQY`VmyW{)h zWLm=n-~Ht#9gT-V2zpJkbcK;iT6uZ74O3$sM=qCzSx!!l(4hVut^=3#g8RxAd`Ug| zaHy9^4@VW7aGTh73F^(^q! zfETg1pF|^yG!!cxM=0a!igt_?>T+VlbX=8Y5=HMOJ0bRlL~bN*Ry)zl196}Lb3mX{ zJg{)LhF>`%Ecq!Fl~X_G#uTzvT$3-oChhtnz&7KP2#?m!%%K!OXnM=lUBq&WTc2Po_xlE0|(4^vkDcm{7&RMAt~!VNqq)y zPL26!(aM`T9IKn2f-#&OHz#v{DKR^m#%T zPb2EyUge!G)P8e$+dP;)J3%FfXTQ&|OS@~I$!!;MK#L|D40+=oM0BHiEjQA2{V9V=+LzoXI9 zK{{zNDlN_EOV%#q zP|{HvE-e)#8ls67LP5hA@}jvZQgfX1aKtfC!o_>d51+&+a^9z2?LLPeh}%AvogX}= zs;qpctgH;m$qaaZwGix3Ou5g5hR6uU0ORNVNY_AUdH67(5S|tWFDEGP;t%p1CvzU| zr%WRS!=7T}MJIST!S$BZe5;1%KMTHH&@!+%d4c&H$a_3Cam;H``o6MD1%pYW9Hz&| zNw2)9s%0xy*kE?0x*3_lZh0miMvU@CL)otr-n)RSbU!jBamzPb5O8?Hs02TKdZCcx zb0RpfEnmn71HEZyF0$z^%CkSCZ_QAujWY(iF#a&dMMd2F_lXHj^h|hFsh_SpPn6@J zf7dBdbWP};!`K6-Z#RUYs)D}zB!eCR}%gl82@hKqB$^fS>A6XWk_-6D&(*emJqI$FA7Akt$O8?-c40wd>ms`bu_>qCXkNm%g?>0>p#}tN1>X zUiy^sL=tPXV3O9RwZ60gt%;~fjD+pQk%mfIH1bG> zda9w&oWZneSMX{ahOL{7qba?75Z~CmPuo;>1ZKq05pD2a)zoNUfFKd=VyIBDRHC~z zB#0fkmbYQ+p8zO5bog+w)DGN9^M-oF5JZN6xxyIJ@@)uOS_+?bKD#k8K%WcEIMV~c zhdBzbMz%;JBMs~MB0#c}#gk)*#|ZthkX{n(L2olT^U%N6*h$zz-SmBGD$kzZf z+Ii@EMp81uuBepPlC~l0F z^)i7z=bV8Fu9xZwXEFl#%NpC_vqR(xN*i*a4@S{Tdf%fAoc$Tch7EC~KQ~I`^$sKz z)g#jbK#}j!Z6&6qh>+Tw9qE`yY;0^tsmETp2`5rhD&;mFliEZ*hfiohoP%}P+lVcP z1KQkTjc=t)G5fd_NJHfgEu{{+jVDE{r?9vomdUg4lwh`arifmD07>tpHj#Hzn(1i= zp8gY-rm|)yz;|>YV^_JxLa@)6#bH$jF%BfdD%{?jr(SqAngd z?2EQTY-0|#0CbN6WHjva2*3d3?BJV{ck}GqX8?>@4;l;D_$^iFRHcqkV+aU}j|dVo zI1jiBeIG#wAV_yHj0fAdb1GPvG>yfV^P*j=p$IP2NtUtkN{GQAnNSG0G$IUL=WU`L z%jR{FD~56haY!3NEmvS%k1C(ZM+n?$_@SObiCgx_ z1aO(K=xBSET(rID>_1qpdad}gc!Jel6VwtIru5vSCZtsia&eORS}CMGAf0e3X9hSnwklTH#@ajl}ylxp!dRaAuK*6>YM z+;+jvCdkZBo^{>edI$-={rUAnJMngcuOMeY4l(EXf*08jo#<=a$#{VTUu}cejk9mar6pr z!R3sZKkYZ}q zLZ-dm^7@iIV@O3_+}lbmx^w3aVs_1^0HzAQP5bZg_%U^)dBKq8=l}weTdBrn^-HL- zI09hNjmVcLxNI=B-@wLZm^eRhl18!@rAHVCC69*^R%;&Th#Y~udaDi?MN>3RYzk7E zLgEj_tQ0mIh^c?(1a=0BUaKhoCc5`)Uqvg~_wd!8hnKG^SOud?R{ZqXv4XvL;8@%O zsdccFw8m}Kp}d@Uqs~Fwf5$Xaq7W`(c+U=LX)^X(^ug;4^+er$UVlmun!Q#4*bIM0 zr-2#NF}Lm?xR`?|7;M~Q%oNdPoyOvKe#G3=E42NjsmF;f9J*+Z{X2Ia#x%%>q_hBh zA&%ZL*k{s#_nv@(DL=|zKMyAErpCR2JjQy zGE>I028DMzz*tcAp8^4%Uw^uOwgSG9eELWpInLjOcI~2S6Ng0c9)iEaq_i*uFd)J( z&}0}*^Uq0c?}Got(9!$?{nUX_72W(l*GiM$XGzofYJ;^U67k}J+_%H+YCUxebwRRGnXL(Ot$_T`c zv_h$3{`}Y155Dfs_$KlYQb9PrH$o(U0Uih8MA3k=nW?E5Xau-ZH9S03>ZhTRI2KXm%DDZ$?d_=%_ z>f*y7-nYv{Wx!E7>B&1tToW-VZaCyDiG78ZlWcBC5}*#bOH^;2?-*9uoT2Y)}Hv0~j4qmbOTJA$J!YEf{v`4N{`7Vlsd}7eGC+JM3Yr zTNo0BErn51hq^q*X9HQS{j`G!K%B`qL2hg8zt0^>`U1?Qpn5_QU^XfSSrhlXL!6O; z$QUka7&7X-01w3XBuqf`3^kuBc4U~JpC32D$#Gxr5mqb$oG51qG3UQUQsexqh6bc@ zcbaZ$#>1ZYR8Ytz|Iz7sW< z5j}+g)`80;UkN2^7SlYNxJ?3bHfxYnhVZpWarGTOz$MjmcAg;B#NgjkT`tAsAcp7I 
zw06ZvSK)Dz_IEmD>)Sn=&k^)VNtwKL2eGzV2vRn$22Us)k}=#opbGDP9U*e?_JQxp zc|VY+Q-pj8y5ib}6f+HahXK?97?!%{jZBf2{rJmJdv21_Z4TqUKM)2(GyDis17lG+ zwM$e~i;veWEIKwiJlv8$iv&KpPEJ`bCWu6xc%_FQpjW0{yH*>K92nqNm4@l>*#tYn z-Xx46cO>tUo$I61iZp$=oj|0ESpLGa*Fq8iW}`-t(2$Txv{+XTF>xQkhC@(Pm(do{6xALsNG|@ zewz=^N=>}xT`#)3+rT7Bp(lXPELb%F2>L*zjw>lC72300<{*h;+1zJgo1T)3dEd!0 z@m*V7rYLyKM}H=FCY$qECO1}t=yEc=BPb|JrlkPe0N|60F~Bio<2+zuL&OI9Iyu4Z zqybug=U@Hl<5Ns74dp&sh)V#~>J5tUYf0rkd*WHiexVB-IGOoGPU8CMY{PUm*~=xP zMZ-dfkOqMFU6;Y=I$uYe3gRAu~N`rng^vO|r#R`=V_A3t)L z^6K7vl7TPO)?#&hpVq$Pvz}o=XOU|vo{@z|=@YBHFzdf(W=t9*SP-I z)evIzPQUQ21uTf&zkZrO+I4{p{{%h>FV_Sfh+8M;j`7ShF?bbKpTnswd~nt%ZaoQG zdh^z;eB))YcW%}7Aw1T>8k5uk7+#wRsZRs!#I16n+f^^`Rxm6jS_Rdd+TA8(7uGZK z9cE6iKqI5#6d8TQ1pWYJv-gHSzxj$~pk(}AhH@*hAxz=IxG(sKu!ljx#C!njF(jDsKYX>3ebg~aS=wY=kVD+Wct<7OK1hr=D782f(TYB&d! z6|(wNbke{b=0>Z*=h^wG z)>KeqQWw7`fIl2^l7-ntGlW6hV<@-u?6=%}lhI2YScG0x#zq{_K?Sx@O%?g>op<=b(sAD=b^g8vjH215Az}8{p?RSz&M+ny=>8`0=$JR4PqVm z99^_i624fHgAmi6;RVk9-TmQ6wA#+!LQkly&Zpn%+epiDg1*4O62u$R;XlV!k2WRU z=H|sLJ50aJe18#?u@)?vKY~RLk8dJ=0l7=WV<&$UAi3x)%e`-u8S~_|7J_^1)mU^( z902lh95MVW`!bC|pTy0MC7Q(NBgiEqDIh+m$@dl}H%tr275{Hu3!h7de?fN@p?#x$ z08m0l;khKF$y4EFbCns&WGWt^CdnYVyS&GN15pvtn4rM*J^V^K`7@HT^ZgJWkh`~m ztto7EEFO_4McY%d@+h^l-*Z}RjeE-ko-sdiVUjJNp~JWQOoLn{GUN{t&o^e1(yYpt z=(8P*1j8D+se7Lfi;(*_oU3H;;^d+TjED_if4(9%4ZI0IC#T%e`(n=Sb0|fTCspQH zP&*be@B5@=$Q7dpkj5QH=Kz9QLO+EZaTtd^`F$Hfk&p`~gyb<{s`-d11Y5~yk$;rL z{!URO6UAxMh2Z_sa;#ComL|9n>R}8Y)xLxx? z*~CMi5Pc9*k&pMggk75ku(3v1zgTFa?rlB?IFr~thr-Xy(FmNo@?jl1I#f!K*mS^z zT=MipPeNs%AB@lE_q-ItH~MVWAvvM64u${bNbO;; z{8(y)4q4Mp!h;j)_q-R7WfE}V^tX?k`1u39ev{CzPy?p^zO+ZaJ;N!iOF7dnOjy;p z9dU$58ZMtDI>-bgQ%F?Ams>O7L-=+MX?JA{X7&+?x`-GLD1rt$p}qsrPe6u4N=k@N z<%=*3&IQG^z)Sti?!A#&KyMLTmusEiQy6|F^|ym4sB4B1bDaJPwD1l$f?jpo_Y&5Y z@BY;Sq&d3_A-vE)?FLHB6JYi4b$+pGc#$K0pFbXjZSDwXK-49|z*DiAP}vxWz_A5g z^SUt@1R|r52CFBt9?mXvBu-fPGq%7-7RsS){yYI5K^~vHG~uItW4|Iuce~vBx0#kW zey5PsW=}AbP>daiAvaD9%~4%rf{WK8vIEg;o``S}x<(!035CKb9wnig2sCxznx>G2 z+TyqQ%U9Has?Y#BWdD}GDc~Z~(e-OCB#v+~3mH)%fdJU2u4n8T_Cf?7!lNFwO9k!X zFBh6d$1CKk(*8oo_sr^YBFMEE5T{lEqeBu9j1B?;C)^}nld$%-V9Nnl8YEUd5Vg1o z4Z!ii{-xo3yLW4l*b`<8NF);nhr{silffvJmMqEAaks3+Mu~yJPsZk9letvzLIg{V zFq=47^wpn_yNDy5_dFX*;?3_vh7~t!;_W&BAMfvVG#Pb0oLd1lGZRv}SIC+8czp?G z)YH=gTsmW%70#gS#d5X%E(0-OEsjJ+Ms}2Wl@V4LupsgfbQi9(?m@7eCFn3fZOaJp zIeOt+Tc1mm7MN&1hoV6qUvoHu3q=Pny!(wrx`vo{YJyrmXDLKE@C*%{*{S(;)xrMi2Ml zZs$Kjwaew^EhQd|yuU`%+XEEFFyC%)NbFVe(6Y6L#rVF(SMRb#?rrhUf1!fKKYDK@ zKJ@R)X_43b`)7566#RXauO^wx-u3VJU@zCh!cvD>=pQ%*WO_>1f2 z|Br9v|JEwWNn4cm7FHa(h&yO@{eNrC{|_#;uWtzH1obU6v;gZhb^u3U4ybIgZr*%} zrQj)S(*N>sOvTuUMgu^+whHoi94+H$rBWgmLLP^ovZyF4YdAY+FXHWT;_uLg^Pks0 zy9MizXILlw^$iY!cS>v?(OKYSa~-C<*nefYC;W+Cl7`v8dn={`L~(Q6p*4g9316OW z{P+Dp$p3fu=AN)L><)$tgN-6C2l8y7j-YnH)Zaz!jPsFt{Ca?x)jK($7A~%Ae?PDw z5Z|bdrutSe&VOBtHtxgz1dtKn1IG8a8VT>&)6~KXu=;wt-MvcF;P7xg9}xsk06Epo z`1Yvd?Fg5J*~}y!d>$DwPG^N5!TJFW)@H@jP90z~+eM�VJHhTPXz59k`>fV!)2U z_PiI8TMl+I&wa`!8Znt}YMv1TPFk<(4E(AKIynsk!+@8(`yv97*5E^r(FWrS-HFg% z>scW6(y{INqGk%pXakYFwtK$eCj@Y@FyQ$7!qvPBZJ7l|qm!cf05fBe`;1&%Ts zOKc(WVCwhRN1feFmXP|Bg`I(Vb0llk92%RU9o;ePOac~xQ2k~~;u1_VBn&=6Ts=UC z#`PcRSoSsZHn%ny!$1 z2|vSJNGeE|&dN(j|6!l+wb-%nfdZ1b$m>cw-EyC3EB2WwJ!1HojW7iO1 zYQz*zv-ctP5{o3FXq4C=rNLutW>bo9=-Bn0}uS0nva(cx>%wgl-&JdW!( zW3{%GC*-C9f~86Q>P$sfI5NT#-yba3-Kb;=8`}u zJwxRs;gpX&`sQA5nmqvdoCZo(avLfa=WUnfEm{_2>bwR<;-5O$41^fr+ib4Ad-slSq&m)sKjCaFEaydq9bWm6}uOPb+3{YV|a4DSM7*xuFuqc!YlS;&+Uw`7S zIU3@j9rR9sqU4oLVjTm;XaHx70F1DW(f$7f;Xsr?)i{`9qYyk6tmbi~!!9^8Qr|Wb zZ)fBWt;9npOA$gvVCw1mZy(A&wD2~opl?574M{*stjL6sq^5wPgj5N2bUF63xH=*a 
zC>AR#vOz%P4$Wg-={=l53no(kKM4LeR>j`xNlPh}0%IY4j_Q>g-2Whf2%Z4<9}V z0j^gD&s+Muy+x3q+CYN0IDk)zwm<5RgUyc^9uZ#Y^G(nGcP0ERh(iu*JqhOdl* z&Y}T&Z*D?wrDtW5Hi9H$KH2we=pF=(+WN|btZw>>_d1<~{^3397Yxi9@ub~}cm@-d zZE+vOR0*_hXBQ_r3QSK&2fRfsC_3T%OX8D%3(Ceg8rM;N2zNFJ*u~kEZR#Oh3Sq*C zdxAKCJi`tk&E0Xb?#PBou0o(aKx9y}IJ>IiLhzD{Mi}J)p&8%){hck=+6K_`p>N;a z2MW}aGM&3kDFfe_*7DnJ?<^Q=lU_fde9xgnBnm$TkTtPm}|GAD{`zCh#ngf`SNn z+b4LAvy`3wl$ADQ)!M|8x zx#0Cv-s=+?6`H`sLbhuQ(aU->($7s)+A{}_IUv3g#JX~`A!%}dvuU@|fJ*frAm!J_q z;n??`P0-5amPVYFVZ2}5x=|4v0ikLDRhqGUg<_5jYYoA6GxsP(YEL+sk&77K23LW- zuvKTU3l~0e?o$mzXFsat06>P%UZjZkLj|5+%^YwD67B@TO%fB)WpHR-hQA+xoG!#9 z+$w}pC*0JgO$MVZ9f%duO-c=+-8(Dk3*}oh}dnfu|#&zYWptwD4IXc3SJ6 zw+GDjzJaAphez-Dc?nNI%6-x*?Cm`?aK`;?QK> zCkR#ZOWz_t&3`majsmF#XyHu=Z-L$ry0{OcKW$J=6@wm&3vTJe)1>EQ?x!{tDTrgG?02^R_5?B%t@ly z=E5mrIS-KA6W38jU5htZn*^*MKQwaCxikYH%~+IZ11N|`+=x<%wx658@uvGf1Pg?Q zn8T3ZLY>wC(Tq4aN&@?m@Ds#rA~cCe=LR4^N#Z6FV^9ruxUFIBIuMfDe?vIUwokZO z=uoAWQg%y=b+6W@U;9(*2-O+|zrVlh`tLf|`TfqhobUOb&s);#IUe`N z{eD{y%G(Z0*ml|@8YU&y7{(Ivnc>i(L(p1$foH<%pwlWcR&96kxCc^gU;JjV;@4na z3p7&mZ!y;6bCq4^mGB{*FltNN2$ZdM=w~H;aPY4-v+YU>w=~|fDLC0wAsnyAVQ~>) zmdS3kd2*<^dKzZeCL$%2nuxF5w8r52A)e(8NcK=|E0sxk`Oxph?a#nQySPvW?nMwy zkFgiQUI^UNUYtR533sEQu+ZN?OUgE2)7%{QjK(jnl+Otwd4i{Ud*4kB=9wUtj84EJ zF0tP9ahv^YoK&})@1xgeD|WHP*2q8p%u$Ucv}rJP65oH2M z=hb&hI6@Z{KH}d(Rm!d5x;sv4%>%mxeFa99Sh?Fa0SVTJ+OaDiM}QC6w-Msf3{=vP zGdr*9!-w;5E5|=eC8#+`cYzy~ypABw<(aLI3LgEa^68U?BRZA1a!A#Ta}&Hne0}xo z{}O=@H}`17JQB4uf4)X3Glo8Zv+dpwScU3fdFo9I)f-F)k3N8S7+)4H)wBzQ0Y-Jx z{J$BA&>%x4kR^I9MShwFVaJY->hA=*6MK|MYd3FBPBo_(CL*hb z=o79UA#^Dy040XRVXA~DPP^FK^xNes9?*W`-ld?TrU^K@Dofk}!-qjQ*=qORz&)c{ zy;@_ur5|3ftQ(bd_z@XcMIh>n0|Oq5dqM~hQkrpVU|I}^wgkytnpV0mjHl`#ZKj7P z9pkuRUDRXrQvPSD-s*MQbW;;56w(FLoI>1V0#rlI*nnff9}<7CJoW;Jtkk~?Y{!n= ze7spB5}ktCxosvoP>ql`6f?K6yT6JO!u2w3#;^HT-XOG+#~!Fqi0TWfnNtN}n6zUN zQgjo0D$$DPyw%S(=^*PgC6SKbJtX;HMZ%lq&O^NJyOWh+q&2@JYTX?o=l{W(yj0kWBr1{)QW^-Mv6 zsa=)P3VbGnBe6z%kk!x}k8>da$!)_6k#yXo{R*C_UqKf<2AOcyO5j}Zgb%L}XWuqQ zQL*pq-!OBtkII+>_aGL6k{#EWZv18 z6qnW!i0R<(iWvE<_Y$2O)+WrYt^~lQydi?vCNLh7G!X=eFn|sS+i41+F`&CJgmfZV zCSHrsVIkP(@yUtc&JKUngT*(PfnOoJ5z~}9&hzw!7Y&t1Gk61L3)0zoTzsi=$8G7d zX)ej(UT)hUU@!vhu$h(-jx*wt*~t6)G@5jd_FXhIWG{vFX$DkTKMYyL_i!2+B`k}K zJk5~4qDi|zaJhJjaFRlnsm!xKb*8l|j*}5gz;P}^F-`X7fB+{g5&jaih)h6UD60I9 zr0<&fblb+%9_uW-^4Chri<9UZfUzBdbVdWOniXa~F~UTmW)P-~Oh#Z5joZouOQLgs z73Ra2aU8K{1P0h~@Dh3V%ZUI;c;vl@MNT1_2g2#$6~lZ>F$_kFKLlH!b?b%|z0kZ7 zpa29E{%lH|jAvklPPG%$0~ET-7%NrIq~ukA>Vi_iJhz#s4x5V?8}$fc6gG{k7bor? 
z1Z|CIQr_N0??SL+89uysNRh+E$vCiH?VSg}ub{6-QB~L)Y`K z0GK>65IT~4xO8;)&%azZg9w;H=|w^WDc5t@Ie-QX9VvlG42%s?;XgQv>7^B7S!hOJ zS?@hkaxE3nsnTkx*B$^|+E+pp`xwF1AUlOzihUW6BU1x>8yl**Fv1d)2(Z7BtP z3^qs}8|C|n9+Gs{^%GG0*FeM;RDx^?BghhEbTP4HSdVGN9t9GB3YH}Q4`lbi;@e8n zUjpm`0&INpJkag8G)P=A_CgU%B&(H1+Ez6g(mZ!|+Pn$j~Ml(<;FBgcC^N|1%TS0D)`x@%Yb3?K@ClK<;xpged?c2*c`0rWTG z0H7!zh7uw!{a^#*1$vX47?e_Tf_yb3q-mB54{FHvof%> zqNPQ9();1!tapcE+Je4P2B>C=&|l~Ov;ZKHLs$%O!$Ei#L1LCH#T|uY_==a2XlTgm zicuz?Og#g*{O7puxJ+`1`LD36l2;RjDCZ{AS#E9_ER;sb{YJ8gnigF_6aGdb1e8)f zFCjYuQuqb40ueG}Tj+4Y`BTc1OZsC~A0iW~Pk46e-{5UowO;D$xtIG_Qf=ERx-&85 z$;QJg4=BHuq)ClSQu`&^m6xF`%Aq9Oy5W)V#={Yh!VceC_vqf?zzyPGSFUVP;q~t= za2CFJ&4=mBrB|1J&gXJ-f2mLnIk&y4eIlqY`%WYt#82uvl_a%jv`IwU!n91ax(JkQ zAI8=t+;If>dLfKdc2GjWlWIXw2oN|#rAd6X66D#jNMH-7#dBjnQ#C(UoC07D4W&pi zvQ`$7jg~7gVt!N)9X& z(RXz?&UExnTjpxyK^=mC0qyPyG6_%ycipqVW0N5F5BU9S*#3-=TSegcNkqU)#=alN zbiV+8otRa6&2lqr#(!+^N{6aUq7DJ1?1_zoD$)qzuERqi4l;1x5(39XKKa3#f+mZ| zV0fj|E6kPFrv=Nz5?liIE}{&A9SXS$lZP}Nry23um zi^4`k&qU2-5!$sdYE4vu0O=;7{PVR(nV!8Xj2YaUNTji^z3sEYiSWyYNTSj4fkQF|u%aPe+D1WrC{)SM zk#AsRp)u(mw<&15uBWGzQ;G7BX)z!VbyrclIkpwz79pxHQ50E4H8r)&Au>xQJlPLI zL(vdVz=tKUG6wB~L>+`V*LS-S+6xz#R56V;K41spJ|Q@8oHyKF>E1B$a`{;F`l=-0cQq#GW7XLqW0h3`En*E` zi9}1MuoLZ&(+~p)`i_^9+5Yg$lBWp?xs*=^_W7rKj$K3(5t`^&gr#6a20CbbfC;B= z$i9=S(P_+{cmG1{``@UKFLLJ*NTx&-EXf7GCC|?GJh#I?KKWJGYvBDxo(9~6L`@D{ zxGc?T^!CC&lcW>Dwn88s0#z3xO}kT60$Aj0*cm0{apx?*u=V;2A##mUOLa9GKa#yySZDsMY=H_u+GpX`*rmfe<# zh!bAM!%96kpoAo-l%HKJA5--zjRcyjK#DgIt5~>l$X-p>877pep68bbQK}~soyePh z;Ge7JLsS^G&MPBc?&1Bco zVsoFMn@&GsyX%N>bacEP2sa^?$<&XYlvs$SMmyBOYBi&f^*D0js)fV05edqG;?=Xr z>K#iVh^*90&RR(G$+XPdrUh*+vD^Wkf`XJa_{1?soE(wXHsaKn2A>-CKN%lB-I7hj z{{}`)4lo?z?GFX(6u2*Qym(vj!^Bgc=`j17ovH=S;AbqM=3gDzsOxs@*l`?wK=L{L zc|{SJfFJ-D;T#lJ-&e2VU}I0x$W3dXhz=ga3!lijYpL_V3#&*kw5HkZX6_b)efQf& zl1=SF;6MSWviaeM`7U`H$7*oKns|I1Gd3^?TNu96#)a;$SqGJl3(Ej-hB`v&EF8xL z1E(&2FUtiWhseAw*!M#`^o#rY?nk0T`ovSu2g7X`lKptA$(-&($Tw`mVeLh+);`ul zP10yxd{`E~Z*i_cO{umS1&*%fLk?I)Qkim|rXKxJtdJnyTLSc$WUHf8DmXbU?y26= zum)uhjFc3Ohqc7z%gvB66V6Ao|tDK3m_o`$4s^~ysR5*xh%Axh_SY_6n+Y0{$Lqu z#2RBfpB&mpmJs9~$vHpBvGQN8rP{3eTNgI(cd7nGheMxN*6#!rg-|tA?&wWxN`j=Pyt6`epf^3L;)s#oxW*+dv(CJ<-ey`ciM^xa6Q_4#>O-^l{*@Pz(Fhw zl5Dh>R<8E5FDlYPH{(*=?^lnopIjy+V~W!h2dPqkX$Yr@cl=ak@M#rpKPf~{)}m&V zkEqwq64)E@ky(;`YYY_z`d`+KbCXSzOeN@_N!$H<=xdk5#8X(znI!Ds8!t54OZ3}r zI&o1^7qCSpn0D2T1)d(wIS)`w9;Pqo&7DikWkSxVqByC5bSEC9;%5EjXR5o=3NzWa zVH^TdNQCJiyT^2a-0W3`7FcEe71_by8apf@0N#&dB(Lbl6OO8{+d9DqK*6*_U1;5o0OG5!w;Pb( zey8)%K=d$T(hu{x(BTMSMIZog%HvJ~5LD+eP4jDnb-2^+aPcV3*vBd0t(%pphz;(z68;baQTIrZcU&@Y2Wd$F;W zH6CK_JV3meI8~uCRH(Q;n3=&V_L4VoWetITtjB0pPHzM!3Ng|STji6M!*Ox8PcP)M zxOfea_rytkd;*Fh%{hY8ZoN1+J(7r(I?`FFp78!aBu02UXuvoeM$woq_Z2{HBxAW& zYdk|mH|j0(1;_dr2k#9W>#UtZdqsva(0Nk+rFhG@>iK$2Phm84HJcB%AI&A*Rkbb| z3mQe2DPC89oVuY3#c(0Z0A$~Brh()-aTh0TgP=`_$C4tuAyMUP+K*)Vn%KjRYvYC7 zj}!nBcEwp&ID50a9zCilMl2Fuqr3{b&7`^jXk$NnS(9-B6Ov2KM~_Lvs}}`a>W?zh z>jQ;!<-HNLR#`LPnP$ulN4ZFi|KXp#}l$IKI+q4`uKb z+J>usTetpqdwvDBjez7+ep617eYo2#96%9OzK6@z3?N+peEskFH2<3~NtO}NB!W$) zV2V#`xka>402;X6A80A>Z&>af@!`W{unh%}dNJ1QrgM{+FnWBXs&dA#pvi<;hS^{S z8ANYP;Ex$Qf!-zj#_~!i^wPBL_01!0QJcT;FF9iKAjx}{e}3sm_*C9_UfTVvw3~=9 zNzX%a!_Z?gDXtaS`QJkv>WxL&&J{+mCDHybPf|*yG466X|*t|*=4qixkgb!ZcTL)#; zBosUZP0-8hTG8P6-`lW`ADb+sBx*UP;ngE6>934|!XpM8Q0baFE+*M+G<){M;}tUd zUSJUL+vW^OlmO&j;L=1DW_9#GQ83e&UNZyWWH`^!5AsWm? 
ziZ}QdZa|E|GGg`Z-o_WNULD3A!5;sn`&9V+(;h) z!rlZU5F6SU^Z-Ogheky~cyOhuRSk}$B3~}v&;|FB$1D>|#yk~BVxYNu%iBg}m{AUd zq3pqL!0yD}4n1adjx+H$VhE&acRZ5=>N3P!dU+o;x&4-h#kQX0`6M)Vtx=qgP&Qhh7%*LaR2^)9Sg#r+31_E{bP|* zO={w;hV7ehc`Npe_`G^`<;G|pOLT1PyyYcI_yYoxkT0gbNz*gXq!FJA=gZJY?%4Wx zG}e+NL#?ZixM^}XPl9Cy4fkWsd?bm{tT_wN6 zCH>)l*E)3(8dT!b`g|juYBC2>sl3^rf#nv__GcJK82J|GKlc&v5CJ0lMcZw3PjHFuVRQSc_6}_TOJqsc5s!fegFSF z*m0UDb&T7ujxc@G96)8j1S^jPUTzU1<%8;oSDh$+pa~;>I4C($e%W?J;IHg)3kOrW zEn*du>wmX_sZ_S)oW(})}EkeKCWAnEzvZ7Y=?ZJ<1rhv! zOS(%f_wtjxhlnCD+cd*SE2q3)`Oj9d^&UiT0~bM!o|5{fF7IEB!R*TskE?#D(pz`^ z`^}HYA%K!)4f$v;Z_fSGG&3C(5C}*mzg>UZzyItjfaN85_)d`MKSgTmHEf^vGzph+ z4q#g3+|^lvj~-;KvE1tjlVJLF%Z;_p#vovd=JF_P2cjgbm|C-0=fBn?8V}l|!qIPx_HO09UmM zg=B@HnO*FWZ{vGb2X5CL9Boso=TlO&5TT+fV2m^F2x!@<-^aZvEP3u|c}>@eD^us) zm;#x6ZJeB{22NPqV|=c|Q@w3?!^+rEEq}_n9t(j?pn;e*K6j5R>Ja-um{rTIEW49U ztdVb{RfNta`|stoIMv2>-Y47glQ*Z^_4ve;7QRnuDVgOX&pH`8gbUU099knw%eB}U z;a8j{8Pjt#db`u%h!8wXC%n`BgqA_bI};f~2R-JpYWmxs`qRyX-Z!(2y4Lq)Xq0zP zMf}y0^BVM7oR*Wvc2equvacMs&KEVgw!`(=oF(g;KVQ1X`_roh5t$U;ZesnWoU{-H z@aW(%&L&%Amp0trSU;Ik?N$DT^(4hLNVH3p&pJ@lM6mC|E4+3x=@2_WQ(5;I&6bNm zY*B?6L6sjwTi-b}HhD=5gP#e4Bt$%UuG;)9$1|Y=w zyXx6k78-#wc6Ke7&2sw1*1E`Ayk{%SFOS%sSG(=!y1H%CK%^^gk*2!A06^B*gNw9W zYCGT9CFuT~+Z22_oQ9qv(wMNY>u1EO)dTCSag}8_*K&EGxm*sy$WV)OvbKPYHW_{<4B%92;$JM|&tJE0;VNBFgxri}UJ`H=J;`?g-=P(6AcpmyB)BuIYU9U zx#G;pjA-tgJD4R}9m2%DxcD`q7lu!qAM+Rq6L0(CxezP#MZ1=2YII!k;R{nJIGA!K=Joqy zUXi(3>Z>%H3-7Y9&A6(oZwA>HF|tdug*Ci2b7^S8!^}&xbyH>W{JxHMj?{|oJI{7$ z)*Y4cv+$_@Dz(?B-$QXE@#oMQmpsFAIro*R6`y94bor)vyOv^t4v+Vn$~+cq_m3TU zcv&$=+DybXgrPm6wJ)^jXHHg(@7WvfZrkWuPD?afNXaI;S&3fUJ;&A@-F z7LU_UQFSR27tSdz*aU_49(LbPufu^VXZ|()WYhuyIr*v8H)@+(0?ee0K8dRBG&shWb~z#@MM9|Cskkub+04vR$BSqY zv8!TVE?lAJns*etefV}j-CQA6t9g6)F6sk(Z_Eo07`xeJdDaFzRIvEo^Gf$c|KdO-=oG$tcmUzL!=2jkM zE<8Y(KCtDPj`P?SPMrPw>jP1;uBuy6-1}|4ucGd2ca~n-{7h5xm!qT3yezrh5f}A0 z)diNcSr})p=}ghqRw)W>&mUn{_YKi5q*7rWc*hZ7cYd;amYdqqIHhT9!FE#W;lXEi zL!HwC#g-neEG<)Yzm)Tge-;^ruB8n6tjcw}cRXqEiFx) z#6|?oOaRMO%^->?yzZ{WS^A}y8NXz2B^O?0y{YRzsXcu_@S`A$ZQP``n(UPS!p7vp; z4@<|_^l-Y;mKSoe;~f{~kI|R0S-&o$uVAmO*%P0qFRc4C*!fNSy|15S5F-{8Q!f9b zG2hrD;N&CO(u2?Ds!G@9tz+aYbbm8d9@avOA_Mds^I05HAUaFc5jKW|_nSxrA4Kqx zZ`U^vrjh`o>X8X#q-pw&3VW)~gEr)IRM>ac=X=BY*zZ_?($!hI1%fsx{$?GF5oI&J zD9Fp3COL&Jd4JBz#%D$7$29E*qV1sGCA=+Sfo_8FSV3KDIe5a>`R9x3GhADjDj!Is zsj-J_adsNoJvK6XD2SnQi9NBT?{&pD1|jXl>UrM!zO1EL_VTvIQ#oR*8tbmuQ%-TD zxzl(*xR^N96{>IJ{IZqPP2i9i)#RVO)4Te!Zx8d}F7rzJst)rp@X;C3M;I@BLp1Nmx+dDnif7>jezK5waz< z0-HPjEq$aCgMUyTJynd%*}`HW$PGwNn#yb@Nz1xah>8<8zcRI^AKMopKhg5VL4iR| zB8Y*0>5s@6@y7`ogVVFOwQu#bjIH7>ACpwqt59>W7bz`8( zY#YhAnd=yHiX{pQDk?pmGzEEiqP+IAWxf^=v+093rf|&bgjOQhE<1N}u(}%w#DyAF z%7-NR4-XHk@n6n4!b)E`(_enS!k>s$6Q%bqop;5~W!Jop96^AjJI9jc3~NX=4MRDw zX`e_GO{Pnhco8yIS7*#m*Tf+waJ;D3?_6HTjhl*HEMv_|dxn0`jTraFU9TFszj4bz zZ2RwKOX=>*T4Q}drYiB}i>6uiZD;+1guO?%DAZY(YM3>LMkfuP1&SB6&*W1a&*r|h zeM@>>R!2LxYWhs;hji?>zbaY4eWtG`SS@gxcJKfdcMqCEopR1 zd%m%Fu{r*_FBN*bnwlUPVGKrFjT;}WSY(Vv@op^I?1^&-Yjb};=}5RvC*@nI;rE@o z?)Gqt?0PxnUYe!$J5HK2YYg{U|K=JQ)%`U0cT>lYxlgZNYTNxSWX4{aBkmq?K*s5j zT%*a%a71WIwbao0%2zRy&WuO(1t)*aFdT1=H;BU7!daau*8BLwJ4)GapZGg5#nltn zGacI}bVltRr&X64pGVE#{OV^cxo2V)|0t_vPKzy+Tt4Z_nw|eM+u`Ml7dK&-?)02M zLrUlb0+8Vw%n|~9LFQ${<}G@D(A3KKOacP6r?sk!ujNcBoDLvCWC?(L2XPD!5jzp9 zLRi>a`;dCETBye2-@F;5eSt?qxQZ9AY<@re47Udq%$>MxSc2b%B|O$w+-kzflY< z_uBQyYi@Q)W&h$nN?4J>PW}1_`&K{OifiIY&xZ$F*e^~ibo4VQ1pJZdb(fZ^JFmi* zG?}|6G+Kp2{PL4bgHL~C3~PV&JpAme@xrR)nbtZUOcGLy6|XqKdz{P?^|2alAE2&7 zk6eDH8BF;=1GGw;C+hk4Z6-!_!YE>BCM2_AV**{XjQ|8FXr#r!fxKifZqJbmb(%8} 
z_9WrCt$w?A*xhdXEt8{HR(!opSMs*Qvv?uxE)9X>ozZ;ax9M)zhnY~bkWq|ZThVwV zG{)b__x?0r9g6YQKCG#wd>RFe9>~JE;+}x;2?KF`|~jj8a8nOjm-PJ z7x!~EUi8-T2dOhQW_%(Lnafo<)jX$Ljou?(5{CWBklW~LiCWnDF592#YvRqpd&BRp z;t0(IpZbl+E>u@}GYjn7Ok}L2l2)Ca(Kri3b#GQns_w%O#P9;iA#6Ut_N3a}9|yS7 z{uB!);Fa%xJ~m;|XuFKY6Z$ z&Supj&1tjhuZ5etIDJBj1Y`=;g9~SF|MDvDmB$fkZ_0&T&iqcz5agl=RX8(oa@NOXzmztVe<~rI}{U|*NX^`g|JoN&1lcp@F1N>vy zJ{%zjjbKNe^_YM0;?K;p6l6sZjUUm&2pqS%0oAE1fd`mAho-v5s=v8SS022( zmH|XO4wN;3I08Pb0Fq$JrIGXIVxQ!rEOyzmACEQb_a{tjXOYOBks`ZFr~23yJ-?g1 z*0%bm?m2fZoDi$-Y>?k|er};xdeTd&)S}?t_>mLt;iW0YqYHDvx6KSQxhZkR>*d}h zhgpeUp)Rz%Y(EzF#=7WBZCvxrIMZQvnm=B?E?ckecH_Kz^I7~#zw&4lWeFc_9W>Y& zV15cMhF8SMLGZ&7(qtf6{QPpp{hf7OOtg!1y~xMu*bT@ z7$JM(oLumR+ND`^XD-h5iy~Z|^65gSDRCUz572m@juB)4vYYu)gz!eh#UoiD!hq}{TkSga#vX#i~68v*y*R*nPQ)P z)|PAdXte5HL+jU4@ue^KHGL0yo^Hi6x;6G}wMC=0TSm>u0-eE+5vNi5RUgl*VmVs`(;PpoKy{6;$_ZESV)k(w^I9WQW zlwx8eN;{$@ht8(~I=R$WUE?#tlOJBh#WiOuQ;zmVpEe!{=+N^}vwAa&v z9TKm=3lV94+%x^lp0}y@jy3D`hjRqHv>omq`)#p*V)*qvKe=yp(a)_1o(oyj@jqqu z#n>bf`}7W*Qva|1re=lAS>ejXcDRfOyl<8%RhyWc++;BL`NUj@%X3DjpS1Z?@1`}L zj`}GM2HRueZww6LWqfz8<;aBf;IVt&;a7C?N3j}EnJ558}8iQP}hdj zb3oA=!Hta;NjSO4~EesfWZC`OVf)*I zohjXsuLF1^GOHbLBvtPhe+UH3vKn4&TprlpX zPlNGn4Yf@2B$QAHdRG`22T|S#wGxqKem6Z#4xjO2NRofkm2*cI0z48Z13ff}aKtY} z&mb?~n|(m(<9E|ISX|ZSzv)BvhrgjWN<{SQG7x9+skC(7F$i~z92p2<_p)=MPKt+U z{yY1?m6IrIlGOm#4Drh~u<9B)6lpIxdMS^UP2`)n^0+f`Rz*qQKc8vUhH%JZQs&+)bdE;jmX>suiexRwH9iRRa+<;NpEmx$~01(z1K$b z#X83~q`KAoIL!_w$C;bmSN-HJf|g5sqhxD=iiC|G{Gl@BMf&Yc-Q$OmSkQNZ+i%FD z?S5lY|h)uv1=xWW6qSJFgD)e%?s62?MuJ`cpg zI>Z&I7m^L-HN1u-#A@C#xoG4XOAHzv@x3)o4GnLlKd%&_!vvxRj3xsF=`wXiQ7pTb zx_Z)v2+c{57qJf#Sq1FQLZ-Qm~=)U_O8S$&o`jq#K{?sZ~zFaEPsY1JpTb4MO2F(!&bT4Ub=a;rO_TP4nKUJ zv`y#jhPP*5Uz5LmzfI1I6*2=zQ3mKL-X#D9n9JND+2&ZR4Wg8{M2ZlxvlEq}udgqS zw{GZ{u7}wI90;y%2$`S@q_YbV+tzedW##I%=LAe8!OKBA!MIFOx~Pp4#UVBi$nF8J z_>t2n$f+4bmr?ZR$437fBbj@&I{CFu&PXaoJ&mEEtkc^vIUZ+uFMi#2QCFtL)>W}+ za#8_0Nd0qF!9Z*JQRkZBikyNyDFIpe?64)0_!77a`yn(SsvT&yzN_&sP0ATJ9BDnc zah27I{ntgWTw3U6&!)Q&koJuIK*cW=hCdJB%Zx{ek}DD2#O+w=sM#8JZWTF>2UWeE zB+x;>$p-2lTv7l6_lWosxkcG?aBK98+pM(eyqJ2T&|KSIHMe4oi+9thk_L|<4a4os zB?bFmg*ndx`7kI{0aW660cCF!Z!QsckS&IsrUUP)U*(N@MDCLy7)V*xPCC$9 zKdM)s$?|J7Gy5(;5x!}-ho|u9*LW$lz%;tm9_zLcDPn^X5j&7b9RwBO?n;sD!YiD5 zd1jkF{8|+Gv;O)+oIE9b1Z^=ArY_@gqD?4i&52Qq83@*P6r#9H7>hYZU zD&2)O`2wp03mHW9R}To&d@3nnN0o$tYP}U(9P{!fAivDE#$}-=B{}2u2%XB%E!{lU zP4w0Pw~oVV2*Y}oveC`jBO>JpjRmlJbJ0(A?xudS6+MS?>QK*D+Q3iVl?PM@`#VBo zDniw{H4x$C`c2h)`JiALrY`eAw5hYhi8oBt)R*MPdofDgJ5vCqUmuEFic zf~5Hbh@nx?^e?Pu1K3>*(e%q2p&G;u{ln~sZQj_xRf`>xggue~CD=cVuwthMt>o zeBmF4Ugz6o=eg&wD?MZ|Q!3dRrf&-D1KZ;2h zARI&FDR#DyCGuy_znX)C;JFzPRS1t^dPO(hEeO1WNo)ureT?A!Hf-Uw*tLXBH_n(m zBMkP4k>bEBVwHdPYj4=*H)=_^RU4LBMWqwqK1oCEV*xyb$c$y-CL{;dA>$X^hA83H ziG>Yq%wdG*eq~)n5qVVKEo-MkVjmAA9p(L~w(on@wbTUvWf?1!V-83luE(rIwjcg| zo>%?u3mn7|Nf@<_mS2sjAcKGvsyi*m zida+75R{EUtUQ=4f*0h*>*mg&+e-Zv>#g|XbcdS`OW&LryXvO}|lBMbg^?$$y@M0OFglBV;*6tT-fXrwq~tsB5=d5S|R z-_s$H(bFMm`28l)ix=3~vf{5D=r)>aUp&ml7T)rcGw037al_pmEMN5~aV`dSlOyAe zt!969RX4e}Gc4z7mA#izPK?@LOiSyk)0ZYF@un$NfdU=7IV%g^xdZ>%>)guK5-$G6IW8AFH;w z_vVIktfSW)9j-3zeA()CYr7OOiQNcFWt}EAtO2Q~ZGG5$?l9rR<|CIi_uBFl8PRJ3~()UW*>Z86c&zAcBm&7Sa1 zR-7)bw|;oi{=5b1=n=ia=l3UGXgV}_Z4KTcwk=>?-Cc#{*ub4Tw?-`QuzA=*SR^37 zim4vK(K}M4vO-Bl>>aDr&Oc+;<|$eVJ`_pGh#6(mNG;bibX8;TR;}CQ_Uw0o2&{aI|_u00Gn)cJl?P-5*myAgyjP| zV3Q*`CxllevR1s;8K&Pwu=H?fj08-nrovU~hEr9nt(Mm}LdW46M=jQ=dwAn}(sy;j zVTEyR=fB(bOvSP4&q(Yf8!h&2BuKQHK|I}}fWz^WkwEjdDp37Je0H(JQ(T{jqFpUh zG|jNjO_2+`At-wvU=Yxns=Ky>pcm^lo(PmCiH4a-Y^7!t)W8fy*l@_9xC}9i(&%rTVeB3QdC^7WG(`UCz6%yaU!Kw 
zp&ixV92$x0(-Az=4&)D+5^DS-Kg{Oq@qVS!@$^K8iqMYbl}BkX3caPx;{J(_T;lNz z4x=4HrAwiOr@fv>CQk@0CJ&n1jdEBYJhGnu3suic4m0-8(XQRua|(VPmQ)WustE?W3%sbcItVx}|k{)niYL6)UnxSTMJXsX_{Tk6syhZ=VoR9Q>x4#Owp{B$jqX)imD#u_N&NNIpgM@dsN_;))z z$eCE_0482O`-2aE|KnBqzp3+olpmaKo6x*i%>Ij(eV?%(SCRUv-2pOI`zj=yel;`B zoqWnw{HCpst4-d#l#SltYizT1Qk}Z}(yQ^WbG79{vE_y-qm5ZxLRynj7I>$wn;FE3 zG;93brm1Y7nIKhi%Bw9cGsP+2skm1)sbzex=a@LUcb)iLThn08eF&8!;@5xxI@k$h ziGs6amL&g^jpNWL6dWK)C_=gB9xX;nQe-|(>FwZb2}?*Oxt_0uqW4D|9# zP?B}q_mjQH9iNmipVj&q!n(!q$7h_F-cg!0SnBNXTCCjmtl-(d?@D`S*04wjsn63J zpZ*(DCZc5b%}vZ+bN-m+Wy7KH{KQqPHOpn9U&$F||6;icIJc^T8}#%KSyEiPDVD;wpo z=`j?idYG zz$Pa$G>&Q0uV=d``DzL-T$z9#6laQ`Q}$ zlMpj_pixmXiX*1;wX^z^D7|re*O2kbx5DP^4i|W%G`5rpw%BEqr*`zMkITF6;lO|l z&$*oPN!sSWkwvqA&!-JnEN#iM;Okdx8#gx@yVAX)uWxT(d*bPzOS#7!b97QV+vfx5 zS_gY4{v1!d@hs+Pp`yS9Oqvr*m+lo)Svwn2Lx&2fUBglH<3{hV8Mo#OblVMf+N)kpnpH+h z=VD>kz?1ls@ewzpLhKgqEa(jJTt7rL#bdGcfOOEAoqfNTj&9k*0#^@V){j*wsHnsuY#=c!i{O=ryQl@LTVW@M-Jina zCZ8-4datDTxtF|8Xq)m$4sdd3F&OXJTeT<3&yC$;Jcp`k(x}a;aiqX`{<(c)U2V-! zUUH~q_Kt{?8I42ZT~6obFArOU@bJ8DicMZP)adytxlT5ERBh5F#Aaa3zL1`>TRb50 zP}LB1By%vmond8nLVUpRK69SL!o0Tqg*-AD`)XA}^R;z3xEq@8YpUBw)Qx2eCbBA+ zG+_1g&aC0ficmi7RTP{~cf^KV>5~g~){h7D{4`@)Th%1$5SBu^Gi>V8XjT>U^rBna z+Q@OZBvRzb#e5+Jqup-YMSrf54jKdc;P9}bqhn!HvSuP0*_evG8y;BHOP@ptgcrle z86qR-XlpcxS6-%VlTR&O``XYiVL;cgtbYHSL;np_XY2F z6luKvG$}{+q0GpD)#8Ka$8Cp)NX8#o%{;OY5DUq@bpils*N%&e}d*TkV##%-(MThs0C3862zh z>=NS_q5@hk=T#|mIb5Mt3^nnx7rcD%>Q!d((l-rI#NW@YN=Qm-M6d;m1aud*IJOAp zslP95Nw6_Vb%fR+3%MD2uMzMi68h)kp}fhEAk6M$ZQ)K1;_xx6xvaG-SLg?RL4;E@ z#wz}L^{iWW?oFB>l~zFBg?ClZB?Q~cdcDNG5MJC=@@~u1kwZO43pLfECNCV=#Hh_L z;gbQ{HF?3n?^15P3Uopz=FFTvTua}sp{!Nmh7;%-x)9C{!v)YRcF#DQa9#_1rJ zyx%&o_i4z|^W4X}$t<6K6fQZlsB6{tx%Q9LD$}2t?X*o(kwAjo+UM!>7*@8h%m?4NJ<*|`NB>m#zVPK>mqi#Zi9p8^l_ z1P*7T!Ljfk7#C%IcDjGE`1W>S?=|=z;IO?UaBQ$3{QGck96de#*@V+OH!Ppp3dtpl z1aU{$7jW_#4c)4fI=!UM;B}}g8iknY)?iMqnuc9c0fTn&>#if4&i;D4Dj`1pON+qi zh$P-k>DA>aor?uK#Cd`&l0sBxqBs+Wtp99Gh$Y*KPn0<4Ys<01Akjwu%G_^`Rn)i$0T=IyQ)2`#zzzY7NAxo1{{&ysj?C zVJze@aGB2|(2JJRTc3(7C8si`rSTp=aYCS+6QM-wYfl1Q4E@`4NM_?J96i_%W3Gad zk}o->59@9KAvfd^6wOxNmh>TUB^^k5@*!~rBsP3NE{XVJ02{NVG$+>8$xFze+Si0r zwo)1s&6{;^MdSq>v&6qVj$y2t09edQBVz?rV4VqIclP9uV_gZG$ zDZc0@Evp(Q`QY8POt(ODt$I_P@`D%Fx~(rh8Fq)}xyAh7dA@|<$;X=NS<1JUEVIiw z{T(82G;>z{X)h^E$QhRAin~nL07o$)z zqjt%vV z>JAdE&i27xNuE!%Cg4*AB6z>)v={EoLE{&mCK|}VZ zhp=DNtHyYxK<9+)>7~@tp|`&vht~PRS1Fin{Ut*KXRKhXwtPoLE3SVr>5zuQ_@^pH zK{8>kyesV2i!BW+b46oA!_6NbAOD_TNIuCm5|3m`{>U+RcXvX~3?MN5&3G1?S6Np1zlmz=3(+i=mzJciTUE3&Hd;n3 zBU8*W`%tA{+!jiW`O!p$Vw-OJxk~k=5dm!}vHLQa^wodbAHQTi+u7N4es9o{w&Z!= zfD_a59d<2;EEitqPu^E4N}#sa8tCPCD=SKF&Ct~JoLgky_RadE{z_U}qkkRH?F=G} z$x_oxXE$Do`tff6rDOe1kq`#$Yr3DvE?U~5=b?Y2$YEJRv(JU(6cnCA$R0dp<(BB3 zq6j~JAPE5t(^m5Jcv1R!@p3k-c4{~vPxL;mC^9w#}psBA0d z$W4X2MesIHTh+I=&Blt36HH2e9=_Gx%>}KAQ)T(p{MCoDEeE^;JtXBbj=j#Id0sln z{*9x*mY1q1J(Ag;u88^AdAFa98nJ1zmPJwnCu?mUF20$#H6&AfC@ztE&Q!~&%uP?# z&{(gSkv%KqO61S}$3ZF9c$E@v3kLmurT3|^p*D<}+CKBnSfU~I$zb!dsFJT^n;bZX z?6e$b_zMI0H0cr=^iBRG98{F94J}@hnj16^Z5r)XPw(8k25x{&Qd(oPZBx(BHzArK zp|G$}rM!pKE2p{oP(z5zc{iOWDHTOiNyAI^dy+R4JjlD%ejG2_EH}&%n{)(2c@8CM zl0Y2^I|RU6qOX$jA$bXimwlVN77v8@ai?&?hygSSj-ARiv}Flmc~z4o3*DZy7DIZ@ z>=J6cy%!IOU!Iiqi@H*vkTUky*sE!_#>+%bO6+`!;Z5q3@qCK$c+oD^KOc{Y{qkg^ zWN>JpeJa=WYd7cOX6dvA`MVxW24>w+0{{giDI+uqbGm!_u3?8UaBy_;+jR6fuT@5V~W!y?E=dOm+B+ie*RzWeR(*P@7woOQb`LcWKD!D$&!63 zTe4TkzHe!g?8~4{CA%WTBwJZS#=ew|VbxGMozWNDqk%-mYXBFmm!=`LFp9-l1n_9Y!387ZYV`5`r(9j~la zv$1fes9>^pbxO!BE-3kBUds~w?}Mc8w7axxn4%o1EA`XquWGk{3*K)!Qu=mzV&mZf zw2AGVHY8RDf3s1PK$xlq>JVh$V$@Yb!#6;GRIKh4r@;OadR9GpZl5Z}u7KA9zWS9b 
zDUgG1L;Bg>A=PV3-J6w}`3}4pd!KQAA3q3asNHbr5zeP_SgO?r1w;Ki(=zm(efyWZ zTa`PrJU&u~@>VNIxr&r}9zr)adXyEqOgk0eDSjFdN;GYgGehRQHL*9#2CebrpuT|7 z&N3m<`>R|=Rc;UFrq7TTi~M3KBtGwK7Y@4SCgJ#10|}%wEG+ElP;^he25{lQdBP%$#^7=wlPVE*N5mJ^86}TCK&YaC=gvpH8y~fU@T|-n5C`k}xAd(hw6fJn;mh%`ES4qAfFXUv7OS zJnP-NtI&m921Lx|%_DP5P%!$&b&&+a93X+EXNpTzZJ+@0ooV_59%2AjO{8z_xJ##j2;BTP-VLg5L2YwwCZ2Y$xHj# z0RY|z`!Uj1qci>cw=K~;S zJrwZ!EF2159-V9_`~EK5sxXIyB(E777Mb~e?>GDJVbR9HAMxvJ4h-o#`|nnCP;h!_ zR$r*1E%LMr2q-!kx?+bOxj)^*(tl2WAyNSzRQr!{5t zr^XG?U%#Q8v)bRbcxay|&Z)EYi_v0#lTo?rv$pR25uDcW42l!Imp3H+SO0jk)P=0i z15My-Xl&Tyx^SpgXmD=nNT`Z$Wj1Nrz~0_IT#JHY=Z-F1U&dsOsi9#UGEQy82CSVz z03&4WsHh6|cLI*7Yn+c3f&oD+p)+F=Gg7BY9!@g2eeU^8@*zJ(dVOv8^y!W2FwyivCwJaNixb>gfQ@Tzi(|2_4RO^W2OAV$IW_LV0a#bQ+)ylGRrhl6J1fJ|o zhc(7b{xYNReEw2E*33}!oav=%$Aeg6b=9n)eHrJqWP;gJ-al5Z_qhv?w>K&LU7mMS z#G$9Vs_wRRsqw#e zjH4OZD4ZhAPn1n(X4-^YSm$v6bW*C@O(CZ4B_B$Wa6gK%o3xsnn#_`_Dpl@r3`K+t z#W%zr0fTeYG}49~=vu(f$1Tn*3sbPbaVyUr0bRcg9#%3?NIF6(5MlN#m}-=>C7Z1E zE}Q9(GrYdh1q$gYy!|C9^h{i4iG=RnN9-)MhTh(#JMW-Q;y~XCeINRH`m>izF0>E4 zfmw#YbbRZkMm6++BI(2HH)MJW7l5R@Gu)qi^1+2m*w6F4{lJbiJv(J z&JmKBZ13m*+Rz5?)&1sw?!u&XJuqHKBF?jVB@sd4VkVu;+hVK8X>!g4A&oCdZ!eB} zu4%a>w1z00;(AXdU|z;SvPI=ySGOO(TCZC-iTCfNIw3__HIduCi?S5h3+6|I65*l> zazh*!A(RRbRNI{9Mor1325bjy;ItrI!lcz<@#Tf$YJ@DG7ov1k`DNV>@ocR0kK2rf zIGDIM*r_}e`+2IEb<9SXLFSLjJKvDiWElnA~M@`og=)=v~+rAUDPz=p?`!BS|e2H@OTDF`V zJiGPz&H&HctamfC{G6FEPWjr|=>*gmfWq)#?Ww|)b2ZBkEcEIS3;J zlKfICSj&QU#(l%3-Lg-ZJ3~9(;Hjpd;;L$LGK-{=w(H(5lAX{5g1rDV!F30RmtPP+ zjWX4QVhmCm0s;cO;pnac$*kpdTYRK95q=z49lOHDEsP z#aO}J>$11%#OI9YBGepT8bvn=w)%!9luYjq5>QFq6$OME*lDCSJ@951E^+>4IAp)F z-n>Cbcq2KBii?p3I$~UwDBXu2Z;{RRt~3L%1#i>mw8*r9cNRazW_|n&^Y)`6k4Ebs3QPoN58f83CEdHzNG__1m}CP_w%Z z{HS<8#7-VRejMTBcoq|5XIjkJm;aqGc5lB;g#X!`-VWbUMZ*1+85Q;!1 z(w`ve+7^I-zPOjE%?J6+$+_w1PMfPz5zRsxyD<;&)q%zAB(5EZ%sO5l+Ud;@7iRgW zl2xKsh}zVDX=sy7qqmp8>E|f`D4`f63Jwr~hZq?Z0U;;wBdP#W4H3;IRw z=YHvoAFLBtPP6_a1P!xIeMfyqLEA}|HXfn`IatYqNR?u?pq~n^^)=F{d%8AEQNNWsP%&^e+9d+b?DJe>DCx4WYJUa2S5NN=oy$V zj8rv5Ve+P)zCP{il>IlgK)Uf6H;l`vj~rD#Io~kJC+0F(H~ZYL{?l)}o|q4bXngg> z?31t`m6wgB)E6~;m7aX+e|1(ldT@?n&Xkstp_W0aYhT6_6ulf)`%1co%d!gb*X~9H z6H>EuB)oo)yK$_I@_Sm$RC+*bcVZTKf{D*PbH1b^UHdHG>7Miz9<4$#^5TP_uz}?T zr>4ZQ;1(Xt@@(En(W#fkGC>Ac_>MVTuiH7+yzkYI-%HAb=M-Z#zm;wU+b9klf_k2i z1obnTvBqO&Zi&Y<%2a&Ei=9>?HGLLaRK(t~7_=C?gd7&tJuYF<-&L+%1?`NJS2>S2 zdh#m9pHNi3L-dQ?@iLaWmN`)M=3tY~XRR$Hi^bu${OU2b2TI?wz*2soX>IhrL-j_FzN>2?XyQ3jD#}tapHR@{?-VG-1CXgfwuBafMSpqmF^V9nTA`!lqf%`< zoX?w$??AbQZ_za^*%}9bl6v7-D>N2;5VdQY|BIS7&7 zm}vCFqMzXb0~!YOp>5{;-lBP8J5a(YMTj>%Ky+>>kf;T99twC=kL>u)wJm9NvJf% z<~DdTq5@f8ZeNKb^5Xr?^707d_7ZW#*zz8GGPHGV7*UJdkWho0n;Q`SKZ+cyVYly! zd6j?Az8lm*1zBxR;2p5a$crfBZb@WAngn)@U41@h4|&>Y9UsSPD04ml_lM6rXCEqp zar=%V_C2f(*(4(g-`>-Re17#GwUT9l1dOU`o&Hkc5b?T!+7Nop8T9>gt`4A7R(BYU1a+XckV@`WBJg(W*NVK zw0ZNWKp7$G?T0!xSj37_Sln6@h+r(-8h8IAg{XhMcbk|6o+WVq-*MlAcWmp9hCWPm zY$G>J?;9>Ug;KEFdK?tW`&Nl}c$N90t-X&)7bnK=z5L+j@3&+YS;HD6nTdQUn03+8wJtTFbM~3^DA)w*1E#ty|vBIU=p6y-;b(c`Op16eFG_qxCP(+vqpzdFMa=62casz zf-OseLdhyPA$ofA%VZO>zxmJacejKFR?WiDMR~x|zFSmw*7xp~1pi$n)(NL?=^K6n zS$Oh;XCmZk!IcYUPCHRHFqUTfBT@f+=HMLxY1{6I$YLXy`uz7HBKEM>rXV+&E-!_m z2LEO2ZQer3R`LH2ci82kY>nKZ%&hOGn%O4kK`F3KNR}#Z6#W>-lRst}mW>dH{`~;o zE8xHZ=Di)z-9-T#{@~6ujm*6PB-nfQlP(X-!HGo%eZYA{wQNhqOHvmu2chhBcXz+; zAx-zV>=pJbA-_-yJjE8y=H~HK*#BBz1j-djjw>C(j`nt1oCwLth%ga7>11@Tp z!O`?IROpWjQbi83W`G-tp=4VK*3y%c&qG7PJ8-DzUhR^|!OdFz=O}LpRLdD7W8--! 
z{J%~~QH-%2Drbf>ASW_sy19H7qwrV=o$c5Del1{pK~c(rCx`s%c5ZI27UT)= zHgGNAL$@9yLIe<-5ta;K`VV<&gx{^YVBbBm@4p>?$=24^WbnyaXz50i)$|JgbIe3Y z5TwFaP!PhgCb}d7aS64oUoG4O3N`o+--p!@8EHUWJ36xp6x@13UOt19 z0K0excu-aWXgNB@f9?AXCkRiFAzHwK3T^S(W}THV`BY3yY~Du_>`BG;aVzkEF*0WJ zs#stNoCx_PLbLqIn~#rA&)Aqxt$<%0+ANz2eqBW% z_>ffQQ(KM=SfXF@lk0P-XF&+fMF;@-CBtBjt1FAsgfZM%zq-liQr0wTQT1dZH<)Ij z?enkys?`A-o0FC0dY23XW{~0S5T%(8Cjgk0uEQDTZG?t~8eW~ai7P;F_KhZuC;xj)d)uN1@`jNhu3xA1f{QhWjBywn5+^ zy#VC92y75xxxW}!FGLOF2?=p#P$QS@lm?QMo5s}@&_9`5`7N(}N(!01i%ePJZz4i| z(H+o}O`@8}lL8=OdCIf8C#w_uJxWLhARDT%`8+Gnpmn{hObYXT7h43Ld|vwy5t3|x zc%xE7YAS&2^})~klT+9x7fMxGy_Irua{A;pAQv?Xy*0Pfp$9u>VZj6GfhWe%i~}SE zCVlM5J3N|{^OX>q++TcgC+Y>;KN6HC?^6Ka{BgiMM>xhKIXh%F4&XMV2;3waH(-5P zW3`wD9@!>qWaJPX!aWbIFaGE=A||rcL%y@ek`Ba9m%5cw58QZ?$W}cQkceGNdO7uc zF0eUuWMd@9Ipcm?@t5VMb&UUThE2S^^Sc#!=KxTN^M97!oE+hWQ^Hx7GnE$w+J$Y) z0J8HzK`2Y85|kyvtiG96;#h3;FKboH$RaQQe_$746fp)U9%`Y#iv`cbz)byU|^~nWT4pDawWrW6K`{quL49$C^YHI4CJOR|7LR3iA0o9OZe-S+oMwPtn7_d9F zx3^0g5@4N<9DI0%)ml^vmQmbi%kUPQgPC6l^l`>ZCTkuv5P(uHLt+|l4HJf@Bx9vv z`knuuS~jUR9J(Bjoq}9Mnn-3xV{G2l3UxJrw#Prx^%`Q7Q+{%(6?e^sEYj z@!WiCU~&$Szj*e#JVfR;Bl9J;z?#AJ0|jVlTeG_hKlm`_upjJOT#u+1^pIR=MS7|r zvxMw6m<0o0-*Tuf01o)bUW!>$j#+*N>S4sm65f`t7vj)(SypBQ;sbNcZ&o2&GD@>8 zH*h=uEr{4!T4Wtc{*wOwD{6uN7W^AZ0OIlA@zK`mBX9i2A^+cxMwa7$7i4o0{)r|3 zrxxVL_922ZdK`wsAQ9588q|*&uUXRz_`h)`yAz}gp{uqT(lIS({sx7heE&wa|5JDK zzefR9^8a_;@cTG`vp1DIB9aVmse5pfe)OBF7=Er%Q2sw;75=Z3*aPwHSTXQNZ2`Yw z3|C&UXQYzoaXtQH(WL2V55pcDRFR=7YHJvCqbCN2K>(L@{Xk=Qmx)UPFi<}&gX{4rHDZ>5uo9Q8!L;mlU#KiscP@0i7J9j)Zv#6-u zP}|YbG1j6NhZYkThookP3~PA3ZI4Is_*7m?D{k_|!`>M$lgY_R#I*3MgJUqe_{#YHJuY~r4 zf5#QPB*A;Rzm7E(jJ1uO_fb_!e9(>r|)y3OX zAWXU5^%iZMn3yPO%oY^(QBJ(~8!jg-?4Y}c$7$AZFt9pz6xd0&LmD!y(=#ptap)f2 z6~*PVM=*+8-IFLv(P~oOEeU=#VNH<%o(TuhU#mAZCxIj$md4n4S(%msnd%XUS8V%G zWd-PRk{sb)9862_sniObT3nekB}V)mR6;4b1r&wnSn{h1$zh68HD6xyH%dY=bU;@m zY+0fXOV@xYQocj6L*m;*@Qt`;Awm3}#{CbNvH66OxD3a5jFDXHm5PieZZqds++NPc zab!1N)^PiVJ|WQ)r3VzPsvGO1=*B0%OmoeYze59xH-qGC7hkbKw!>;pj9|J(XnQm# ze9iKTp$u%&v*dM#7A@1;zw&W9qZm6~Rn>`U@3B_9fUAG&cTs-9-gTlw(to3}H6d~o zv!=ntW`BEr<B{f4mEHJz@qUK=WoE`5)g5>A05 zB!OvdbKeY4YU8HR;X@$b>FMbqgfw|G$SDi6gsOmqlz)Bp&3>8ouxIfZ7^4BZ3~f*$ zCv-V!;m_FRC!OPZLJZB`PLH5Kkb(6R4zC}WkRCh#v((HueG%HYpo2xC4T2wl*}?7M zt)cU0NG0U4+2R=5=k}7BD4>k7V%4NG@WCMWBkmA@0!3#=MVOLiBewOUeWxbm%}O?y zf=*xw84BzzBYYA>5=lv5tokH~i4M=8l(5uN(8qo0j;9arbt?@&Kf4l}os%OlbS-kd zOMAmq{W6qA=KRuqPjNc)t^N6q)AsP3(3ZF33|nig&(`5_!;`Y`6&nz!;zt^UsNuXd zg72T4{QeE^;k~ax@T0zVcmHB8)@-}VbL6uZKb~9#gu+%@0n0W}cLnhGP>GSgen*N> zMY*+=)m4lTgPT!@sJOT^fxEiO4Kn*fb7sI%I?IT6R!dOn z1Dx;34E!Av6BCjrIpfEF%oUg5Sc!z(H_y1V)g*t$p&`5J*`qB>`&eQ{!})@7F-e`{ z@FjC+pTn_^iSR;*3t<>?b0bgJ!otGM6{GqAmyt@KT=V!)ti%$xFgMtFjOu}j_Wjg7 ztz(eRxb!lSU-zD|Lnr-XLU7MqU**D-T=_nTz6R_y?fZ)zL^GjMQ>!NzqQ8JqA(Q zel|W|8j0E#3r~$7^al@#6`SHX9mi-+AlfBp`g#VO}Slar^*d(1eF$i_}}k8rZ8 z`wL>a(f$3mjB(}eo^Ecb_raHk9EaUbe5Xt}Qv7wgE2;xtL8Y^YjAiqbFM z59)$bKT4}(d!36Wygrej?#caxC~psH9Co26dgrC<2~WKU(Zva4a})DELvu@)adeNB z$_}h9b5y1euCcpJG?2RqM^AJ2hYxl}?QM=$UFB9`8-aPIjCltyk{$|Ajyz_f%3!OF z+C}R3ONzcXM7EH@m_M$w;w5mi70R+|$I?m1g{{SzVSDmPXzWB7XmEq0fJ}#cN%qu% z*1mlCwQCa#j8o%}ab7Y>vvFXI@KwJSR}$xaWLUZPYHw_CwQVrw1A_wTlG&*%kCMJQ zm;178=MGs2Yr&)K97?D zFP^g2)6*jo_DeffPWYU+`Bh>S1pO~2(~3oY`&5GXyIt3M5vWF?`&cI@FGrJDE8He} zCHL#akh`Hu9K0#RZRW;5J!4iS$45tR4XgR#%l6ks#cIbee~m|yO;#h7Z~6JSQV%_> zHPvFat({)vJ3{4dXbyd!pY3H@Z=O7BKIE#PI~5GJ9)oZf;lHV|e~jzqvo#&OJBuLk`!9 z=^p8OB%!SPJl!$7K`fTBPxMPDcD^$EqD{W~=K@i5Pxs0no9XIcONL{G@pqgTUMoEj z?>@J~>fC*)kVYcLW&uZIl|`Yy_p(qsx^+SR@VkQ#?}g5n<)UAVNeb=x$S1I%uO~*c z!ETqFlUwEZdV4#L_r+#rp0Wpp|M02;&KUVU^L_ 
[Base85-encoded GIT binary patch data for openshift-5g-core-cluster-architecture-networking.png omitted]

diff --git a/modules/telco-core-about-the-telco-core-cluster-use-model.adoc b/modules/telco-core-about-the-telco-core-cluster-use-model.adoc
new file mode 100644
index 000000000000..aeaa5b37b19e
--- /dev/null
+++ b/modules/telco-core-about-the-telco-core-cluster-use-model.adoc
@@ -0,0 +1,23 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-core-about-the-telco-core-cluster-use-model_{context}"]
+= About the telco core cluster use model
+
+The telco core cluster use model is designed for clusters that run on commodity hardware.
+Telco core clusters support large scale telco applications including control plane functions such as signaling, aggregation, and session border controller (SBC); and centralized data plane functions such as 5G user plane functions (UPF).
+Telco core cluster functions require scalability, complex networking support, and resilient software-defined storage, and must support performance requirements that are less stringent and constrained than far-edge RAN deployments.
+
+.Telco core RDS cluster service-based architecture and networking topology
+image::openshift-5g-core-cluster-architecture-networking.png[5G core cluster showing a service-based architecture with overlaid networking topology]
+
+Networking requirements for telco core functions vary widely across a range of networking features and performance points.
+IPv6 is a requirement and dual-stack is common.
+Some functions need maximum throughput and transaction rate and require support for user-plane DPDK networking.
+Other functions use more typical cloud-native patterns and can rely on OVN-Kubernetes, kernel networking, and load balancing.
+
+Telco core clusters are configured as standard with three control plane nodes and two or more worker nodes configured with the stock (non-RT) kernel.
+In support of workloads with varying networking and performance requirements, you can segment worker nodes by using `MachineConfigPool` custom resources (CRs), for example, for non-user data plane or high-throughput use cases.
+In support of required telco operational features, core clusters have a standard set of Day 2 OLM-managed Operators installed.
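+
+For example, worker nodes for a high-throughput use case can be segmented with a `MachineConfigPool` CR similar to the following sketch. The pool name and node role label are illustrative:
+
+[source,yaml]
+----
+apiVersion: machineconfiguration.openshift.io/v1
+kind: MachineConfigPool
+metadata:
+  name: worker-high-throughput
+spec:
+  machineConfigSelector:
+    matchExpressions:
+    - key: machineconfiguration.openshift.io/role
+      operator: In
+      values: [worker, worker-high-throughput] <1>
+  nodeSelector:
+    matchLabels:
+      node-role.kubernetes.io/worker-high-throughput: "" <2>
+----
+<1> Nodes in the pool receive the base worker configuration plus any configuration targeted at the custom role.
+<2> Only nodes labeled with the custom role are added to the pool.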
diff --git a/modules/telco-core-additional-storage-solutions.adoc b/modules/telco-core-additional-storage-solutions.adoc
new file mode 100644
index 000000000000..f1606c59f515
--- /dev/null
+++ b/modules/telco-core-additional-storage-solutions.adoc
@@ -0,0 +1,11 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-core-additional-storage-solutions_{context}"]
+= Additional storage solutions
+
+You can use other storage solutions to provide persistent storage for telco core clusters.
+The configuration and integration of these solutions is outside the scope of the reference design specification (RDS).
+
+Integration of the storage solution into the telco core cluster must include proper sizing and performance analysis to ensure that the storage meets overall performance and resource usage requirements.
diff --git a/modules/telco-core-agent-based-installer.adoc b/modules/telco-core-agent-based-installer.adoc
new file mode 100644
index 000000000000..3ddf02fe6e33
--- /dev/null
+++ b/modules/telco-core-agent-based-installer.adoc
@@ -0,0 +1,33 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-core-agent-based-installer_{context}"]
+= Agent-based Installer
+
+New in this release::
+* No reference design updates in this release
+
+Description::
++
+--
+Telco core clusters can be installed by using the Agent-based Installer (ABI).
+This method allows you to install OpenShift on bare-metal servers without requiring additional servers or VMs for managing the installation.
+The Agent-based Installer can be run on any system (for example, from a laptop) to generate an ISO installation image.
+The ISO is used as the installation media for the cluster supervisor nodes.
+Installation progress can be monitored by using the ABI tool from any system that has network connectivity to the supervisor node API interfaces.
+
+ABI supports the following:
+
+* Installation from declarative CRs
+* Installation in disconnected environments
+* Installation with no additional supporting install or bastion servers required to complete the installation
+--
+
+Limits and requirements::
+* Disconnected installation requires a registry that is reachable from the installed host, with all required content mirrored in that registry.
+
+Engineering considerations::
+* Networking configuration should be applied as NMState configuration during installation.
+Day 2 networking configuration using the NMState Operator is not supported.
diff --git a/modules/telco-core-application-workloads.adoc b/modules/telco-core-application-workloads.adoc
new file mode 100644
index 000000000000..143e861f1f48
--- /dev/null
+++ b/modules/telco-core-application-workloads.adoc
@@ -0,0 +1,37 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-core-application-workloads_{context}"]
+= Application workloads
+
+Application workloads running on telco core clusters can include a mix of high-performance cloud-native network functions (CNFs) and traditional best-effort or burstable pod workloads.
+
+Guaranteed QoS scheduling is available to pods that require exclusive or dedicated use of CPUs due to performance or security requirements.
+Typically, pods that run high-performance or latency-sensitive CNFs by using user plane networking (for example, DPDK) require exclusive use of dedicated whole CPUs achieved through node tuning and guaranteed QoS scheduling.
+When creating pod configurations that require exclusive CPUs, be aware of the potential implications of hyper-threaded systems.
+Pods should request multiples of 2 CPUs when the entire core (2 hyper-threads) must be allocated to the pod.
+
+Pods running network functions that do not require high throughput or low latency networking should be scheduled as best-effort or burstable QoS pods and do not require dedicated or isolated CPU cores.
+
+Engineering considerations::
++
+--
+Use the following information to plan telco core workloads and cluster resources:
+
+* CNF applications should conform to the latest version of https://redhat-best-practices-for-k8s.github.io/guide/[Red Hat Best Practices for Kubernetes].
+* Use a mix of best-effort and burstable QoS pods as required by your applications.
+** Use guaranteed QoS pods with proper configuration of reserved or isolated CPUs in the `PerformanceProfile` CR that configures the node.
+** Guaranteed QoS pods must include annotations for fully isolating CPUs, as shown in the example after this list.
+** Best-effort and burstable pods are not guaranteed exclusive CPU use.
+Workloads can be preempted by other workloads, operating system daemons, or kernel tasks.
+* Use exec probes sparingly and only when no other suitable option is available.
+** Do not use exec probes if a CNF uses CPU pinning.
+Use other probe implementations, for example, `httpGet` or `tcpSocket`.
+** When you need to use exec probes, limit the exec probe frequency and quantity.
+The maximum number of exec probes must be kept below 10, and the frequency must not be set to less than 10 seconds.
+** You can use startup probes, because they do not use significant resources at steady-state operation.
+This limitation on exec probes applies primarily to liveness and readiness probes.
+Exec probes cause much higher CPU usage on management cores compared to other probe types because they require process forking.
+--
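+
+For example, a guaranteed QoS pod that requests a full core (2 hyper-threads) and fully isolates its CPUs might look like the following sketch. The pod name, image, and runtime class name are illustrative; the runtime class is created from the `PerformanceProfile` CR that configures the node:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dpdk-workload
+  annotations:
+    cpu-load-balancing.crio.io: "disable" <1>
+    cpu-quota.crio.io: "disable"
+    irq-load-balancing.crio.io: "disable"
+spec:
+  runtimeClassName: performance-telco-core-profile
+  containers:
+  - name: dpdk-workload
+    image: registry.example.com/dpdk-workload:latest
+    resources:
+      requests:
+        cpu: "2"
+        memory: 1Gi
+      limits:
+        cpu: "2" <2>
+        memory: 1Gi
+----
+<1> These annotations fully isolate the allocated CPUs from CPU load balancing, CFS quota accounting, and device interrupts.
+<2> Identical requests and limits give the pod guaranteed QoS; the CPU count is a multiple of 2 to align with hyper-threaded cores.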
diff --git a/modules/telco-core-cluster-common-use-model-engineering-considerations.adoc b/modules/telco-core-cluster-common-use-model-engineering-considerations.adoc
new file mode 100644
index 000000000000..28a4fe93d8c4
--- /dev/null
+++ b/modules/telco-core-cluster-common-use-model-engineering-considerations.adoc
@@ -0,0 +1,48 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-core-cluster-common-use-model-engineering-considerations_{context}"]
+= Telco core cluster common use model engineering considerations
+
+* Cluster workloads are detailed in "Application workloads".
+* Worker nodes should run on either of the following CPUs:
+** Intel 3rd Generation Xeon (IceLake) CPUs or better when supported by {product-title}, or CPUs with the silicon security bug (Spectre and similar) mitigations turned off.
+Skylake and older CPUs can experience 40% transaction performance drops when Spectre and similar mitigations are enabled.
+** AMD EPYC Zen 4 CPUs (Genoa, Bergamo, or newer) or better when supported by {product-title}.
++
+[NOTE]
+====
+Currently, per-pod power management is not available for AMD CPUs.
+====
+** IRQ balancing is enabled on worker nodes.
+The `PerformanceProfile` CR sets `globallyDisableIrqLoadBalancing` to `false`.
+Guaranteed QoS pods are annotated to ensure isolation as described in "CPU partitioning and performance tuning".
+
+* All cluster nodes should meet the following requirements:
+** Hyper-Threading is enabled.
+** The CPU architecture is x86_64.
+** The stock (non-realtime) kernel is enabled.
+** The node is not configured for workload partitioning.
+
+* The balance between power management and maximum performance varies between machine config pools in the cluster.
+The following configurations should be consistent for all nodes in a machine config pool group.
+** Cluster scaling.
+See "Scalability" for more information.
+** Clusters should be able to scale to at least 120 nodes.
+
+* CPU partitioning is configured using a `PerformanceProfile` CR and is applied to nodes on a per `MachineConfigPool` basis.
+See "CPU partitioning and performance tuning" for additional considerations.
+* CPU requirements for {product-title} depend on the configured feature set and application workload characteristics.
+For a cluster configured according to the reference configuration running a simulated workload of 3000 pods, as created by the kube-burner node-density test, the following CPU requirements are validated:
+** The minimum number of reserved CPUs for control plane and worker nodes is 2 CPUs (4 hyper-threads) per NUMA node.
+** The NICs used for non-DPDK network traffic should be configured to use at least 16 RX/TX queues.
+** Nodes with large numbers of pods or other resources might require additional reserved CPUs.
+The remaining CPUs are available for user workloads.
++
+[NOTE]
+====
+Variations in {product-title} configuration, workload size, and workload characteristics require additional analysis to determine the effect on the number of required CPUs for the OpenShift platform.
+====
diff --git a/modules/telco-core-cluster-network-operator.adoc b/modules/telco-core-cluster-network-operator.adoc
index 4e3f1dd9aa33..e03ebc26a5d4 100644
--- a/modules/telco-core-cluster-network-operator.adoc
+++ b/modules/telco-core-cluster-network-operator.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="telco-core-cluster-network-operator_{context}"]
@@ -10,27 +10,35 @@ New in this release::
 * No reference design updates in this release
 
 Description::
-The Cluster Network Operator (CNO) deploys and manages the cluster network components including the default OVN-Kubernetes network plugin during {product-title} cluster installation. It allows configuring primary interface MTU settings, OVN gateway modes to use node routing tables for pod egress, and additional secondary networks such as MACVLAN.
++
+--
+The Cluster Network Operator (CNO) deploys and manages the cluster network components including the default OVN-Kubernetes network plugin during cluster installation.
+The CNO allows for configuring primary interface MTU settings, OVN gateway modes to use node routing tables for pod egress, and additional secondary networks such as MACVLAN.
+
+In support of network traffic separation, multiple network interfaces are configured through the CNO.
+Traffic steering to these interfaces is configured through static routes applied by using the NMState Operator.
+To ensure that pod traffic is properly routed, OVN-K is configured with the `routingViaHost` option enabled.
+This setting uses the kernel routing table and the applied static routes rather than OVN for pod egress traffic.
+
+The Whereabouts CNI plugin is used to provide dynamic IPv4 and IPv6 addressing for additional pod network interfaces without the use of a DHCP server.
+--
 
 Limits and requirements::
 * OVN-Kubernetes is required for IPv6 support.
-
 * Large MTU cluster support requires connected network equipment to be set to the same or larger value.
-
+MTU size up to 8900 is supported.
+//https://issues.redhat.com/browse/CNF-10593
 * MACVLAN and IPVLAN cannot co-locate on the same main interface due to their reliance on the same underlying kernel mechanism, specifically the `rx_handler`.
 This handler allows a third-party module to process incoming packets before the host processes them, and only one such handler can be registered per network interface.
 Since both MACVLAN and IPVLAN need to register their own `rx_handler` to function, they conflict and cannot coexist on the same interface.
-See link:https://elixir.bootlin.com/linux/v6.10.2/source/drivers/net/ipvlan/ipvlan_main.c#L82[ipvlan/ipvlan_main.c#L82] and link:https://elixir.bootlin.com/linux/v6.10.2/source/drivers/net/macvlan.c#L1260[net/macvlan.c#L1260] for details.
-
-* Alternative NIC configurations include splitting the shared NIC into multiple NICs or using a single dual-port NIC.
-+
-[IMPORTANT]
-====
-Splitting the shared NIC into multiple NICs or using a single dual-port NIC has not been validated with the telco core reference design.
-====
-
-* Single-stack IP cluster not validated.
-
+Review the source code for more details:
+** https://elixir.bootlin.com/linux/v6.10.2/source/drivers/net/ipvlan/ipvlan_main.c#L82[linux/v6.10.2/source/drivers/net/ipvlan/ipvlan_main.c#L82]
+** https://elixir.bootlin.com/linux/v6.10.2/source/drivers/net/macvlan.c#L1260[linux/v6.10.2/source/drivers/net/macvlan.c#L1260]
+* Alternative NIC configurations include splitting the shared NIC into multiple NICs or using a single dual-port NIC, though they have not been tested and validated.
+* Clusters with single-stack IP configuration are not validated.
+* The `reachabilityTotalTimeoutSeconds` parameter in the `Network` CR configures the `EgressIP` node reachability check total timeout in seconds.
+The recommended value is `1` second.
 
 Engineering considerations::
-* Pod egress traffic is handled by kernel routing table with the `routingViaHost` option. Appropriate static routes must be configured in the host.
+* Pod egress traffic is handled by kernel routing table using the `routingViaHost` option.
+Appropriate static routes must be configured in the host.
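+
+For example, the `routingViaHost` option is enabled in the `gatewayConfig` section of the cluster `Network` operator CR. The following fragment is a sketch that shows only the relevant fields:
+
+[source,yaml]
+----
+apiVersion: operator.openshift.io/v1
+kind: Network
+metadata:
+  name: cluster
+spec:
+  defaultNetwork:
+    type: OVNKubernetes
+    ovnKubernetesConfig:
+      gatewayConfig:
+        routingViaHost: true <1>
+----
+<1> Pod egress traffic is routed through the host kernel routing table instead of OVN.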
-
+Review the source code for more details:
+** https://elixir.bootlin.com/linux/v6.10.2/source/drivers/net/ipvlan/ipvlan_main.c#L82[linux/v6.10.2/source/drivers/net/ipvlan/ipvlan_main.c#L82]
+** https://elixir.bootlin.com/linux/v6.10.2/source/drivers/net/macvlan.c#L1260[linux/v6.10.2/source/drivers/net/macvlan.c#L1260]
+* Alternative NIC configurations include splitting the shared NIC into multiple NICs or using a single dual-port NIC, though they have not been tested and validated.
+* Clusters with single-stack IP configuration are not validated.
+* The `reachabilityTotalTimeoutSeconds` parameter in the `Network` CR configures the `EgressIP` node reachability check total timeout in seconds.
+The recommended value is `1` second.
 
 Engineering considerations::
-* Pod egress traffic is handled by kernel routing table with the `routingViaHost` option. Appropriate static routes must be configured in the host.
+* Pod egress traffic is handled by the kernel routing table when the `routingViaHost` option is enabled.
+Appropriate static routes must be configured in the host.
diff --git a/modules/telco-core-common-baseline-model.adoc b/modules/telco-core-common-baseline-model.adoc
new file mode 100644
index 000000000000..d5cc07d39a07
--- /dev/null
+++ b/modules/telco-core-common-baseline-model.adoc
@@ -0,0 +1,47 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-core-common-baseline-model_{context}"]
+= Telco core common baseline model
+
+The following configurations and use models are applicable to all telco core use cases.
+The telco core use cases build on this common baseline of features.
+
+Cluster topology::
+Telco core clusters conform to the following requirements:
+
+* High availability control plane (three or more control plane nodes)
+* Non-schedulable control plane nodes
+* Multiple machine config pools
+
+Storage::
+Telco core use cases require persistent storage as provided by {rh-storage-first}.
+
+Networking::
+Telco core cluster networking conforms to the following requirements:
+
+* Dual stack IPv4/IPv6 (IPv4 primary).
+* Fully disconnected – clusters do not have access to public networking at any point in their lifecycle.
+* Supports multiple networks.
+Segmented networking provides isolation between operations, administration and maintenance (OAM), signaling, and storage traffic.
+* Cluster network type is OVN-Kubernetes as required for IPv6 support.
+* Telco core clusters have multiple layers of networking supported by the underlying RHCOS, the SR-IOV Network Operator, the load balancer, and other components.
+These layers include the following:
+** Cluster networking layer.
+The cluster network configuration is defined and applied through the installation configuration.
+Update the configuration during Day 2 operations with the NMState Operator.
+Use the initial configuration to establish the following:
+*** Host interface configuration.
+*** Active/active bonding (LACP).
+** Secondary/additional network layer.
+Configure the {product-title} CNI through network `additionalNetwork` or `NetworkAttachmentDefinition` CRs.
+Use the initial configuration to configure MACVLAN virtual network interfaces.
+** Application workload layer.
+User plane networking runs in cloud-native network functions (CNFs).
+
+Service Mesh::
+Telco CNFs can use Service Mesh.
+All telco core clusters require a Service Mesh implementation.
+The choice of implementation and configuration is outside the scope of this specification.
diff --git a/modules/telco-core-cpu-partitioning-and-performance-tuning.adoc b/modules/telco-core-cpu-partitioning-and-performance-tuning.adoc
new file mode 100644
index 000000000000..7ec4980a357a
--- /dev/null
+++ b/modules/telco-core-cpu-partitioning-and-performance-tuning.adoc
@@ -0,0 +1,57 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-core-cpu-partitioning-and-performance-tuning_{context}"]
+= CPU partitioning and performance tuning
+
+New in this release::
+* No reference design updates in this release
+
+Description::
+CPU partitioning improves performance and reduces latency by separating sensitive workloads from general-purpose tasks, interrupts, and driver work queues.
+The CPUs allocated to those auxiliary processes are referred to as _reserved_ in the following sections.
+In a system with Hyper-Threading enabled, a CPU is one hyper-thread.
+
+Limits and requirements::
+* The operating system needs a certain amount of CPU to perform all the support tasks, including kernel networking.
+** A system with just user plane networking applications (DPDK) needs at least one core (2 hyper-threads when enabled) reserved for the operating system and the infrastructure components.
+* In a system with Hyper-Threading enabled, core sibling threads must always be in the same pool of CPUs.
+* The set of reserved and isolated cores must include all CPU cores.
+* Core 0 of each NUMA node must be included in the reserved CPU set.
+* Low-latency workloads require special configuration to avoid being affected by interrupts, the kernel scheduler, or other parts of the platform.
+For more information, see "Creating a performance profile".
+
+Engineering considerations::
+* The minimum reserved capacity (`systemReserved`) required can be found by following the guidance in the link:https://access.redhat.com/solutions/5843241[Which amount of CPU and memory are recommended to reserve for the system in OpenShift 4 nodes?] Knowledgebase article.
+* The actual required reserved CPU capacity depends on the cluster configuration and workload attributes.
+* The reserved CPU value must be rounded up to a full core (2 hyper-threads) alignment.
+* Changes to CPU partitioning cause the nodes contained in the relevant machine config pool to be drained and rebooted.
+* The reserved CPUs reduce the pod density, because the reserved CPUs are removed from the allocatable capacity of the {product-title} node.
+* The real-time workload hint should be enabled for real-time capable workloads.
+** Applying the real-time `workloadHints` setting results in the `nohz_full` kernel command line parameter being applied to improve the performance of high-performance applications.
+When you apply the `workloadHints` setting, any isolated or burstable pods that do not have the `cpu-quota.crio.io: "disable"` annotation and a proper `runtimeClassName` value are subject to CRI-O rate limiting.
+When you set the `workloadHints` parameter, be aware of the tradeoff between increased performance and the potential impact of CRI-O rate limiting.
+Ensure that required pods are correctly annotated.
+* Hardware without IRQ affinity support affects isolated CPUs.
+All server hardware must support IRQ affinity to ensure that pods with guaranteed CPU QoS can fully use allocated CPUs.
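++
+For illustration only, the following `PerformanceProfile` sketch combines the reserved and isolated CPU split, IRQ load balancing, and the real-time workload hint described in this section; the profile name, CPU ranges, and node selector are placeholder values, not validated reference values:
++
+[source,yaml]
+----
+apiVersion: performance.openshift.io/v2
+kind: PerformanceProfile
+metadata:
+  name: example-worker-profile # placeholder name
+spec:
+  cpu:
+    reserved: "0-3" # placeholder range reserved for OS and infrastructure tasks
+    isolated: "4-63" # placeholder range isolated for latency-sensitive workloads
+  globallyDisableIrqLoadBalancing: false # keep IRQ balancing enabled on worker nodes
+  workloadHints:
+    realTime: true # applies the nohz_full kernel command line parameter
+  nodeSelector:
+    node-role.kubernetes.io/worker: "" # placeholder node selector
+----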
+* OVS dynamically manages its `cpuset` entry to adapt to network traffic needs.
+You do not need to reserve an additional CPU for handling high network throughput on the primary CNI.
+* If workloads running on the cluster use kernel-level networking, the RX/TX queue count for the participating NICs should be set to 16 or 32 queues if the hardware permits it.
+Be aware of the default queue count.
+With no configuration, the default queue count is one RX/TX queue per online CPU, which can result in too many interrupts being allocated.
++
+[NOTE]
+====
+Some drivers do not deallocate the interrupts even after reducing the queue count.
+====
+
+* If workloads running on the cluster require cgroup v1, you can configure nodes to use cgroup v1 as part of the initial cluster deployment.
+See "Enabling Linux control group version 1 (cgroup v1)" and link:https://www.redhat.com/en/blog/rhel-9-changes-context-red-hat-openshift-workloads[Red Hat Enterprise Linux 9 changes in the context of Red Hat OpenShift workloads].
++
+[NOTE]
+====
+Support for cgroup v1 is planned for removal in {product-title} 4.19.
+Clusters running cgroup v1 must transition to cgroup v2.
+====
diff --git a/modules/telco-core-crs-cluster-infrastructure.adoc b/modules/telco-core-crs-cluster-infrastructure.adoc
new file mode 100644
index 000000000000..fa46cd75cc73
--- /dev/null
+++ b/modules/telco-core-crs-cluster-infrastructure.adoc
@@ -0,0 +1,25 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="cluster-infrastructure-crs_{context}"]
+= Cluster infrastructure reference CRs
+
+.Cluster infrastructure CRs
+[cols="4*", options="header", format=csv]
+|====
+Component,Reference CR,Description,Optional
+Cluster logging,`ClusterLogForwarder.yaml`,Configures a log forwarding instance with the specified service account and verifies that the configuration is valid.,Yes
+Cluster logging,`ClusterLogNS.yaml`,Configures the cluster logging namespace.,Yes
+Cluster logging,`ClusterLogOperGroup.yaml`,"Creates the Operator group in the openshift-logging namespace, allowing the Cluster Logging Operator to watch and manage resources.",Yes
+Cluster logging,`ClusterLogServiceAccount.yaml`,Configures the cluster logging service account.,Yes
+Cluster logging,`ClusterLogServiceAccountAuditBinding.yaml`,Grants the collect-audit-logs cluster role to the logs collector service account.,Yes
+Cluster logging,`ClusterLogServiceAccountInfrastructureBinding.yaml`,Allows the collector service account to collect logs from infrastructure resources.,Yes
+Cluster logging,`ClusterLogSubscription.yaml`,Creates a subscription resource for the Cluster Logging Operator with manual approval for install plans.,Yes
+Disconnected configuration,`catalog-source.yaml`,Defines a disconnected Red Hat Operators catalog.,No
+Disconnected configuration,`icsp.yaml`,Defines a list of mirrored repository digests for the disconnected registry.,No
+Disconnected configuration,`operator-hub.yaml`,Defines an OperatorHub configuration which disables all default sources.,No
+Monitoring and observability,`monitoring-config-cm.yaml`,Configures storage and retention for Prometheus and Alertmanager.,Yes
+Power management,`PerformanceProfile.yaml`,"Defines a performance profile resource, specifying CPU isolation, hugepages configuration, and workload hints for performance optimization on selected nodes.",No
+|====
diff --git a/modules/telco-core-crs-networking.adoc b/modules/telco-core-crs-networking.adoc
index 09f9e6535a57..c6bc9e3e5d4e 100644
--- a/modules/telco-core-crs-networking.adoc
+++ b/modules/telco-core-crs-networking.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-crs.adoc
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="networking-crs_{context}"]
@@ -9,27 +9,27 @@
 .Networking CRs
 [cols="4*", options="header", format=csv]
 |====
-Component,Reference CR,Optional,New in this release
-Baseline,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-network-yaml[Network.yaml],Yes,No
-Baseline,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-networkattachmentdefinition-yaml[networkAttachmentDefinition.yaml],Yes,No
-Load balancer,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-addr-pool-yaml[addr-pool.yaml],No,No
-Load balancer,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-bfd-profile-yaml[bfd-profile.yaml],No,No
-Load balancer,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-bgp-advr-yaml[bgp-advr.yaml],No,No
-Load balancer,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-bgp-peer-yaml[bgp-peer.yaml],No,No
-Load balancer,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-community-yaml[community.yaml],No,No
-Load balancer,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-metallb-yaml[metallb.yaml],No,No
-Load balancer,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-metallbns-yaml[metallbNS.yaml],No,No
-Load balancer,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-metallbopergroup-yaml[metallbOperGroup.yaml],No,No
-Load balancer,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-metallbsubscription-yaml[metallbSubscription.yaml],No,No
-Multus - Tap CNI for rootless DPDK pods,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-mc_rootless_pods_selinux-yaml[mc_rootless_pods_selinux.yaml],No,No
-NMState Operator,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-nmstate-yaml[NMState.yaml],No,No
-NMState Operator,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-nmstatens-yaml[NMStateNS.yaml],No,No
-NMState Operator,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-nmstateopergroup-yaml[NMStateOperGroup.yaml],No,No
-NMState Operator,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-nmstatesubscription-yaml[NMStateSubscription.yaml],No,No
-SR-IOV Network Operator,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-sriovnetwork-yaml[sriovNetwork.yaml],No,No
-SR-IOV Network Operator,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-sriovnetworknodepolicy-yaml[sriovNetworkNodePolicy.yaml],No,No
-SR-IOV Network Operator,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-sriovoperatorconfig-yaml[SriovOperatorConfig.yaml],No,No
-SR-IOV Network Operator,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-sriovsubscription-yaml[SriovSubscription.yaml],No,No
-SR-IOV Network Operator,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-sriovsubscriptionns-yaml[SriovSubscriptionNS.yaml],No,No
-SR-IOV Network Operator,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-sriovsubscriptionopergroup-yaml[SriovSubscriptionOperGroup.yaml],No,No
+Component,Reference CR,Description,Optional
+Baseline,`Network.yaml`,"Configures the default cluster network, specifying OVN Kubernetes settings like routing via the host. It also allows the definition of additional networks, including custom CNI configurations, and enables the use of MultiNetworkPolicy CRs for network policies across multiple networks.",No
+Baseline,`networkAttachmentDefinition.yaml`,Defines a NetworkAttachmentDefinition resource specifying network configuration details such as node selector and CNI configuration.,Yes
+Load balancer,`addr-pool.yaml`,Configures MetalLB to manage a pool of IP addresses with auto-assign enabled for dynamic allocation of IPs from the specified range.,No
+Load balancer,`bfd-profile.yaml`,"Configures bidirectional forwarding detection (BFD) with customized intervals, detection multiplier, and modes for quicker network fault detection and load balancing failover.",No
+Load balancer,`bgp-advr.yaml`,"Defines a BGP advertisement resource for MetalLB, specifying how an IP address pool is advertised to BGP peers. This enables fine-grained control over traffic routing and announcements.",No
+Load balancer,`bgp-peer.yaml`,"Defines a BGP peer in MetalLB, representing a BGP neighbor for dynamic routing.",No
+Load balancer,`community.yaml`,"Defines a MetalLB community, which groups one or more BGP communities under a named resource. Communities can be applied to BGP advertisements to control routing policies and change traffic routing.",No
+Load balancer,`metallb.yaml`,Defines the MetalLB resource in the cluster.,No
+Load balancer,`metallbNS.yaml`,Defines the metallb-system namespace in the cluster.,No
+Load balancer,`metallbOperGroup.yaml`,Defines the Operator group for the MetalLB Operator.,No
+Load balancer,`metallbSubscription.yaml`,Creates a subscription resource for the MetalLB Operator with manual approval for install plans.,No
+Multus - Tap CNI for rootless DPDK pods,`mc_rootless_pods_selinux.yaml`,Configures a MachineConfig resource which sets an SELinux boolean for the tap CNI plugin on worker nodes.,Yes
+NMState Operator,`NMState.yaml`,Defines an NMState resource that is used by the NMState Operator to manage node network configurations.,No
+NMState Operator,`NMStateNS.yaml`,Creates the NMState Operator namespace.,No
+NMState Operator,`NMStateOperGroup.yaml`,"Creates the Operator group in the openshift-nmstate namespace, allowing the NMState Operator to watch and manage resources.",No
+NMState Operator,`NMStateSubscription.yaml`,"Creates a subscription for the NMState Operator, managed through OLM.",No
+SR-IOV Network Operator,`sriovNetwork.yaml`,"Defines an SR-IOV network specifying network capabilities, IP address management (ipam), and the associated network namespace and resource.",No
+SR-IOV Network Operator,`sriovNetworkNodePolicy.yaml`,"Configures network policies for SR-IOV devices on specific nodes, including customization of device selection, VF allocation (numVfs), node-specific settings (nodeSelector), and priorities.",No
+SR-IOV Network Operator,`SriovOperatorConfig.yaml`,"Configures various settings for the SR-IOV Operator, including enabling the injector and Operator webhook, disabling pod draining, and defining the node selector for the configuration daemon.",No
+SR-IOV Network Operator,`SriovSubscription.yaml`,"Creates a subscription for the SR-IOV Network Operator, managed through OLM.",No
+SR-IOV Network Operator,`SriovSubscriptionNS.yaml`,Creates the SR-IOV Network Operator subscription namespace.,No
+SR-IOV Network Operator,`SriovSubscriptionOperGroup.yaml`,"Creates the Operator group for the SR-IOV Network Operator, allowing it to watch and manage resources in the target namespace.",No
 |====
diff --git a/modules/telco-core-crs-node-configuration.adoc b/modules/telco-core-crs-node-configuration.adoc
index 5a44755c0ade..c104a9df98f0 100644
--- a/modules/telco-core-crs-node-configuration.adoc
+++ b/modules/telco-core-crs-node-configuration.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-crs.adoc
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="node-configuration-crs_{context}"]
@@ -9,12 +9,12 @@
 .Node configuration CRs
 [cols="4*", options="header", format=csv]
 |====
-Component,Reference CR,Optional,New in this release
-Additional kernel modules,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-control-plane-load-kernel-modules-yaml[control-plane-load-kernel-modules.yaml],Yes,No
-Additional kernel modules,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-sctp_module_mc-yaml[sctp_module_mc.yaml],Yes,No
-Additional kernel modules,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-worker-load-kernel-modules-yaml[worker-load-kernel-modules.yaml],Yes,No
-Container mount namespace hiding,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-mount_namespace_config_master-yaml[mount_namespace_config_master.yaml],No,Yes
-Container mount namespace hiding,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-mount_namespace_config_worker-yaml[mount_namespace_config_worker.yaml],No,Yes
-Kdump enable,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-kdump-master-yaml[kdump-master.yaml],No,Yes
-Kdump enable,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-kdump-worker-yaml[kdump-worker.yaml],No,Yes
+Component,Reference CR,Description,Optional
+Additional kernel modules,`control-plane-load-kernel-modules.yaml`,Configures the kernel modules for control plane nodes.,Yes
+Additional kernel modules,`sctp_module_mc.yaml`,Loads the SCTP kernel module in worker nodes.,Yes
+Additional kernel modules,`worker-load-kernel-modules.yaml`,Configures kernel modules for worker nodes.,Yes
+Container mount namespace hiding,`mount_namespace_config_master.yaml`,Configures a mount namespace for sharing container-specific mounts between kubelet and CRI-O on control plane nodes.,No
+Container mount namespace hiding,`mount_namespace_config_worker.yaml`,Configures a mount namespace for sharing container-specific mounts between kubelet and CRI-O on worker nodes.,No
+Kdump enable,`kdump-master.yaml`,Configures kdump crash reporting on master nodes.,No
+Kdump enable,`kdump-worker.yaml`,Configures kdump crash reporting on worker nodes.,No
 |====
diff --git a/modules/telco-core-crs-resource-tuning.adoc b/modules/telco-core-crs-resource-tuning.adoc
index c6aefdb9d3b7..0131759f5c50 100644
--- a/modules/telco-core-crs-resource-tuning.adoc
+++ b/modules/telco-core-crs-resource-tuning.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-crs.adoc
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="resource-tuning-crs_{context}"]
@@ -9,6 +9,6 @@
 .Resource tuning CRs
 [cols="4*", options="header", format=csv]
 |====
-Component,Reference CR,Optional,New in this release
-System reserved capacity,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-control-plane-system-reserved-yaml[control-plane-system-reserved.yaml],Yes,No
+Component,Reference CR,Description,Optional
+System reserved capacity,`control-plane-system-reserved.yaml`,"Configures kubelet, enabling auto-sizing reserved resources for the control plane node pool.",Yes
 |====
diff --git a/modules/telco-core-crs-scheduling.adoc b/modules/telco-core-crs-scheduling.adoc
index d3ae65265f96..2793905bad0f 100644
--- a/modules/telco-core-crs-scheduling.adoc
+++ b/modules/telco-core-crs-scheduling.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-crs.adoc
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="scheduling-crs_{context}"]
@@ -9,11 +9,11 @@
 .Scheduling CRs
 [cols="4*", options="header", format=csv]
 |====
-Component,Reference CR,Optional,New in this release
-NUMA-aware scheduler,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-nrop-yaml[nrop.yaml],No,No
-NUMA-aware scheduler,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-nropsubscription-yaml[NROPSubscription.yaml],No,No
-NUMA-aware scheduler,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-nropsubscriptionns-yaml[NROPSubscriptionNS.yaml],No,No
-NUMA-aware scheduler,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-nropsubscriptionopergroup-yaml[NROPSubscriptionOperGroup.yaml],No,No
-NUMA-aware scheduler,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-sched-yaml[sched.yaml],No,No
-NUMA-aware scheduler,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-scheduler-yaml[Scheduler.yaml],No,No
+Component,Reference CR,Description,Optional
+NUMA-aware scheduler,`nrop.yaml`,"Enables the NUMA Resources Operator, aligning workloads with specific NUMA node configurations. Required for clusters with multi-NUMA nodes.",No
+NUMA-aware scheduler,`NROPSubscription.yaml`,"Creates a subscription for the NUMA Resources Operator, managed through OLM. Required for clusters with multi-NUMA nodes.",No
+NUMA-aware scheduler,`NROPSubscriptionNS.yaml`,Creates the NUMA Resources Operator subscription namespace. Required for clusters with multi-NUMA nodes.,No
+NUMA-aware scheduler,`NROPSubscriptionOperGroup.yaml`,"Creates the Operator group in the numaresources-operator namespace, allowing the NUMA Resources Operator to watch and manage resources. Required for clusters with multi-NUMA nodes.",No
+NUMA-aware scheduler,`sched.yaml`,Configures a topology-aware scheduler in the cluster that can handle NUMA-aware scheduling of pods across nodes.,No
+NUMA-aware scheduler,`Scheduler.yaml`,Configures control plane nodes as non-schedulable for workloads.,No
 |====
diff --git a/modules/telco-core-crs-storage.adoc b/modules/telco-core-crs-storage.adoc
index c8c6f5ef16da..7ae3f0531103 100644
--- a/modules/telco-core-crs-storage.adoc
+++ b/modules/telco-core-crs-storage.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-crs.adoc
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="storage-crs_{context}"]
@@ -9,10 +9,10 @@
 .Storage CRs
 [cols="4*", options="header", format=csv]
 |====
-Component,Reference CR,Optional,New in this release
-External ODF configuration,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-01-rook-ceph-external-cluster-details.secret-yaml[01-rook-ceph-external-cluster-details.secret.yaml],No,No
-External ODF configuration,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-02-ocs-external-storagecluster-yaml[02-ocs-external-storagecluster.yaml],No,No
-External ODF configuration,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-odfns-yaml[odfNS.yaml],No,No
-External ODF configuration,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-odfopergroup-yaml[odfOperGroup.yaml],No,No
-External ODF configuration,xref:../../telco_ref_design_specs/core/telco-core-ref-crs.adoc#telco-core-odfsubscription-yaml[odfSubscription.yaml],No,No
+Component,Reference CR,Description,Optional
+External ODF configuration,`01-rook-ceph-external-cluster-details.secret.yaml`,Defines a Secret resource containing base64-encoded configuration data for an external Ceph cluster in the openshift-storage namespace.,No
+External ODF configuration,`02-ocs-external-storagecluster.yaml`,Defines an OpenShift Container Storage (OCS) storage resource which configures the cluster to use an external storage back end.,No
+External ODF configuration,`odfNS.yaml`,Creates the monitored openshift-storage namespace for the OpenShift Data Foundation Operator.,No
+External ODF configuration,`odfOperGroup.yaml`,"Creates the Operator group in the openshift-storage namespace, allowing the OpenShift Data Foundation Operator to watch and manage resources.",No
+External ODF configuration,`odfSubscription.yaml`,"Creates the subscription for the OpenShift Data Foundation Operator in the openshift-storage namespace.",No
 |====
diff --git a/modules/telco-core-disconnected-environment.adoc b/modules/telco-core-disconnected-environment.adoc
new file mode 100644
index 000000000000..6f4a7e3c81ca
--- /dev/null
+++ b/modules/telco-core-disconnected-environment.adoc
@@ -0,0 +1,26 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-core-disconnected-environment_{context}"] += Disconnected environment + +New in this release:: +* No reference design updates in this release + +Descrption:: +Telco core clusters are expected to be installed in networks without direct access to the internet. +All container images needed to install, configure, and operate the cluster must be available in a disconnected registry. +This includes {product-title} images, Day 2 OLM Operator images, and application workload images. +The use of a disconnected environment provides multiple benefits, including: + +* Security - limiting access to the cluster +* Curated content – the registry is populated based on curated and approved updates for clusters + +Limits and requirements:: +* A unique name is required for all custom `CatalogSource` resources. +Do not reuse the default catalog names. + +Engineering considerations:: +* A valid time source must be configured as part of cluster installation diff --git a/modules/telco-core-gitops-operator-and-ztp-plugins.adoc b/modules/telco-core-gitops-operator-and-ztp-plugins.adoc new file mode 100644 index 000000000000..2a4e7ff4d952 --- /dev/null +++ b/modules/telco-core-gitops-operator-and-ztp-plugins.adoc @@ -0,0 +1,57 @@ +// Module included in the following assemblies: +// +// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc + +:_mod-docs-content-type: REFERENCE +[id="telco-core-gitops-operator-and-ztp-plugins_{context}"] += GitOps Operator and GitOps ZTP plugins + +New in this release:: +* No reference design updates in this release + +Description:: ++ +-- +The GitOps Operator provides a GitOps driven infrastructure for managing cluster deployment and configuration. +Cluster definitions and configuration are maintained in a Git repository. + +ZTP plugins provide support for generating `Installation` CRs from `SiteConfig` CRs and automatically wrapping configuration CRs in policies based on {rh-rhacm} `PolicyGenerator` CRs. + +The SiteConfig Operator provides improved support for generation of `Installation` CRs from `ClusterInstance` CRs. + +[IMPORTANT] +==== +Where possible, use `ClusterInstance` CRs for cluster installation instead of the `SiteConfig` with {ztp} plugin method. +==== + +You should structure the Git repository according to release version, with all necessary artifacts (`SiteConfig`, `ClusterInstance`, `PolicyGenerator`, and `PolicyGenTemplate`, and supporting reference CRs) included. +This enables deploying and managing multiple versions of the OpenShift platform and configuration versions to clusters simultaneously and through upgrades. + +The recommended Git structure keeps reference CRs in a directory separate from customer or partner provided content. +This means that you can import reference updates by simply overwriting existing content. +Customer or partner-supplied CRs can be provided in a parallel directory to the reference CRs for easy inclusion in the generated configuration policies. +-- + +Limits and requirements:: +* Each ArgoCD application supports up to 300 nodes. +Multiple ArgoCD applications can be used to achieve the maximum number of clusters supported by a single hub cluster. +* The `SiteConfig` CR must use the `extraManifests.searchPaths` field to reference the reference manifests. ++ +[NOTE] +==== +Since {product-title} 4.15, the `spec.extraManifestPath` field is deprecated. 
+==== + +Engineering considerations:: +* Set the `MachineConfigPool` (`mcp`) CR `paused` field to true during a cluster upgrade maintenance window and set the `maxUnavailable` field to the maximum tolerable value. +This prevents multiple cluster node reboots during upgrade, which results in a shorter overall upgrade. +When you unpause the `mcp` CR, all the configuration changes are applied with a single reboot. ++ +[NOTE] +==== +During installation, custom `mcp` CRs can be paused along with setting `maxUnavailable` to 100% to improve installation times. +==== + +* To avoid confusion or unintentional overwriting when updating content, you should use unique and distinguishable names for custom CRs in the `reference-crs/` directory under core-overlay and extra manifests in Git. +* The `SiteConfig` CR allows multiple extra-manifest paths. +When file names overlap in multiple directory paths, the last file found in the directory order list takes precedence. diff --git a/modules/telco-core-host-firmware-and-boot-loader-configuration.adoc b/modules/telco-core-host-firmware-and-boot-loader-configuration.adoc new file mode 100644 index 000000000000..57d262789d0c --- /dev/null +++ b/modules/telco-core-host-firmware-and-boot-loader-configuration.adoc @@ -0,0 +1,20 @@ +// Module included in the following assemblies: +// +// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc + +:_mod-docs-content-type: REFERENCE +[id="telco-core-host-firmware-and-boot-loader-configuration_{context}"] += Host firmware and boot loader configuration + +New in this release:: +* No reference design updates in this release + +Engineering considerations:: +// https://issues.redhat.com/browse/CNF-11806 +* Enabling secure boot is the recommended configuration. ++ +[NOTE] +==== +When secure boot is enabled, only signed kernel modules are loaded by the kernel. +Out-of-tree drivers are not supported. +==== diff --git a/modules/telco-core-load-balancer.adoc b/modules/telco-core-load-balancer.adoc index dd25306b0999..50d50c0097f4 100644 --- a/modules/telco-core-load-balancer.adoc +++ b/modules/telco-core-load-balancer.adoc @@ -1,38 +1,40 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc +// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-core-load-balancer_{context}"] = Load balancer New in this release:: -//CNF-11914 -* In {product-title} 4.17 or later, `frr-k8s` is now the default and fully supported Border Gateway Protocol (BGP) backend. -The deprecated `frr` BGP mode is still available. -You should upgrade clusters to use the `frr-k8s` backend. - -Description:: -MetalLB is a load-balancer implementation that uses standard routing protocols for bare-metal clusters. It enables a Kubernetes service to get an external IP address which is also added to the host network for the cluster. +// https://issues.redhat.com/browse/CNF-14150 +* FRR-K8s is now available under the Cluster Network Operator. + -[NOTE] +[IMPORTANT] ==== -Some use cases might require features not available in MetalLB, for example stateful load balancing. -Where necessary, use an external third party load balancer. -Selection and configuration of an external load balancer is outside the scope of this document. -When you use an external third party load balancer, ensure that it meets all performance and resource utilization requirements. 
+If you have custom `FRRConfiguration` CRs in the `metallb-system` namespace, you must move them under the `openshift-network-operator` namespace. ==== -Limits and requirements:: +Description:: +MetalLB is a load-balancer implementation for bare metal Kubernetes clusters that uses standard routing protocols. +It enables a Kubernetes service to get an external IP address which is also added to the host network for the cluster. +The MetalLB Operator deploys and manages the lifecycle of a MetalLB instance in a cluster. +Some use cases might require features not available in MetalLB, such as stateful load balancing. +Where necessary, you can use an external third party load balancer. +Selection and configuration of an external load balancer is outside the scope of this specification. +When an external third-party load balancer is used, the integration effort must include enough analysis to ensure all performance and resource utilization requirements are met. -* Stateful load balancing is not supported by MetalLB. An alternate load balancer implementation must be used if this is a requirement for workload CNFs. -* The networking infrastructure must ensure that the external IP address is routable from clients to the host network for the cluster. +Limits and requirements:: +* Stateful load balancing is not supported by MetalLB. +An alternate load balancer implementation must be used if this is a requirement for workload CNFs. +* You must ensure that the external IP address is routable from clients to the host network for the cluster. Engineering considerations:: -* MetalLB is used in BGP mode only for core use case models. -* For core use models, MetalLB is supported with only the OVN-Kubernetes network provider used in local gateway mode. See `routingViaHost` in the "Cluster Network Operator" section. -* BGP configuration in MetalLB varies depending on the requirements of the network and peers. -* Address pools can be configured as needed, allowing variation in addresses, aggregation length, auto assignment, and other relevant parameters. -* MetalLB uses BGP for announcing routes only. +* MetalLB is used in BGP mode only for telco core use models. +* For telco core use models, MetalLB is supported only with the OVN-Kubernetes network provider used in local gateway mode. +See `routingViaHost` in "Cluster Network Operator". +* BGP configuration in MetalLB is expected to vary depending on the requirements of the network and peers. +** You can configure address pools with variations in addresses, aggregation length, auto assignment, and so on. +** MetalLB uses BGP for announcing routes only. Only the `transmitInterval` and `minimumTtl` parameters are relevant in this mode. -Other parameters in the BFD profile should remain close to the default settings. Shorter values might lead to errors and impact performance. +Other parameters in the BFD profile should remain close to the defaults as shorter values can lead to false negatives and affect performance. 
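+
+For illustration only, the following sketch shows an `IPAddressPool` with a matching `BGPAdvertisement`; the resource names, address range, and peer reference are placeholder values, not validated reference values:
+
+[source,yaml]
+----
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: example-pool # placeholder name
+  namespace: metallb-system
+spec:
+  addresses:
+  - 192.0.2.0/24 # placeholder documentation range; replace with routable addresses
+  autoAssign: true
+---
+apiVersion: metallb.io/v1beta1
+kind: BGPAdvertisement
+metadata:
+  name: example-bgp-adv # placeholder name
+  namespace: metallb-system
+spec:
+  ipAddressPools:
+  - example-pool # must match the IPAddressPool name above
+  peers:
+  - example-peer # placeholder BGPPeer name
+----
+
+The pool name listed under `ipAddressPools` must match the `IPAddressPool` metadata name so that the advertisement announces addresses from that pool to the selected peers.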
diff --git a/modules/telco-core-logging.adoc b/modules/telco-core-logging.adoc index 445f802305ce..2f4a76306000 100644 --- a/modules/telco-core-logging.adoc +++ b/modules/telco-core-logging.adoc @@ -1,21 +1,22 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc +// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-core-logging_{context}"] = Logging New in this release:: -* Cluster Logging Operator 6.0 is new in this release. -Update your existing implementation to adapt to the new version of the API. +* No reference design updates in this release Description:: -The Cluster Logging Operator enables collection and shipping of logs off the node for remote archival and analysis. The reference configuration ships audit and infrastructure logs to a remote archive by using Kafka. +The Cluster Logging Operator enables collection and shipping of logs off the node for remote archival and analysis. +The reference configuration uses Kafka to ship audit and infrastructure logs to a remote archive. Limits and requirements:: Not applicable Engineering considerations:: * The impact of cluster CPU use is based on the number or size of logs generated and the amount of log filtering configured. -* The reference configuration does not include shipping of application logs. Inclusion of application logs in the configuration requires evaluation of the application logging rate and sufficient additional CPU resources allocated to the reserved set. +* The reference configuration does not include shipping of application logs. +The inclusion of application logs in the configuration requires you to evaluate the application logging rate and have sufficient additional CPU resources allocated to the reserved set. diff --git a/modules/telco-core-monitoring.adoc b/modules/telco-core-monitoring.adoc index fcfd25a0488a..716965ef4660 100644 --- a/modules/telco-core-monitoring.adoc +++ b/modules/telco-core-monitoring.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc +// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-core-monitoring_{context}"] @@ -10,16 +10,24 @@ New in this release:: * No reference design updates in this release Description:: -The {cmo-first} is included by default in {product-title} and provides monitoring (metrics, dashboards, and alerting) for the platform components and optionally user projects as well. + -[NOTE] -==== -The default handling of pod CPU and memory metrics is based on upstream Kubernetes `cAdvisor` and makes a tradeoff that prefers handling of stale data over metric accuracy. This leads to spiky data that will create false triggers of alerts over user-specified thresholds. {product-title} supports an opt-in dedicated service monitor feature creating an additional set of pod CPU and memory metrics that do not suffer from the spiky behavior. -For additional information, see link:https://access.redhat.com/solutions/7012719[Dedicated Service Monitors - Questions and Answers]. -==== +-- +The Cluster Monitoring Operator (CMO) is included by default in {product-title} and provides monitoring (metrics, dashboards, and alerting) for the platform components and optionally user projects. 
+You can customize defaults such as the metrics retention period and alerting rules.
+The default handling of pod CPU and memory metrics, based on upstream Kubernetes and cAdvisor, makes a tradeoff favoring stale data over metric accuracy.
+This leads to spikes in reporting, which can create false alerts, depending on the user-specified thresholds.
+{product-title} supports an opt-in Dedicated Service Monitor feature that creates an additional set of pod CPU and memory metrics that do not suffer from this behavior.
+For more information, see link:https://access.redhat.com/solutions/7012719[Dedicated Service Monitors - Questions and Answers (Red Hat Knowledgebase)].
+
+In addition to the default configuration, the following metrics are expected to be configured for telco core clusters:
+
+* Pod CPU and memory metrics and alerts for user workloads
+--
 
 Limits and requirements::
-* Monitoring configuration must enable the dedicated service monitor feature for accurate representation of pod metrics
+* You must enable the Dedicated Service Monitor feature to represent pod metrics accurately.
 
 Engineering considerations::
-* You configure the Prometheus retention period. The value used is a tradeoff between operational requirements for maintaining historical data on the cluster against CPU and storage resources. Longer retention periods increase the need for storage and require additional CPU to manage the indexing of data.
+* The Prometheus retention period is specified by the user.
+The value used is a tradeoff between operational requirements for maintaining historical data on the cluster against CPU and storage resources.
+Longer retention periods increase the need for storage and require additional CPU to manage data indexing.
diff --git a/modules/telco-core-networking.adoc b/modules/telco-core-networking.adoc
new file mode 100644
index 000000000000..f3a7224ef2a4
--- /dev/null
+++ b/modules/telco-core-networking.adoc
@@ -0,0 +1,61 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-core-networking_{context}"]
+= Networking
+
+The following diagram describes the telco core reference design networking configuration.
+
+.Telco core reference design networking configuration
+image::openshift-telco-core-rds-networking.png[Overview of the telco core reference design networking configuration]
+
+New in this release::
++
+--
+// https://issues.redhat.com/browse/CNF-12678
+* Support for disabling vendor plugins in the SR-IOV Operator
+
+// https://issues.redhat.com/browse/CNF-13768
+* link:https://access.redhat.com/articles/7090422[New Knowledgebase article on creating custom node firewall rules]
+
+// https://issues.redhat.com/browse/CNF-13981
+* Extended telco core RDS QE validation with MetalLB and EgressIP
+
+// https://issues.redhat.com/browse/CNF-14150
+* FRR-K8s is now available under the Cluster Network Operator.
++
+[NOTE]
+====
+If you have custom `FRRConfiguration` CRs in the `metallb-system` namespace, you must move them under the `openshift-network-operator` namespace.
+====
+--
+
+Description::
++
+--
+* The cluster is configured for dual-stack IP (IPv4 and IPv6).
+* The validated physical network configuration consists of two dual-port NICs.
+One NIC is shared among the primary CNI (OVN-Kubernetes) and IPVLAN and MACVLAN traffic, while the second one is dedicated to SR-IOV VF-based pod traffic.
+* A Linux bonding interface (`bond0`) is created in active-active IEEE 802.3ad LACP mode with the two NIC ports attached.
+The top-of-rack networking equipment must support and be configured for multi-chassis link aggregation (mLAG) technology.
+* VLAN interfaces are created on top of `bond0`, including for the primary CNI.
+* Bond and VLAN interfaces are created at cluster install time during the network configuration stage of the installation.
+Except for the `vlan0` VLAN used by the primary CNI, all other VLANs can be created during Day 2 activities with the Kubernetes NMState Operator.
+* MACVLAN and IPVLAN interfaces are created with their corresponding CNIs.
+They do not share the same base interface.
+For more information, see "Cluster Network Operator".
+* SR-IOV VFs are managed by the SR-IOV Network Operator.
+* To ensure consistent source IP addresses for pods behind a LoadBalancer Service, configure an `EgressIP` CR and specify the `podSelector` parameter.
+* You can implement service traffic separation by doing the following:
+.. Configure VLAN interfaces and specific kernel IP routes on the nodes using `NodeNetworkConfigurationPolicy` CRs.
+.. Create a MetalLB `BGPPeer` CR for each VLAN to establish peering with the remote BGP router.
+.. Define a MetalLB `BGPAdvertisement` CR to specify which IP address pools should be advertised to a selected list of `BGPPeer` resources.
++
+The following diagram illustrates how specific service IP addresses are advertised externally through specific VLAN interfaces.
+Service routes are defined in `BGPAdvertisement` CRs and configured with values for the `IPAddressPool1` and `BGPPeer1` fields.
+--
+
+.Telco core reference design MetalLB service separation
+image::openshift-telco-core-rds-metallb-service-separation.png[Telco core reference design MetalLB service separation]
diff --git a/modules/telco-core-nmstate-operator.adoc b/modules/telco-core-nmstate-operator.adoc
new file mode 100644
index 000000000000..b9dad0a1a284
--- /dev/null
+++ b/modules/telco-core-nmstate-operator.adoc
@@ -0,0 +1,23 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-core-nmstate-operator_{context}"]
+= NMState Operator
+
+New in this release::
+* No reference design updates in this release
+
+Description::
+The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across cluster nodes.
+It enables configuration of network interfaces, static IP addresses and DNS, VLANs, trunks, bonding, static routes, MTU, and promiscuous mode on secondary interfaces.
+The cluster nodes periodically report on the state of each node's network interfaces to the API server.
+
+Limits and requirements::
+Not applicable
+
+Engineering considerations::
+* Initial networking configuration is applied using `NMStateConfig` content in the installation CRs.
+The NMState Operator is used only when required for network updates.
+* When SR-IOV virtual functions are used for host networking, the NMState Operator (through `NodeNetworkConfigurationPolicy` CRs) is used to configure VF interfaces, such as VLANs and MTU.
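+
+For illustration only, the following `NodeNetworkConfigurationPolicy` sketch creates a VLAN interface on top of the `bond0` bond described in "Networking"; the policy name, VLAN ID, and node selector are placeholder values, not validated reference values:
+
+[source,yaml]
+----
+apiVersion: nmstate.io/v1
+kind: NodeNetworkConfigurationPolicy
+metadata:
+  name: bond0-vlan100 # placeholder name
+spec:
+  nodeSelector:
+    node-role.kubernetes.io/worker: "" # placeholder node selector
+  desiredState:
+    interfaces:
+    - name: bond0.100 # placeholder VLAN interface name
+      type: vlan
+      state: up
+      vlan:
+        base-iface: bond0
+        id: 100 # placeholder VLAN ID
+----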
diff --git a/modules/telco-core-node-configuration.adoc b/modules/telco-core-node-configuration.adoc
index dc3a4ee31448..fa365f24337d 100644
--- a/modules/telco-core-node-configuration.adoc
+++ b/modules/telco-core-node-configuration.adoc
@@ -1,40 +1,42 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="telco-core-node-configuration_{context}"]
 = Node configuration
 
 New in this release::
-//CNF-12344
-//CNF-12345
-* Container mount namespace encapsulation and kdump are now available in the {rds} RDS.
-
-Description::
-* Container mount namespace encapsulation creates a container mount namespace that reduces system mount scanning and is visible to kubelet and CRI-O.
-* kdump is an optional configuration that is enabled by default that captures debug information when a kernel panic occurs.
-The reference CRs which enable kdump include an increased memory reservation based on the set of drivers and kernel modules included in the reference configuration.
+* No reference design updates in this release
 
 Limits and requirements::
-* Use of kdump and container mount namespace encapsulation is made available through additional kernel modules.
-You should analyze these modules to determine impact on CPU load, system performance, and ability to meet required KPIs.
+* Analyze additional kernel modules to determine impact on CPU load, system performance, and ability to meet KPIs.
++
+--
+.Additional kernel modules
+|====
+|Feature|Description
+
+|Additional kernel modules
+a|Install the following kernel modules by using `MachineConfig` CRs to provide extended kernel functionality to CNFs.
 
-Engineering considerations::
-* Install the following kernel modules with `MachineConfig` CRs.
-These modules provide extended kernel functionality to cloud-native functions (CNFs).
+* sctp
+* ip_gre
+* ip6_tables
+* ip6t_REJECT
+* ip6table_filter
+* ip6table_mangle
+* iptable_filter
+* iptable_mangle
+* iptable_nat
+* xt_multiport
+* xt_owner
+* xt_REDIRECT
+* xt_statistic
+* xt_TCPMSS
 
-** sctp
-** ip_gre
-** ip6_tables
-** ip6t_REJECT
-** ip6table_filter
-** ip6table_mangle
-** iptable_filter
-** iptable_mangle
-** iptable_nat
-** xt_multiport
-** xt_owner
-** xt_REDIRECT
-** xt_statistic
-** xt_TCPMSS
+|Container mount namespace hiding|Reduces the frequency of kubelet housekeeping and eviction monitoring to reduce CPU usage.
+Creates a container mount namespace, visible to kubelet and CRI-O, to reduce system mount scanning overhead.
+|Kdump enable|Optional configuration, enabled by default, that captures debug information when a kernel panic occurs.
+|====
+--
diff --git a/modules/telco-core-openshift-data-foundation.adoc b/modules/telco-core-openshift-data-foundation.adoc
new file mode 100644
index 000000000000..3424c3612c95
--- /dev/null
+++ b/modules/telco-core-openshift-data-foundation.adoc
@@ -0,0 +1,22 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-core-openshift-data-foundation_{context}"]
+= Red Hat OpenShift Data Foundation
+
+New in this release::
+* No reference design updates in this release
+
+Description::
+{rh-storage-first} is a software-defined storage service for containers.
+For telco core clusters, storage support is provided by {rh-storage} storage services running externally to the application workload cluster.
+{rh-storage} supports separation of storage traffic using secondary CNI networks.
+
+Limits and requirements::
+* In an IPv4/IPv6 dual-stack networking environment, {rh-storage} uses IPv4 addressing.
+For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/planning_your_deployment/network-requirements_rhodf#network-requirements_rhodf[Network requirements].
+
+Engineering considerations::
+* {rh-storage} network traffic should be isolated from other traffic on a dedicated network, for example, by using VLAN isolation.
diff --git a/modules/telco-core-power-management.adoc b/modules/telco-core-power-management.adoc
index 8ba4c636b235..ca1a5f0bd456 100644
--- a/modules/telco-core-power-management.adoc
+++ b/modules/telco-core-power-management.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="telco-core-power-management_{context}"]
@@ -12,9 +12,12 @@ New in this release::
 
 Description::
 Use the Performance Profile to configure clusters with high power mode, low power mode, or mixed mode.
 The choice of power mode depends on the characteristics of the workloads running on the cluster, particularly how sensitive they are to latency.
+Configure the maximum C-state latency for a low-latency pod by using the per-pod power management C-states feature.
 
 Limits and requirements::
-* Power configuration relies on appropriate BIOS configuration, for example, enabling C-states and P-states. Configuration varies between hardware vendors.
+* Power configuration relies on appropriate BIOS configuration, for example, enabling C-states and P-states.
+Configuration varies between hardware vendors.
 
 Engineering considerations::
-* Latency: To ensure that latency-sensitive workloads meet their requirements, you will need either a high-power configuration or a per-pod power management configuration. Per-pod power management is only available for `Guaranteed` QoS Pods with dedicated pinned CPUs.
+* Latency: To ensure that latency-sensitive workloads meet requirements, you need either a high-power configuration or a per-pod power management configuration.
+Per-pod power management is only available for Guaranteed QoS pods with dedicated pinned CPUs.
+These functions generally require scalability, complex networking support, and resilient software-defined storage, and must support performance requirements that are less stringent and constrained than those of far-edge deployments such as RAN.
diff --git a/modules/telco-core-red-hat-advanced-cluster-management.adoc b/modules/telco-core-red-hat-advanced-cluster-management.adoc
new file mode 100644
index 000000000000..e304a1a5de54
--- /dev/null
+++ b/modules/telco-core-red-hat-advanced-cluster-management.adoc
@@ -0,0 +1,33 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-core-red-hat-advanced-cluster-management_{context}"]
+= Red Hat Advanced Cluster Management
+
+New in this release::
+* No reference design updates in this release
+
+Description::
++
+--
+{rh-rhacm-first} provides Multi Cluster Engine (MCE) installation and ongoing {ztp} lifecycle management for deployed clusters.
+You manage cluster configuration and upgrades declaratively by applying `Policy` custom resources (CRs) to clusters during maintenance windows.
+
+You apply policies with the {rh-rhacm} policy controller as managed by {cgu-operator-full}.
+Configuration, upgrades, and cluster status are managed through the policy controller.
+
+When installing managed clusters, {rh-rhacm} applies labels and initial ignition configuration to individual nodes in support of custom disk partitioning, allocation of roles, and allocation to machine config pools.
+You define these configurations with `SiteConfig` or `ClusterInstance` CRs.
+--
+
+Limits and requirements::
+
+* Hub cluster sizing is discussed in link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html-single/install/index#sizing-your-cluster[Sizing your cluster].
+
+* {rh-rhacm} scaling limits are described in link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html-single/install/index#performance-and-scalability[Performance and Scalability].
+
+Engineering considerations::
+* When managing multiple clusters with unique content per installation, site, or deployment, using {rh-rhacm} hub templating is strongly recommended.
+{rh-rhacm} hub templating allows you to apply a consistent set of policies to clusters while providing for unique values per installation.
diff --git a/modules/telco-core-scalability.adoc b/modules/telco-core-scalability.adoc
index 00a4ef9034bd..0d9bf84ea0a4 100644
--- a/modules/telco-core-scalability.adoc
+++ b/modules/telco-core-scalability.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="telco-core-scalability_{context}"]
@@ -9,5 +9,8 @@
 New in this release::
 * No reference design updates in this release
 
+Description::
+Scaling of workloads is described in "Application workloads".
+
 Limits and requirements::
-* Cluster should scale to at least 120 nodes.
+* Clusters can scale to at least 120 nodes.
diff --git a/modules/telco-core-scheduling.adoc b/modules/telco-core-scheduling.adoc index caf1e1cdec4c..fe8bfe7c05fd 100644 --- a/modules/telco-core-scheduling.adoc +++ b/modules/telco-core-scheduling.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc +// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-core-scheduling_{context}"] @@ -10,17 +10,31 @@ New in this release:: * No reference design updates in this release Description:: -* The scheduler is a cluster-wide component responsible for selecting the right node for a given workload. It is a core part of the platform and does not require any specific configuration in the common deployment scenarios. However, there are few specific use cases described in the following section. ++ +-- +The scheduler is a cluster-wide component responsible for selecting the correct node for a given workload. +It is a core part of the platform and does not require any specific configuration in the common deployment scenarios. +However, a few specific use cases are described in the following section. + NUMA-aware scheduling can be enabled through the NUMA Resources Operator. For more information, see "Scheduling NUMA-aware workloads". +-- Limits and requirements:: -* The default scheduler does not understand the NUMA locality of workloads. It only knows about the sum of all free resources on a worker node. This might cause workloads to be rejected when scheduled to a node with the topology manager policy set to `single-numa-node` or `restricted`. -** For example, consider a pod requesting 6 CPUs and being scheduled to an empty node that has 4 CPUs per NUMA node. The total allocatable capacity of the node is 8 CPUs and the scheduler will place the pod there. The node local admission will fail, however, as there are only 4 CPUs available in each of the NUMA nodes. -** All clusters with multi-NUMA nodes are required to use the NUMA Resources Operator. Use the `machineConfigPoolSelector` field in the `KubeletConfig` CR to select all nodes where NUMA aligned scheduling is needed. -* All machine config pools must have consistent hardware configuration for example all nodes are expected to have the same NUMA zone count. +* The default scheduler does not understand the NUMA locality of workloads. +It only knows about the sum of all free resources on a worker node. +This might cause workloads to be rejected when scheduled to a node with the topology manager policy set to `single-numa-node` or `restricted`. +For more information, see "Topology Manager policies". +** For example, consider a pod requesting 6 CPUs that is scheduled to an empty node that has 4 CPUs per NUMA node. +The total allocatable capacity of the node is 8 CPUs. The scheduler places the pod on the empty node. +The node local admission fails, as there are only 4 CPUs available in each of the NUMA nodes. +* All clusters with multi-NUMA nodes are required to use the NUMA Resources Operator. +See "Installing the NUMA Resources Operator" for more information. +Use the `machineConfigPoolSelector` field in the `KubeletConfig` CR to select all nodes where NUMA aligned scheduling is required. +* All machine config pools must have consistent hardware configuration. +For example, all nodes are expected to have the same NUMA zone count. 
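+
+A minimal sketch of the `KubeletConfig` CR referenced in the limits above, selecting the machine config pools where NUMA-aligned scheduling is required; the CR name and pool label are hypothetical, and the Topology Manager policy depends on your workload requirements:
+
+[source,yaml]
+----
+apiVersion: machineconfiguration.openshift.io/v1
+kind: KubeletConfig
+metadata:
+  name: numa-aligned-kubeletconfig # hypothetical name
+spec:
+  machineConfigPoolSelector:
+    matchLabels:
+      custom-kubelet: numa-aligned # hypothetical label applied to the target MachineConfigPool
+  kubeletConfig:
+    cpuManagerPolicy: static
+    cpuManagerReconcilePeriod: 5s
+    topologyManagerPolicy: single-numa-node
+----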
Engineering considerations:: -* Pods might require annotations for correct scheduling and isolation. For more information on annotations, see "CPU partitioning and performance tuning". - +* Pods might require annotations for correct scheduling and isolation. +For more information about annotations, see "CPU partitioning and performance tuning". * You can configure SR-IOV virtual function NUMA affinity to be ignored during scheduling by using the `excludeTopology` field in `SriovNetworkNodePolicy` CR. diff --git a/modules/telco-core-security.adoc b/modules/telco-core-security.adoc index fb30ace2f7b5..548d7773bf2c 100644 --- a/modules/telco-core-security.adoc +++ b/modules/telco-core-security.adoc @@ -1,31 +1,69 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc +// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-core-security_{context}"] = Security New in this release:: -//CNF-11806 -* Secure boot host firmware setting is now recommended for telco core clusters. -For more information, see "Host firmware and boot loader configuration". +// https://issues.redhat.com/browse/CNF-13768 +* link:https://access.redhat.com/articles/7090422[New knowledgebase article on creating custom node firewall rules] Description:: -You should harden clusters against multiple attack vectors. ++ +-- +Telco customers are security conscious and require clusters to be hardened against multiple attack vectors. In {product-title}, there is no single component or feature responsible for securing a cluster. Use the following security-oriented features and configurations to secure your clusters: * **SecurityContextConstraints (SCC)**: All workload pods should be run with `restricted-v2` or `restricted` SCC. -* **Seccomp**: All pods should be run with the `RuntimeDefault` (or stronger) seccomp profile. -* **Rootless DPDK pods**: Many user-plane networking (DPDK) CNFs require pods to run with root privileges. With this feature, a conformant DPDK pod can be run without requiring root privileges. +* **Seccomp**: All pods should run with the `RuntimeDefault` (or stronger) seccomp profile. +* **Rootless DPDK pods**: Many user-plane networking (DPDK) CNFs require pods to run with root privileges. +With this feature, a conformant DPDK pod can run without requiring root privileges. Rootless DPDK pods create a tap device in a rootless pod that injects traffic from a DPDK application to the kernel. -* **Storage**: The storage network should be isolated and non-routable to other cluster networks. See the "Storage" section for additional details. +* **Storage**: The storage network should be isolated and non-routable to other cluster networks. +See the "Storage" section for additional details. + +Refer to link:https://access.redhat.com/articles/7090422[Custom nftable firewall rules in OpenShift] for a supported method of implementing custom nftables firewall rules in OpenShift cluster nodes. +This article is intended for cluster administrators who are responsible for managing network security policies in OpenShift environments. +It is crucial to carefully consider the operational implications before deploying this method, including: + +* **Early application**: The rules are applied at boot time, before the network is fully operational. +Ensure the rules don't inadvertently block essential services required during the boot process. 
+
+* **Risk of misconfiguration**: Errors in your custom rules can have unintended consequences, such as degraded performance, blocked legitimate traffic, or isolated nodes.
+Thoroughly test your rules in a non-production environment before deploying them to your main cluster.
+
+* **External endpoints**: OpenShift requires access to external endpoints to function.
+For more information about the firewall allowlist, see "Configuring your firewall for {product-title}".
+Ensure that cluster nodes are permitted access to those endpoints.
+
+* **Node reboot**: Unless node disruption policies are configured, applying the `MachineConfig` CR with the required firewall settings causes a node reboot.
+Be aware of this impact and schedule a maintenance window accordingly.
+For more information, see "Using node disruption policies to minimize disruption from machine config changes".
++
+[NOTE]
+====
+Node disruption policies are available in {product-title} 4.17 and later.
+====
+
+* **Network flow matrix**: For more information about managing ingress traffic, see "{product-title} network flow matrix".
+You can restrict ingress traffic to essential flows to improve network security.
+The matrix provides insights into base cluster services but excludes traffic generated by Day-2 Operators.
+
+* **Cluster version updates and upgrades**: Exercise caution when updating or upgrading OpenShift clusters.
+Recent changes to the platform's firewall requirements might require adjustments to network port permissions.
+Although the documentation provides guidelines, note that these requirements can evolve over time.
+To minimize disruptions, you should test any updates or upgrades in a staging environment before applying them in production.
+This helps you to identify and address potential compatibility issues related to firewall configuration changes.
+--
 
 Limits and requirements::
-* Rootless DPDK pods requires the following additional configuration steps:
-** Configure the TAP plugin with the `container_t` SELinux context.
-** Enable the `container_use_devices` SELinux boolean on the hosts.
+* Rootless DPDK pods require the following additional configuration:
+** Configure the `container_t` SELinux context for the tap plugin.
+** Enable the `container_use_devices` SELinux boolean for the cluster host.
 
 Engineering considerations::
-* For rootless DPDK pod support, the SELinux boolean `container_use_devices` must be enabled on the host for the TAP device to be created. This introduces a security risk that is acceptable for short to mid-term use. Other solutions will be explored.
+* For rootless DPDK pod support, enable the SELinux `container_use_devices` boolean on the host to allow the tap device to be created.
+This introduces an acceptable security risk.
diff --git a/modules/telco-core-service-mesh.adoc b/modules/telco-core-service-mesh.adoc
index ac37352d107f..315bbb3a3650 100644
--- a/modules/telco-core-service-mesh.adoc
+++ b/modules/telco-core-service-mesh.adoc
@@ -1,17 +1,13 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="telco-core-service-mesh_{context}"]
-= Service Mesh
+= Service mesh
 
 Description::
-{rds-caps} cloud-native functions (CNFs) typically require a service mesh implementation.
-+ -[NOTE] -==== +Telco core cloud-native functions (CNFs) typically require a service mesh implementation. Specific service mesh features and performance requirements are dependent on the application. The selection of service mesh implementation and configuration is outside the scope of this documentation. You must account for the impact of service mesh on cluster resource usage and performance, including additional latency introduced in pod networking, in your implementation. -==== diff --git a/modules/telco-core-signaling-workloads.adoc b/modules/telco-core-signaling-workloads.adoc new file mode 100644 index 000000000000..c9042e6b5a9a --- /dev/null +++ b/modules/telco-core-signaling-workloads.adoc @@ -0,0 +1,11 @@ +// Module included in the following assemblies: +// +// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc + +:_mod-docs-content-type: REFERENCE +[id="telco-core-signaling-workloads_{context}"] += Signaling workloads + +Signaling workloads typically use SCTP, REST, gRPC or similar TCP or UDP protocols. +Signaling workloads support hundreds of thousands of transactions per second (TPS) by using a secondary multus CNI configured as MACVLAN or SR-IOV interface. +These workloads can run in pods with either guaranteed or burstable QoS. diff --git a/modules/telco-core-software-stack.adoc b/modules/telco-core-software-stack.adoc index 1d2cd1a7a801..528fad6feb17 100644 --- a/modules/telco-core-software-stack.adoc +++ b/modules/telco-core-software-stack.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-software-artifacts.adoc +// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-core-software-stack_{context}"] @@ -13,21 +13,27 @@ The Red{nbsp}Hat telco core {product-version} solution has been validated using |==== |Component |Software version +|{rh-rhacm-first} +|2.12^1^ + |Cluster Logging Operator -|6.0 +|6.1^2^ |{rh-storage} -|4.17 +|4.18 -|SR-IOV Operator -|4.17 +|SR-IOV Network Operator +|4.18 |MetalLB -|4.17 +|4.18 |NMState Operator -|4.17 +|4.18 |NUMA-aware scheduler -|4.17 +|4.18 |==== +[1] This table will be updated when the aligned {rh-rhacm} version 2.13 is released. + +[2] This table will be updated when the aligned Cluster Logging Operator 6.2 is released. diff --git a/modules/telco-core-sr-iov.adoc b/modules/telco-core-sr-iov.adoc new file mode 100644 index 000000000000..f26bc44961dc --- /dev/null +++ b/modules/telco-core-sr-iov.adoc @@ -0,0 +1,39 @@ +// Module included in the following assemblies: +// +// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc + +:_mod-docs-content-type: REFERENCE +[id="telco-core-sr-iov_{context}"] += SR-IOV + +New in this release:: +// https://issues.redhat.com/browse/CNF-12678 +* You can now create virtual functions for Mellanox NICs with the SR-IOV Network Operator when secure boot is enabled in the cluster host. +Before you can create the virtual functions, you must first skip the firmware configuration for the Mellanox NIC and manually allocate the number of virtual functions in the firmware before switching the system to secure boot. + +Description:: +SR-IOV enables physical functions (PFs) to be divided into multiple virtual functions (VFs). +VFs can then be assigned to multiple pods to achieve higher throughput performance while keeping the pods isolated. 
+The SR-IOV Network Operator provisions and manages SR-IOV CNI, network device plugin, and other components of the SR-IOV stack. + +Limits and requirements:: +* Only certain network interfaces are supported. +See "Supported devices" for more information. + +* Enabling SR-IOV and IOMMU: the SR-IOV Network Operator automatically enables IOMMU on the kernel command line. + +* SR-IOV VFs do not receive link state updates from the PF. +If a link down detection is required, it must be done at the protocol level. + +* `MultiNetworkPolicy` CRs can be applied to `netdevice` networks only. +This is because the implementation uses iptables, which cannot manage vfio interfaces. + +Engineering considerations:: +* SR-IOV interfaces in `vfio` mode are typically used to enable additional secondary networks for applications that require high throughput or low latency. +* The `SriovOperatorConfig` CR must be explicitly created. +This CR is included in the reference configuration policies, which causes it to be created during initial deployment. +* NICs that do not support firmware updates with UEFI secure boot or kernel lockdown must be preconfigured with sufficient virtual functions (VFs) enabled to support the number of VFs required by the application workload. +For Mellanox NICs, you must disable the Mellanox vendor plugin in the SR-IOV Network Operator. +See "Configuring an SR-IOV network device" for more information. +* To change the MTU value of a VF after the pod has started, do not configure the `SriovNetworkNodePolicy` MTU field. +Instead, use the Kubernetes NMState Operator to set the MTU of the related PF. diff --git a/modules/telco-core-storage.adoc b/modules/telco-core-storage.adoc index 9e5a75d7ecc3..6f72cb429afb 100644 --- a/modules/telco-core-storage.adoc +++ b/modules/telco-core-storage.adoc @@ -1,32 +1,28 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc +// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-core-storage_{context}"] = Storage -Cloud native storage services can be provided by multiple solutions including {rh-storage} from Red Hat or third parties. - -[id="telco-core-rh-storage_{context}"] -== {rh-storage} - New in this release:: * No reference design updates in this release Description:: -{rh-storage-first} is a software-defined storage service for containers. -For {rds-caps} clusters, storage support is provided by {rh-storage} storage services running externally to the application workload cluster. - -Limits and requirements:: -* In an IPv4/IPv6 dual-stack networking environment, {rh-storage} uses IPv4 addressing. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13/html-single/4.13_release_notes/index#support_openshift_dual_stack_with_odf_using_ipv4[Support OpenShift dual stack with {rh-storage} using IPv4]. ++ +-- +Cloud native storage services can be provided by {rh-storage-first} or other third-party solutions. -Engineering considerations:: -* {rh-storage} network traffic should be isolated from other traffic on a dedicated network, for example, by using VLAN isolation. +{rh-storage} is a Ceph-based software-defined storage solution for containers. +It provides block storage, file system storage, and on-premise object storage, which can be dynamically provisioned for both persistent and non-persistent data requirements. 
+Telco core applications require persistent storage.
 
-* Other storage solutions can be used to provide persistent storage for core clusters.
-+
 [NOTE]
 ====
-The configuration and integration of these solutions is outside the scope of the {rds} RDS. Integration of the storage solution into the core cluster must include correct sizing and performance analysis to ensure the storage meets overall performance and resource utilization requirements.
+Storage data might not be encrypted in flight.
+To reduce risk, isolate the storage network from other cluster networks.
+The storage network must not be reachable or routable from other cluster networks.
+Only nodes directly attached to the storage network should be allowed to access it.
 ====
+--
diff --git a/modules/telco-core-topology-aware-lifecycle-manager.adoc b/modules/telco-core-topology-aware-lifecycle-manager.adoc
new file mode 100644
index 000000000000..8361c5debf34
--- /dev/null
+++ b/modules/telco-core-topology-aware-lifecycle-manager.adoc
@@ -0,0 +1,26 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-core-topology-aware-lifecycle-manager_{context}"]
+= Topology Aware Lifecycle Manager
+
+New in this release::
+* No reference design updates in this release
+
+Description::
+{cgu-operator-full} is an Operator that runs only on the hub cluster.
+{cgu-operator} manages how changes, including cluster and Operator upgrades and configuration changes, are rolled out to managed clusters in the network.
+{cgu-operator} has the following core features:
+* Provides sequenced updates of cluster configurations and upgrades ({product-title} and Operators) as defined by cluster policies.
+* Provides for deferred application of cluster updates.
+* Supports progressive rollout of policy updates to sets of clusters in user-configurable batches.
+* Allows for per-cluster actions by adding `ztp-done` or similar user-defined labels to clusters.
+
+Limits and requirements::
+* Supports concurrent cluster deployments in batches of 400.
+
+Engineering considerations::
+* Only policies with the `ran.openshift.io/ztp-deploy-wave` annotation are applied by {cgu-operator} during initial cluster installation.
+* Any policy can be remediated by {cgu-operator} under control of a user-created `ClusterGroupUpgrade` CR.
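+
+A minimal sketch of a user-created `ClusterGroupUpgrade` CR of the kind referenced in the engineering considerations above; the CR name, namespace, policy name, and cluster label are hypothetical:
+
+[source,yaml]
+----
+apiVersion: ran.openshift.io/v1alpha1
+kind: ClusterGroupUpgrade
+metadata:
+  name: cgu-core-config-update # hypothetical name
+  namespace: default
+spec:
+  enable: true
+  managedPolicies:
+  - core-config-policy # hypothetical policy name
+  clusterLabelSelectors:
+  - matchLabels:
+      group: telco-core # hypothetical cluster label
+  remediationStrategy:
+    maxConcurrency: 2
+    timeout: 240
+----
+
+{cgu-operator} then remediates the selected policy across all matching clusters, `maxConcurrency` clusters at a time, until the rollout completes or times out.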
diff --git a/scalability_and_performance/cluster-compare/understanding-the-cluster-compare-plugin.adoc b/scalability_and_performance/cluster-compare/understanding-the-cluster-compare-plugin.adoc index 8148f850af31..2e163a36ba71 100644 --- a/scalability_and_performance/cluster-compare/understanding-the-cluster-compare-plugin.adoc +++ b/scalability_and_performance/cluster-compare/understanding-the-cluster-compare-plugin.adoc @@ -17,7 +17,7 @@ include::modules/understanding-a-reference-config.adoc[leveloffset=+1] [role="_additional-resources"] == Additional resources -* xref:../../scalability_and_performance/telco_ref_design_specs/telco-ref-design-specs-overview.adoc#telco-ref-design-overview_telco_ref_design_specs[Reference design specifications for telco 5G deployments] +* xref:../../scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc#telco-ran-du-ref-design-specs[Telco RAN DU reference design specification for {product-title}] diff --git a/scalability_and_performance/cluster-compare/using-the-cluster-compare-plugin.adoc b/scalability_and_performance/cluster-compare/using-the-cluster-compare-plugin.adoc index 825ba3184d0a..09e9251d6b79 100644 --- a/scalability_and_performance/cluster-compare/using-the-cluster-compare-plugin.adoc +++ b/scalability_and_performance/cluster-compare/using-the-cluster-compare-plugin.adoc @@ -25,4 +25,4 @@ include::modules/using-cluster-compare-telco-ref.adoc[leveloffset=+1] [id="additional-resources_{context}"] == Additional resources * xref:../../scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc#using-cluster-compare-telco_ref_ran-ref-design-crs[Comparing a cluster with the telco RAN DU reference configuration] -* xref:../../scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-crs.adoc#using-cluster-compare-telco_ref_ran-core-ref-design-crs[Comparing a cluster with the telco core reference configuration] +* xref:../../scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc#using-cluster-compare-telco_ref_telco-core[Comparing a cluster with the telco core reference configuration] diff --git a/scalability_and_performance/index.adoc b/scalability_and_performance/index.adoc index 6760db1a0454..18e0f2dd7b24 100644 --- a/scalability_and_performance/index.adoc +++ b/scalability_and_performance/index.adoc @@ -30,7 +30,7 @@ xref:../scalability_and_performance/recommended-performance-scale-practices/reco xref:../scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc#telco-ran-du-ref-design-specs[Telco RAN DU reference design specification for {product-title} {product-version}] -xref:../scalability_and_performance/telco_ref_design_specs/core/telco-core-rds-overview.adoc#telco-core-cluster-service-based-architecture-and-networking-topology_core-ref-design-overview[Telco core reference design specification] +xref:../scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc#telco-core-ref-design-specs[Telco core reference design specification] [discrete] == Planning, optimization, and measurement diff --git a/scalability_and_performance/telco_ref_design_specs/core/_attributes b/scalability_and_performance/telco_core_ref_design_specs/_attributes similarity index 100% rename from scalability_and_performance/telco_ref_design_specs/core/_attributes rename to scalability_and_performance/telco_core_ref_design_specs/_attributes diff --git a/scalability_and_performance/telco_ref_design_specs/core/images 
b/scalability_and_performance/telco_core_ref_design_specs/images similarity index 100% rename from scalability_and_performance/telco_ref_design_specs/core/images rename to scalability_and_performance/telco_core_ref_design_specs/images diff --git a/scalability_and_performance/telco_ref_design_specs/core/modules b/scalability_and_performance/telco_core_ref_design_specs/modules similarity index 100% rename from scalability_and_performance/telco_ref_design_specs/core/modules rename to scalability_and_performance/telco_core_ref_design_specs/modules diff --git a/scalability_and_performance/telco_ref_design_specs/core/snippets b/scalability_and_performance/telco_core_ref_design_specs/snippets similarity index 100% rename from scalability_and_performance/telco_ref_design_specs/core/snippets rename to scalability_and_performance/telco_core_ref_design_specs/snippets diff --git a/scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc b/scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc new file mode 100644 index 000000000000..7325368043c0 --- /dev/null +++ b/scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc @@ -0,0 +1,235 @@ +:_mod-docs-content-type: ASSEMBLY +:telco-core: +[id="telco-core-ref-design-specs"] += Telco core reference design specifications +include::_attributes/common-attributes.adoc[] +:context: telco-core + +toc::[] + +The telco core reference design specification (RDS) configures an {product-title} cluster running on commodity hardware to host telco core workloads. + +include::modules/telco-core-rds-product-version-use-model-overview.adoc[leveloffset=+1] + +include::modules/telco-core-about-the-telco-core-cluster-use-model.adoc[leveloffset=+1] + +include::modules/telco-ran-core-ref-design-spec.adoc[leveloffset=+2] + +include::modules/telco-deviations-from-the-ref-design.adoc[leveloffset=+2] + +include::modules/telco-core-common-baseline-model.adoc[leveloffset=+1] + +include::modules/telco-core-cluster-common-use-model-engineering-considerations.adoc[leveloffset=+1] + +include::modules/telco-core-application-workloads.adoc[leveloffset=+2] + +include::modules/telco-core-signaling-workloads.adoc[leveloffset=+2] + +[id="telco-core-rds-components"] +== Telco core RDS components + +The following sections describe the various {product-title} components and configurations that you use to configure and deploy clusters to run telco core workloads. 
+ +include::modules/telco-core-cpu-partitioning-and-performance-tuning.adoc[leveloffset=+2] + + +[role="_additional-resources"] +.Additional resources + +* xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile] + +* xref:../../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-du-configuring-host-firmware-requirements_sno-configure-for-vdu[Configuring host firmware for low latency and high performance] + +* xref:../../installing/install_config/enabling-cgroup-v1.adoc#nodes-clusters-cgroups-2-install_nodes-cluster-cgroups-1[Enabling Linux cgroup v1 during installation] + +include::modules/telco-core-service-mesh.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../service_mesh/v2x/ossm-about.adoc#ossm-about[About OpenShift Service Mesh] + +include::modules/telco-core-networking.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../networking/understanding-networking.adoc#understanding-networking[Understanding networking] + +include::modules/telco-core-cluster-network-operator.adoc[leveloffset=+3] + +[role="_additional-resources"] +.Additional resources + +* xref:../../networking/networking_operators/cluster-network-operator.adoc#nw-cluster-network-operator_cluster-network-operator[Cluster Network Operator] + +include::modules/telco-core-load-balancer.adoc[leveloffset=+3] + +[role="_additional-resources"] +.Additional resources + +* xref:../../networking/networking_operators/metallb-operator/about-metallb.adoc#nw-metallb-when-metallb_about-metallb-and-metallb-operator[When to use MetalLB] + +include::modules/telco-core-sr-iov.adoc[leveloffset=+3] + +[role="_additional-resources"] +.Additional resources + +* xref:../../networking/hardware_networks/about-sriov.adoc#about-sriov[About Single Root I/O Virtualization (SR-IOV) hardware networks] + +* xref:../../networking/hardware_networks/about-sriov.adoc#supported-devices_about-sriov[Supported devices] + +* xref:../../networking/hardware_networks/configuring-sriov-device.html#nw-sriov-nic-mlx-secure-boot_configuring-sriov-device[Configuring the SR-IOV Network Operator on Mellanox cards when Secure Boot is enabled] + +include::modules/telco-core-nmstate-operator.adoc[leveloffset=+3] + +[role="_additional-resources"] +.Additional resources + +* xref:../../networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.adoc#k8s-nmstate-about-the-k8s-nmstate-operator[Kubernetes NMState Operator] + +include::modules/telco-core-logging.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* link:https://docs.openshift.com/container-platform/4.17/observability/logging/logging-6.0/log6x-about.html[Logging 6.0] + +include::modules/telco-core-power-management.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../rest_api/node_apis/performanceprofile-performance-openshift-io-v2.adoc#spec-workloadhints[performance.openshift.io/v2 API reference] + +* xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes] + +* xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes that run colocated high and low priority workloads] 
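+
+As a hedged illustration of the per-pod power management capability described in the power management module, the following pod sketch disables C-states and requests the performance governor for its pinned CPUs.
+The image, runtime class, and profile names are hypothetical, and the pod must have Guaranteed QoS with whole dedicated CPUs and a performance profile that enables the `perPodPowerManagement` workload hint.
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name: low-latency-workload # hypothetical name
+  annotations:
+    # Assumes per-pod power management is enabled in the PerformanceProfile;
+    # a value such as "max_latency:10" caps the C-state exit latency instead.
+    cpu-c-states.crio.io: "disable"
+    cpu-freq-governor.crio.io: "performance"
+spec:
+  runtimeClassName: performance-telco-core-profile # assumes a PerformanceProfile named "telco-core-profile"
+  containers:
+  - name: workload
+    image: registry.example.com/low-latency-app:latest # hypothetical image
+    resources:
+      requests:
+        cpu: "2"
+        memory: "1Gi"
+      limits:
+        cpu: "2"
+        memory: "1Gi"
+----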
+ +include::modules/telco-core-storage.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../storage/persistent_storage/persistent-storage-ocs.adoc#red-hat-openshift-data-foundation[{rh-storage-first}] + +include::modules/telco-core-openshift-data-foundation.adoc[leveloffset=+3] + +include::modules/telco-core-additional-storage-solutions.adoc[leveloffset=+3] + +[id="telco-reference-core-deployment-components_{context}"] +=== Telco core deployment components + +The following sections describe the various {product-title} components and configurations that you use to configure the hub cluster with {rh-rhacm-first}. + +include::modules/telco-core-red-hat-advanced-cluster-management.adoc[leveloffset=+3] + +[role="_additional-resources"] +.Additional resources + +* xref:../../edge_computing/ztp-deploying-far-edge-clusters-at-scale.adoc#about-ztp_ztp-deploying-far-edge-clusters-at-scale[Using {ztp} to provision clusters at the network far edge] + +* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes[Red Hat Advanced Cluster Management for Kubernetes] + +include::modules/telco-core-topology-aware-lifecycle-manager.adoc[leveloffset=+3] + +[role="_additional-resources"] +.Additional resources + +* xref:../../edge_computing/cnf-talm-for-cluster-upgrades.adoc#cnf-talm-for-cluster-updates[Updating managed clusters with the {cgu-operator-full}] + +include::modules/telco-core-gitops-operator-and-ztp-plugins.adoc[leveloffset=+3] + +[role="_additional-resources"] +.Additional resources + +* xref:../../edge_computing/ztp-preparing-the-hub-cluster.adoc#ztp-preparing-the-ztp-git-repository-ver-ind_ztp-preparing-the-hub-cluster[Preparing the {ztp} site configuration repository for version independence] + +* xref:../../edge_computing/policygentemplate_for_ztp/ztp-advanced-policy-config.adoc#ztp-adding-new-content-to-gitops-ztp_ztp-advanced-policy-config[Adding custom content to the {ztp} pipeline] + +include::modules/telco-core-monitoring.adoc[leveloffset=+3] + +[role="_additional-resources"] +.Additional resources + +* xref:../../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] + +include::modules/telco-core-scheduling.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../scalability_and_performance/cnf-numa-aware-scheduling.adoc#installing-the-numa-resources-operator_numa-aware[Installing the NUMA Resources Operator] + +* xref:../../scalability_and_performance/cnf-numa-aware-scheduling.adoc#cnf-numa-aware-scheduling[Scheduling NUMA-aware workloads] + +xref:../../scalability_and_performance/using-cpu-manager.adoc#topology_manager_policies_using-cpu-manager-and-topology_manager[Topology Manager policies] + +include::modules/telco-core-node-configuration.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-sno-du-enabling-kdump_sno-configure-for-vdu[Automatic kernel crash dumps with kdump] + +* xref:../../scalability_and_performance/optimization/optimizing-cpu-usage.adoc#optimizing-cpu-usage[Optimizing CPU usage with mount namespace encapsulation] + +include::modules/telco-core-host-firmware-and-boot-loader-configuration.adoc[leveloffset=+2] + +include::modules/telco-core-disconnected-environment.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* 
xref:../../disconnected/updating/index.adoc#about-disconnected-updates[About cluster updates in a disconnected environment] + + +include::modules/telco-core-agent-based-installer.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#installing-with-agent-based-installer[Installing an {product-title} cluster with the Agent-based Installer] + +include::modules/telco-core-security.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall_configuring-firewall[Configuring your firewall for {product-title}] + +* xref:../../installing/install_config/configuring-firewall.adoc#network-flow-matrix_configuring-firewall[{product-title} network flow matrix] + +* xref:../../authentication/managing-security-context-constraints.adoc#managing-pod-security-policies[Managing security context constraints] + +* xref:../../machine_configuration/machine-config-node-disruption.adoc#machine-config-node-disruption_machine-configs-configure[Using node disruption policies to minimize disruption from machine config changes] + +include::modules/telco-core-scalability.adoc[leveloffset=+2] + +[id="telco-core-reference-configuration-crs"] +== Telco core reference configuration CRs + +Use the following custom resources (CRs) to configure and deploy {product-title} clusters with the telco core profile. +Use the CRs to form the common baseline used in all the specific use models unless otherwise indicated. + +include::modules/telco-core-rds-container.adoc[leveloffset=+2] + +include::modules/using-cluster-compare-telco-ref.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources +* xref:../../scalability_and_performance/cluster-compare/understanding-the-cluster-compare-plugin.adoc#understanding-the-cluster-compare-plugin[Understanding the cluster-compare plugin] + +include::modules/telco-core-crs-node-configuration.adoc[leveloffset=+2] + +include::modules/telco-core-crs-resource-tuning.adoc[leveloffset=+2] + +include::modules/telco-core-crs-networking.adoc[leveloffset=+2] + +include::modules/telco-core-crs-scheduling.adoc[leveloffset=+2] + +include::modules/telco-core-crs-storage.adoc[leveloffset=+2] + +include::modules/telco-core-software-stack.adoc[leveloffset=+1] + +:!telco-core: diff --git a/scalability_and_performance/telco_ref_design_specs/core/telco-core-rds-overview.adoc b/scalability_and_performance/telco_ref_design_specs/core/telco-core-rds-overview.adoc deleted file mode 100644 index e72c31dfa2de..000000000000 --- a/scalability_and_performance/telco_ref_design_specs/core/telco-core-rds-overview.adoc +++ /dev/null @@ -1,14 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -:telco-core: -:context: core-ref-design-overview -include::_attributes/common-attributes.adoc[] -[id="telco-core-ref-design-overview"] -= {rds-caps} {product-version} reference design overview - -toc::[] - -The {rds} reference design specification (RDS) configures an {product-title} cluster running on commodity hardware to host {rds} workloads. 
- -include::modules/telco-core-cluster-service-based-architecture-and-networking-topology.adoc[leveloffset=+1] - -:!telco-core: diff --git a/scalability_and_performance/telco_ref_design_specs/core/telco-core-rds-use-cases.adoc b/scalability_and_performance/telco_ref_design_specs/core/telco-core-rds-use-cases.adoc deleted file mode 100644 index 98c7e8d073c4..000000000000 --- a/scalability_and_performance/telco_ref_design_specs/core/telco-core-rds-use-cases.adoc +++ /dev/null @@ -1,33 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -:telco-core: -include::_attributes/common-attributes.adoc[] -[id="telco-ran-rds-overview"] -= {rds-caps} {product-version} use model overview -:context: ran-core-design-overview - -toc::[] - -{rds-caps} clusters are configured as standard three control plane clusters with worker nodes configured with the stock non real-time (RT) kernel. - -To support workloads with varying networking and performance requirements, worker nodes are segmented using `MachineConfigPool` CRs. For example, this is done to separate non-user data plane nodes from high-throughput nodes. To support the required telco operational features, the clusters have a standard set of Operator Lifecycle Manager (OLM) Day 2 Operators installed. - -The networking prerequisites for {rds} functions are diverse and encompass an array of networking attributes and performance benchmarks. -IPv6 is mandatory, with dual-stack configurations being prevalent. Certain functions demand maximum throughput and transaction rates, necessitating user plane networking support such as DPDK. Other functions adhere to conventional cloud-native patterns and can use solutions such as OVN-K, kernel networking, and load balancing. - - - -.Telco core use model architecture -image:473_OpenShift_Telco_Core_Reference_arch_1123.png[Use model architecture] - -include::modules/telco-core-ref-design-baseline-model.adoc[leveloffset=+1] - -include::modules/telco-core-ref-eng-usecase-model.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc#telco-core-cpu-partitioning-performance-tune_core-ref-design-components[CPU partitioning and performance tuning] - -include::modules/telco-core-ref-application-workloads.adoc[leveloffset=+2] - -:!telco-core: diff --git a/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-crs.adoc b/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-crs.adoc deleted file mode 100644 index d0c7b7750145..000000000000 --- a/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-crs.adoc +++ /dev/null @@ -1,48 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -:telco-core: -include::_attributes/common-attributes.adoc[] -[id="telco-core-ref-du-crs"] -= {rds-caps} {product-version} reference configuration CRs -:context: ran-core-ref-design-crs - -toc::[] - -Use the following custom resources (CRs) to configure and deploy {product-title} clusters with the {rds} profile. -Use the CRs to form the common baseline used in all the specific use models unless otherwise indicated. 
- -include::modules/telco-core-rds-container.adoc[leveloffset=+1] - -include::modules/using-cluster-compare-telco-ref.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources -* xref:../../../scalability_and_performance/cluster-compare/understanding-the-cluster-compare-plugin.adoc#understanding-the-cluster-compare-plugin[Understanding the cluster-compare plugin] - -include::modules/telco-core-crs-networking.adoc[leveloffset=+1] - -include::modules/telco-core-crs-node-configuration.adoc[leveloffset=+1] - -include::modules/telco-core-crs-other.adoc[leveloffset=+1] - -include::modules/telco-core-crs-resource-tuning.adoc[leveloffset=+1] - -include::modules/telco-core-crs-scheduling.adoc[leveloffset=+1] - -include::modules/telco-core-crs-storage.adoc[leveloffset=+1] - -[id="telco-reference-core-use-case-yaml_{context}"] -== YAML reference - -include::modules/telco-core-yaml-ref-networking.adoc[leveloffset=+2] - -include::modules/telco-core-yaml-ref-node-configuration.adoc[leveloffset=+2] - -include::modules/telco-core-yaml-ref-other.adoc[leveloffset=+2] - -include::modules/telco-core-yaml-ref-resource-tuning.adoc[leveloffset=+2] - -include::modules/telco-core-yaml-ref-scheduling.adoc[leveloffset=+2] - -include::modules/telco-core-yaml-ref-storage.adoc[leveloffset=+2] - -:!telco-core: diff --git a/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc b/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc deleted file mode 100644 index 00e79e8752a8..000000000000 --- a/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc +++ /dev/null @@ -1,182 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -:telco-core: -include::_attributes/common-attributes.adoc[] -[id="telco-core-ref-components"] -= {rds-caps} reference design components -:context: core-ref-design-components - -toc::[] - -The following sections describe the various {product-title} components and configurations that you use to configure and deploy clusters to run {rds} workloads. 
- -include::modules/telco-core-cpu-partitioning-performance-tune.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile] - -* xref:../../../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-du-configuring-host-firmware-requirements_sno-configure-for-vdu[Configuring host firmware for low latency and high performance] - -* xref:../../../installing/install_config/enabling-cgroup-v1.adoc#nodes-clusters-cgroups-2-install_nodes-cluster-cgroups-1[Enabling Linux cgroup v1 during installation] - -include::modules/telco-core-service-mesh.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../service_mesh/v2x/ossm-about.adoc#ossm-about[About OpenShift Service Mesh] - -include::modules/telco-core-rds-networking.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../networking/understanding-networking.adoc#understanding-networking[Understanding networking] - -include::modules/telco-core-cluster-network-operator.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../networking/networking_operators/cluster-network-operator.adoc#nw-cluster-network-operator_cluster-network-operator[Cluster Network Operator] - -include::modules/telco-core-load-balancer.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../networking/networking_operators/metallb-operator/about-metallb.adoc#nw-metallb-when-metallb_about-metallb-and-metallb-operator[When to use MetalLB] - -include::modules/telco-core-sriov.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../networking/hardware_networks/about-sriov.adoc#about-sriov[About Single Root I/O Virtualization (SR-IOV) hardware networks] - -* xref:../../../networking/hardware_networks/about-sriov.adoc#supported-devices_about-sriov[Supported devices] - -include::modules/telco-nmstate-operator.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../networking/networking_operators/k8s-nmstate-about-the-k8s-nmstate-operator.adoc#k8s-nmstate-about-the-k8s-nmstate-operator[Kubernetes NMState Operator] - -include::modules/telco-core-logging.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -//* xref:../../../observability/logging/logging-6.0/log6x-about.adoc#log6x-about[About logging] -* link:https://docs.openshift.com/container-platform/4.17/observability/logging/logging-6.0/log6x-about.html[About logging] - -include::modules/telco-core-power-management.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../rest_api/node_apis/performanceprofile-performance-openshift-io-v2.adoc#spec-workloadhints[Performance Profile] - -* xref:../../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes] - -* xref:../../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-configuring-power-saving-for-nodes_cnf-low-latency-perf-profile[Configuring power saving for nodes that run colocated high and low priority workloads] - -include::modules/telco-core-storage.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional 
resources - -* xref:../../../storage/persistent_storage/persistent-storage-ocs.adoc#red-hat-openshift-data-foundation[{rh-storage-first}] - -[id="telco-reference-core-deployment-components_{context}"] -== {rds-caps} deployment components - -The following sections describe the various {product-title} components and configurations that you use to configure the hub cluster with {rh-rhacm-first}. - -include::modules/telco-core-red-hat-advanced-cluster-management-rhacm.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../edge_computing/ztp-deploying-far-edge-clusters-at-scale.adoc#about-ztp_ztp-deploying-far-edge-clusters-at-scale[Using {ztp} to provision clusters at the network far edge] - -* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes[Red Hat Advanced Cluster Management for Kubernetes] - -include::modules/telco-ran-topology-aware-lifecycle-manager-talm.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../edge_computing/cnf-talm-for-cluster-upgrades.adoc#cnf-talm-for-cluster-updates[Updating managed clusters with the {cgu-operator-full}] - -include::modules/telco-ran-gitops-operator-and-ztp-plugins.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../edge_computing/ztp-preparing-the-hub-cluster.adoc#ztp-preparing-the-ztp-git-repository-ver-ind_ztp-preparing-the-hub-cluster[Preparing the {ztp} site configuration repository for version independence] - -* xref:../../../edge_computing/policygentemplate_for_ztp/ztp-advanced-policy-config.adoc#ztp-adding-new-content-to-gitops-ztp_ztp-advanced-policy-config[Adding custom content to the {ztp} pipeline] - -include::modules/telco-core-agent-based-installer-abi.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#installing-with-agent-based-installer[Installing an {product-title} cluster with the Agent-based Installer] - -include::modules/telco-core-monitoring.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] - -include::modules/telco-core-scheduling.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../nodes/scheduling/nodes-scheduler-about.adoc#nodes-scheduler-about[Controlling pod placement using the scheduler] - -* xref:../../../scalability_and_performance/cnf-numa-aware-scheduling.adoc#cnf-numa-aware-scheduling[Scheduling NUMA-aware workloads] - -* xref:../../../scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc#telco-core-cpu-partitioning-performance-tune_core-ref-design-components[CPU partitioning and performance tuning] - -include::modules/telco-core-node-configuration.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-sno-du-enabling-kdump_sno-configure-for-vdu[Automatic kernel crash dumps with kdump] - -* xref:../../../scalability_and_performance/optimization/optimizing-cpu-usage.adoc#optimizing-cpu-usage[Optimizing CPU usage with mount namespace encapsulation] - -include::modules/telco-core-host-firmware-bootloader.adoc[leveloffset=+1] - 
-include::modules/telco-core-rds-disconnected.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../disconnected/updating/index.adoc#about-disconnected-updates[About cluster updates in a disconnected environment] - -include::modules/telco-core-security.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../authentication/managing-security-context-constraints.adoc#managing-pod-security-policies[Managing security context constraints] - -* xref:../../../scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc#telco-core-host-firmware-and-bootloader-configuration_core-ref-design-components[Host firmware and boot loader configuration] - -include::modules/telco-core-scalability.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../scalability_and_performance/telco_ref_design_specs/core/telco-core-rds-use-cases.adoc#telco-core-ref-eng-usecase-model_ran-core-design-overview[{rds-caps} RDS engineering considerations] - -:!telco-core: diff --git a/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-software-artifacts.adoc b/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-software-artifacts.adoc deleted file mode 100644 index 071d6d453fe7..000000000000 --- a/scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-software-artifacts.adoc +++ /dev/null @@ -1,11 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="telco-core-ref-software-artifacts"] -= Telco core reference configuration software specifications -:context: core-ref-design-validation -include::_attributes/common-attributes.adoc[] - -toc::[] - -The following information describes the telco core reference design specification (RDS) validated software versions. 
- -include::modules/telco-core-software-stack.adoc[leveloffset=+1] From f4efe5205681c957de3b8ee11f51b6f8c4a85ca9 Mon Sep 17 00:00:00 2001 From: Kathryn Alexander Date: Mon, 24 Feb 2025 10:13:14 -0500 Subject: [PATCH 334/669] 4.18 GA --- .github/CODEOWNERS | 2 +- .s2i/httpd-cfg/01-commercial.conf | 4 ++-- _distro_map.yml | 11 +++++++---- _templates/_page_openshift.html.erb | 10 ++-------- contributing_to_docs/doc_guidelines.adoc | 8 ++++---- index-commercial.html | 6 +++++- index-community.html | 1 + 7 files changed, 22 insertions(+), 20 deletions(-) diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 0f9fdf67996c..fb001400bd2a 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -1 +1 @@ - @abhatt-rh @abrennan89 @adellape @aireilly @bergerhoffer @bscott-rh @gabriel-rh @jab-rh @jeana-redhat @JoeAldinger @kalexand-rh @kcarmichael08 @michaelryanpeter @opayne1 @ousleyp @sjhala-ccs @snarayan-redhat @Srivaralakshmi + @abhatt-rh @abrennan89 @adellape @aireilly @bergerhoffer @bscott-rh @gabriel-rh @jab-rh @jeana-redhat @JoeAldinger @kalexand-rh @kcarmichael08 @michaelryanpeter @opayne1 @ousleyp @sjhala-ccs @snarayan-redhat diff --git a/.s2i/httpd-cfg/01-commercial.conf b/.s2i/httpd-cfg/01-commercial.conf index 520b4d3505fd..7d33d725fbd1 100644 --- a/.s2i/httpd-cfg/01-commercial.conf +++ b/.s2i/httpd-cfg/01-commercial.conf @@ -164,7 +164,7 @@ AddType text/vtt vtt # Redirects for "latest" version RewriteRule ^(container-platform|enterprise)/?$ /container-platform/latest [R=301] - RewriteRule ^(container-platform|enterprise)/latest/?(.*)$ /container-platform/4\.17/$2 [NE,R=301] + RewriteRule ^(container-platform|enterprise)/latest/?(.*)$ /container-platform/4\.18/$2 [NE,R=301] RewriteRule ^(online)/(3\.0|3\.1|3\.2|3\.3|3\.4|3\.5|3\.6|3\.7|3\.9|3\.10|3\.11|latest)/?(.*)$ /$1/pro/$3 [NE,R=301] # Release notes redirects @@ -766,7 +766,7 @@ AddType text/vtt vtt RewriteRule ^rosa/?$ /rosa/welcome/index.html [L,R=301] RewriteRule ^enterprise/(3\.0|3\.1|3\.2)/?$ /enterprise/$1/welcome/index.html [L,R=301] RewriteRule ^enterprise/3\.3/?$ /container-platform/3.3/welcome/index.html [L,R=301] - RewriteRule ^container-platform/(3\.3|3\.4|3\.5|3\.6|3\.7|3\.9|3\.10|3\.11|4\.1|4\.2|4\.3|4\.4|4\.5|4\.6|4\.7|4\.8|4\.9|4\.10|4\.11|4\.12|4\.13|4\.14|4\.15|4\.16|4\.17)/?$ /container-platform/$1/welcome/index.html [L,R=301] + RewriteRule ^container-platform/(3\.3|3\.4|3\.5|3\.6|3\.7|3\.9|3\.10|3\.11|4\.1|4\.2|4\.3|4\.4|4\.5|4\.6|4\.7|4\.8|4\.9|4\.10|4\.11|4\.12|4\.13|4\.14|4\.15|4\.16|4\.17|4\.18)/?$ /container-platform/$1/welcome/index.html [L,R=301] RewriteRule ^container-platform-ocp/(4\.3|4\.4|4\.8)/?$ /container-platform-ocp/$1/welcome/index.html [L,R=301] diff --git a/_distro_map.yml b/_distro_map.yml index bf102a1989eb..0db1e366cea3 100644 --- a/_distro_map.yml +++ b/_distro_map.yml @@ -45,6 +45,9 @@ openshift-origin: enterprise-4.17: name: '4.17' dir: '4.17' + enterprise-4.18: + name: '4.18' + dir: '4.18' enterprise-3.6: name: '3.6' dir: '3.6' @@ -170,7 +173,7 @@ openshift-dedicated: enterprise-3.11: name: '3' dir: dedicated/3 - enterprise-4.17: + enterprise-4.18: name: '' dir: dedicated/ openshift-aro: @@ -193,7 +196,7 @@ openshift-rosa: site_name: Documentation site_url: https://docs.openshift.com/ branches: - enterprise-4.17: + enterprise-4.18: name: '' dir: rosa/ rosa-preview: @@ -206,7 +209,7 @@ openshift-rosa-hcp: site_name: Documentation site_url: https://docs.openshift.com/ branches: - enterprise-4.17: + enterprise-4.18: name: '' dir: rosa-hcp/ rosa-preview: @@ -219,7 +222,7 @@ 
openshift-rosa-portal: site_name: Documentation site_url: https://docs.openshift.com/ branches: - enterprise-4.17: + enterprise-4.18: name: '' dir: rosa-portal/ openshift-webscale: diff --git a/_templates/_page_openshift.html.erb b/_templates/_page_openshift.html.erb index 8d93f202df3b..9848a5be0bc4 100644 --- a/_templates/_page_openshift.html.erb +++ b/_templates/_page_openshift.html.erb @@ -131,14 +131,6 @@ <% end %> - <% if (version == "4.18") && (distro_key != "openshift-webscale" && distro_key != "openshift-dpu" && distro_key != "rosa-hcp") %> - -

    - - <% end %> - <% if ((unsupported_versions.include? version) && (distro_key == "openshift-enterprise")) %> @@ -227,6 +219,7 @@ <% end %> + diff --git a/contributing_to_docs/doc_guidelines.adoc b/contributing_to_docs/doc_guidelines.adoc index 82b865566b5d..003b96c8a57c 100644 --- a/contributing_to_docs/doc_guidelines.adoc +++ b/contributing_to_docs/doc_guidelines.adoc @@ -603,22 +603,22 @@ possible values for `{product-title}` and `{product-version}`, depending on the |`openshift-origin` |OKD a|* 3.6, 3.7, 3.9, 3.10, 3.11 -* 4.8, 4.9, 4.10, 4.11, 4.12, 4.13, 4.14, 4.15, 4.16, 4.17 +* 4.8, 4.9, 4.10, 4.11, 4.12, 4.13, 4.14, 4.15, 4.16, 4.17, 4.18 * 4 for the `latest/` build from the `main` branch |`openshift-enterprise` |OpenShift Container Platform a|* 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.9, 3.10, 3.11 -* 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 4.10, 4.11, 4.12, 4.13, 4.14, 4.15, 4.16, 4.17, 4.18 +* 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 4.10, 4.11, 4.12, 4.13, 4.14, 4.15, 4.16, 4.17, 4.18, 4.19 |`openshift-dedicated` |OpenShift Dedicated -a|* No value set for the latest `dedicated/` build from the `enterprise-4.17` branch +a|* No value set for the latest `dedicated/` build from the `enterprise-4.18` branch * 3 for the `dedicated/3` build from the `enterprise-3.11` branch |`openshift-rosa` |Red Hat OpenShift Service on AWS -|No value set for the `rosa/` build from the `enterprise-4.17` branch +|No value set for the `rosa/` build from the `enterprise-4.18` branch |`openshift-online` |OpenShift Online diff --git a/index-commercial.html b/index-commercial.html index acc1b04b460b..6667728a5adf 100644 --- a/index-commercial.html +++ b/index-commercial.html @@ -179,6 +179,7 @@

    OpenShift Container Platform

    @@ -288,6 +289,7 @@

    OpenShift Container Platform (日本語翻訳)

z@Wqs3IZvkk3H1Jhr7nfGP1;esNgv>�ybVvnga!ZgjM0MxLIu1kd-g)X! z*|(pdtnD`YgM-oc+>Ow#o|2_+=OaEJ!~y{qSgrrqL0LqKNiRKPhwNJD3<5@8{5(n( zg4O{NGTnQ40j%{DY^4gIC|jBNnWSTN`5^KvMz$U#Gm=~vfYE8MU7IR!*=LFoaFd3> zp{Zq9)MX7=t0{Cktf&V4yM{aW&Qc1AR$p#GH}9zOyY0L|awIJ+x-NUG>6(N{vY|AN zdld)WZr;8h6+I6RJry1z_m#j+OQFh!()w>kb~(<3i%KkwBaV{5wLwybCuNq)g zwiP0++9Spq`NZxwh?3yn8Su#kGF9z|3G}pR$?bY|WFn3p4Jt>uFl`&^LOWsT*%*;m5X$$%c+98qVbKJcJcV+Sz!njlp~TIzNil85k#W!f5VL^hE$ zTYvs_f#LAk3W6sY;lKhiqrXmPt8*$?>bfXG{^77)5T{wxGgVV(dbqei)s~qFj+2w+ zCVVtcpQ$SYa)0MP`;xcJ&ORbQH)R09SkJ~N-ZqFfMwxUD!;H{5_ zzy)}7p@W@0aUWHFeZpUV|F|&_y(ECJkfbDo)_`n@BxHE|^E|ORZ?yx#3Ni^Q%e8i= zu~J7B9O}111U1!tj^AcIAR#dk{0+1&A9#WQg{K5gp|!y?OuBiprHfFBng$Tl9B!DT ziiTlC&=WE5t6LLApH9O>B%d*E?CTSk(<)NPW!eL_^b#(D0kSUKmV|foW}s5yr+qGw zJ>rf6V1k7&CXNwc2EI%+Z8eKYcx_(H4sfDrXCwu`!)O{PTN7bFq{-JXtghgKpUwNC z=`<@W;+xc*C7_KBl66Ng_yoJUI;M&cJ1HNP1wk8xH0nVGAg6P2b&&}|>?}kmUw=Bx zZYd(j0H}=y5bWJ*4ne^-9-kkLG;cwOO90)Gd`7Q?srmVJFvzC87MB{_-hZC+?2$S` z9nQKlZ!D)ovdD-t{JmEHV-Wx0r3cn^=3!rNZ^OO!Mc-?_bJ@-QLBXye0dn&o>uXiy zJqW^X6<*BBPxX@t($*d#r{vIp2n^M!W^LM}FfwBxS#M8~Q1j4wW=&Ve{`m+S(2~ud zScDS_;qQqA`66A{x&3R9@~{PVulCCWR;~M5laKNvi)I2WVmb7@(kWyn0Tacs)^}qx zA+>XfF;&6JgLmFeTwJ1|x?Ktwh@d1S5rqgEW+!8BAi>{y?P2MRYlAA^eg4Mwc16%@ znU9j)T$k_hXDwMQbDapP^V-BB`n^l7F)0v*Hvm(zx)oE3`VcFu0l}Yli5n`Yz`oi- zCVm8t_&`#=(EZ-?3ddtG#(|)0wB4RmgcuYT$EWHi-{P7&R}XZevUe8Prc75HKPUww zAi-yp(TZ$P!3LG0q^PECzP~j&c`u+;V$OD%80%G}m9qZ!TJO{em^n>YQ$4#v!pPS{ z_6}=b&EiYL_Xwhy4gf>9m@VGD9abtLWhI9|4<i190 zK#WZ#VEX}Nw<)j}A4?HWO@N+Pb=QgTU521MXfpcw_fGN1b$rIiV@Jo0QW!(enB|Hv z*68eEo#C#nk;_e{J7~Gbkn!G`t|+v%vFn6bVQxFp0q99`kY!lECq=^f5PFB{by07( zh2C=ASlwaG0xN$cfC54?hzp2usSdDJ2{KH`S%{b|z5VN3$2v5+{&I z8C);2j?Gdn3M{lzPpsK>%9L0L!5}_%`Fa#q$Y-`0Wv6wdtjT*J;(s-SJ~UJ1R?XCK zp$)F$hJ7aQdR6QD#+pu-Z132t}N% z^9Ck_-*NjfEn%N&i1HFHHU+@}iIsZkE>Wa;SZpx8%hYQ7FU2v{3=3kpD{H=><&Z)W zvs_yPafVzG>l25r(~Cm_$WJ7m+*F3(9-ob>QdzpUSI?MnIE#kUj>GAs@AGRYnduWdPsiyZ`wMXt100`x!w-P3WerB}iP=l5ax)j<&B$ zw;KU=(KnjP9>!T;gS0|WSE0d_;oB8Z*v{t1Co1cl^U&LFQeDE?JO`p3tnZ6dRKMI7 zGHP9jh$bkcO8@uSIJdrp63$3j*CMwimDV0~m&$>-HyYnT?CtLp9JF)STD?Mcf>~7+ zxG$xOMbBeALTaor?10bA2NNv4?I5#OfuA{13$~+IiKv~2{+q7uH^6?f>T%WE&O^|}59Fw>)u15W_Yin>xD0l{QD{QP zTgH|i%U*iiRsYeLwPAFT!P-uJW%q3-T}0Gv`@HSq5H;M2e^_GA(+a*@ZT%dae}H0) zh)DnZ`SWa2U|brOOIl<2(vKgAgFr+8$I&XNCAOBL&`m(2LItdab5!pwO3P2e%T~xZc0}zbF(R>l3gEc(P?jcZBY6iBJs%z<{9OYq*bZBN8 z5(sG5GBIhS?1`aJBTI=ip1T_tM`+=m$SzaSVA9m{u?B{J9{IhmP|*}fA0bP5h<*k% z76hfj_Mh^7eCQdOI7f#?5*PP78yhERM?@ea32djgzkkofuQO(TnLvZabd=^GGefFMt}WX*KBh-~D; z(h?Hkq+hH{Q^L<8D%Z;CAvuxO9Whcr|EihIWFJwmg4puP8JD^_Iur$HJhX)=tBQO; z#ukaS!axU7nINn8Z_TH|&QWbH{N=G>W5TMQuk0tEez`F*@do6j9=|gI;P+s|K-u~y zma7 zO)thwpKW-e>~&3exs9UT>oQ=Q6T9x$t(UZ^I9*@4dhn?t_Y=a{6tF<{~{m{xGT*{ z6b$dTny(sk@P8H%D%_Ofw_9yJ8@CI>>7dt%!4l_YF+j`($H;u}?*WiVPI*2D6#80z z)Mrm(7kZ_5F?yZ6dIV7Hx_i&@Wa)-2o(1%xz)FR%KW3yEfICcFkE=So1A{9I8jx z8Uw_iSzSm7GqyEaFh}q9!51VFD0w!rcQSmAO%U0EC3|a?F2q#@FNSYFgqA78Ap%&ZtK}^u`!xu z&psj^1~w>jxWY9gGz<0mset{m_jPKnefQq}-Dd&AWcEXXS#ZTPH1=sok}h@v^<%_E z#r=HzKJb32yBbT(>cwgFW*z=F2|HR#jO`X-@576lLNG~jmDDMCnadfFU^VaFfqsNf z9-seRgoSi9hyluGxy_k?5mt_I+T4%}N`i_eVMhiS<0OTNJ$a_+wH2>ZKfRBn&fmTR z)MBq(&qF*eM4x1qW%@i($~x7%PDaeaiLc%+Qi#n}b;yDgj@#zvbGa+`2081%ID4#a z-<63QJq7j;kuxB5D)S=@GmsFMl8QQj_*M~ik?LXWQfe_9Yj%HCCRir3(aoFd`@APu z>n$x8RCXs&T{ATbx1>`2Xz>b(RchPPs`&WM_l=X!Z5`Xcoi{)PrkgwBt-4I7-dAmC zZ(DW9B1Toh9WM!{I4H@T_>5UwTHJa^GbW|M5EMR3UJ=RZWA&!((PsUnPkredAz^|? 
zcdPmj$2DSJc$LPjh&oX7v_oYJ%#hlVq_B%H=Eej zq&5O0iBB|OMmFe>8yUDC6~i|_jqOS&?d$*|S9^h2cHUzJBw86zuG7b=eURwC<$Si3sU(9P4=gE~;79Bc1<8Vu_0n>K3p?a+2mzUZ22p{av*g;xGTZ{F zce&*hB1Fn_dJenjR&d27uOf(NJoal3*KH0(&D#sdZZ4f8Qn(oGEBco6Ibjr+#^IV$WhlfhvWL*U7)W2}#)Q5z|&Y z{$a|QtdS%y2dUo9=pIw9rmE+;fxUV} z8@*sjG8uV^$P!vz9Xmj^l}#i0yQqfzyr)3z$}Z~-FmK@YO+zgdfn>%V;u!H~eGN`rh;lXax?I*@>IULC4X)zT5reo-~@y9l_v6alB3 z5-pBAt2Q!yvx`-KOa32+di{PDdEX=VQ}rj@;kEcBHM1N4f4G z49SZlqx4G=Kz0^Q^>_Wd4WR3Sl0FTACgqPe5bb?z2#z-sW8PkU_uoo>9uE*gu0y@W z5pe=F{!i`410JIctqxp2<l3dPT-sXNIByWHBE|4>Q~E_E46 z1Ta{XSb^66)xRnT{#y+|WXKfdYS3c(E06i#)dUFMYR0{M+fH<_8$EBKLLdGbYw-84 z{j!$rhRL98tkYjW3Toi7C2#~Hk*5K~IG3t`Ro7n4|5Ndj*#yNvaF3+_hoM3~L=@2! zaz8ikXlWk(JC8%{pK2K77V3z7N8E=H{Qa-83kJ)I9PI7y|F_QNzgO}?)@Y2dLRY~} zDE~Lb7S%TWhj=ahzsn>zHDJ07L;kQ!NO*?p_Cun+UyQ~E0owDv$iFm5;aB@4WZe^T zhW+yYDV+gB!p6t==tRR6UP)9ycDw%n8ev1xSoyxG*R0e>oYc`h8Cvs%p1Gg?MBeS+ z%O}NSUqjn@SMt1InI81L^7hdWr0nN^DvBCZcy?bWP&^l^`&WHS z2<~2u{B-9(tTS>I4JrS>i|qOtrg2Ml+P^8zzj?HyYfiFvd`yWjms$U+BtoX<1q&-g z0wADQ{{8G$rw0z0S9?fbBNbf`Jf-j(L!Cf>nO!DA>-+lpYGS=mKp(Y!8$T36)S=K# z!U2Q?paxYfda7KptmJ{NKi+}ZaRD9-N}g5^P)*C~PoNDOfp%m6<3u zv)vRd2`N7Yn$@~EyQZsu*GRj}s*|2fy_LlG_LJik zJ+pmOSSZc4F15|kD10m?vXl*q*JcjPRlaPs55u5d&CxFXJX+jn81x=RO2MY4R5{~b zm}N2dj=R)h%{5~ROggxm%*P))`bIVG4QwsFGlRTgv;JMvpRM;)OZwNk{nno#u5Uc| zCKC3W`9TH%$??F!Dk)H05Ms*1j9L`TP{Ecqsb<|sZ2!Eti&Wf*PPIq}4GIaWX6h!s z_Zz8ni9uvTHiL#}W(CYuCbfi5z@UJ3pStH^fyY`YS1UL*Cio&It;#&vLPj^e#R04w zTaK0cS1|cJPpO8Z^`H{*>pM`i6FR4#_7@wMfC*NPF3QhG`8d*a<*rF0A(eF9r##^P zJPvUvAVR}tT|j$^x9oj{=-+`A4I6__?x=?kzZ(oOZ+&-xh(Gk8by!DI9E&}6_2AsC zldR8V?s&)a#;ErWxi0*ubcFM_^ovj{`DGWyMae}UoS&i6uN1lB_vv0pg7Z>;t7Z;& zY<{co?vGEKl~Y1XCkKbL@!h-b&aF8(^aB^a%|Sts_5<-vsDWg+o}EJF)1ct;n>aC#bRN% zPab&9&syULmb=~b(5-`)U2l1Dff}?CS%t%61ZE;~65?wZz5Vu=)6Hxq5%>B&Lh<9z zpLS+f=deCg8bub@be9CXx-GSeqTPu}IG&IHG0S5fHhVBSHWtBcLw`^yOk|2i+f5H< zlVfIH>S#GXpOVO%GvcHK43yaVMX?u7fXT(Z-?Y}?vU`>>c8`Z6dFij)nhw)!Li%n? 
zGcAUxFX4<+j%VNP;k7VxkyqFMe6_5%&V7Sa$C8pSZ*P~G<#5;k!C!xf{&ny+R=8X; zj#RG1c`=0iO*bB21MTH3W8O4raB#HG4^#%egk!f-&tBW3juCMP=?j)Pf8pvIS%#Xr zZa*pqYDJ(%KFP@*gR?J3*K<$U+ht)mrh~45{3;pdP`$60(|lAGOR&mmO|o0$GP`6K zW@)(d9j}t9s+T;pEKEb$kx8S~!+Q8Tw_$=~&u;WDLBjP+4rxN(IOu6 zsk^J&%du?%`b7PXGq?^y*=K(1T6VJU2U#fgSk9-;BF6hd7fJ*fYIb6H9RJ)eD=?gR zNZ7r7??-*T-B=UxNe8r?{)@@^8NF(kDdW;2^Ka#h27hj4#ZvB;`Ce;_5f$qH>VmuA zJGV7ATur_-Lef91@-NSezis*zGo>vd%79HMVH2T4lsp>u7 z*|Vy{S6ATj7!yzBKC3?KOLi}~6vHAGa)*u0?_NFGL6^-Pa*|jPmk*ua7Fi+*MNI?b zGZgFG0k`sR=bPbgm}!Ku?2s1}NU-@oZJd{i7A#*%tz-TZX6#VSSl)jo7k4lJeIVT< zUsF+-t}UE!xm)wr(~8sQP9OVX*hjL*ZGYB?Sxr<+D+-#~=IGfzt??#yU*GuNIpmxB z+BNTKD@UGDYhYiELO3izHTDSKhwfEkvvoeI6w%&u zAnXy*KJ5SDC4`)T|upesswu z<6^Vtddn<@i2C{Y3zqt$3S-;?CAVpo*9gBqudzitS;64{{DCQQ}Y;fOGpEs9- zLpBz5|X~!J!=4s@L}-9!V4&Ju!whAc3#id?SmDafPo{(zJCl2obk@gb7gv z`fCTuFVcg!o@F{|GDcqkC z`?LB}$mjj&<{M-NsVyk2SY1s`P4)4{`emEWq<&mH9o=A=%R*{hI^?K@4?n&fobR*K z*EfChGX8uWLkj$wH`D^)cJ=uKAe7x};p$e7|CR8ACW}!40RZyik z-QF%sG{;K_fX<#hr}5P_Q}eGKXbGjH_SG+k%`V*9aQZW&(qB~9onqagVcpBO=)(BS z%v0Ys2+YsjwzbEEt4@0UuA6wnk2`nus@Y(|xjXkVXot!|Q*2C7#s<_^sGdR#N)U5W z5;kSkEFHooIq!SS6HO`5$n|XB#hJ-Z>F&=AW{TYLierxzV9oC7X>E!&qHYMr5ZTz+ zuzVJdRWA|L*+uN%AHQ4U^H(&9)c6Rdrr~=@d`A;t_-VHYvo_!EL6l`-5pmxxP{n)9 z?VUv3Po2s@HrCk-G&HrjC(35Jo$eY>mJk&?%~81(=A6*)?PZ*vow4b1;l=301&Zs@ zmIWD^4wQtGr*;159OHh!VKIhZyA6#@p9S7#hdE7&f1TL>LovK1ZgQ(y=nIS4v9ETW z=DI>&-m(md_QYQs{7S$@tNJkj*CGDIa?gB`!-v^l9tI^X}QDExMx-IXV0FHgj{ zf)&@jlrE=j!IF52K=kF|#H{@i?;kG$y&R4i@jOkKSWA)Cn1nWkiC-;kCS~OtnATl4 zFTKF%9Z#}6#?Yb}ox)!sOgp@`yeLOR#e)`ykjV2Qp!FD|ADzV9^;`RwM2de};qw}WZ> z@{ALo=UzaL<8w6)>Z%sfN+#-^PoA^!+7FUrzWjT$Jqk{#lRB15YlKF`St=WDk_KeVOC@#LA zbFf6)a#f%~(wg0#uzp9W;Qm7CMz^ybJ;`a((ed9NgQc#4FIo>ew$Zb_u|Y=6M1O7F zdJ%E)-HGLHq4<;bkDmlIIZ*Oi3cIXb9^Xq(R4Vzn(s?=U4{OD+CL2Dx=rbk?zYg0^ zS<=vCof+szur%@Ij$|*3p6Gq~rcqu}KKi9!A*ldQ&gjJ2?FK{6CY7(LT2*vrv5!~# z5_boyJZN5ciq!`Iq{8wD@Mee2j&4$NAq|u12lSFNxVO>0SGw6^d>+Q@50p;))GN`* z9um1tDwB+kEU=I%obOBKUrqJnS01j^jI6^*HRe%V;k7 CMNg@vBr^4v%D*J+|d3 zm{8#i0fgP@?7Wt5Z6Df$P3{>#Q4ky&Dt12L@{Ppq{N^}=?7jdO`;+sY3Q@LoH~wNp zJYiL}QMfh6*J+9O(JlRykK(cJ_wN^u^}ZYe%%ELnuN(Q5@mGEodDndxXOVicMaA@B zH)M}24OGLX=Q^oDx-e9iA)c=?>*`vu_G>sB9W6ZibC4n`NtTyr#DBQsz)Gj$M+)8t zioXK7-CWBjy_+d{76cfHUyx{C4(nbTD2#m-)~%ARE6Lgrb7{BG+Hj$RkE!nqpqgTj z{R_;zOW`{zcgQ>L2cP_XG5prV&V25AiSF1W`5n{bZZ=S#8)PfsSJIy9^M&UspJHbb-O)eRhTs8Q@S%*wj+xu*BGJ`R`LB5=c8 z^hQy5E3&mJPWpWT7|3)?j=i<0qr`1vOE_496k7gLg={KIq^V#TjG6}4wyJ5s1FdZx z(123y^tVa8PhaKJb>VKu$Wi}l^SD*IePSRXRcrD zJgx8D`5;!XJd_xI8b zR=B7pSCmCDPs!6f{gw97eun$O!e>CG_2ca)@aDR1<(`e;WzD)XP8g)Lh#w4Xvaj?8 zvH5)Z^VEccoOWz6OJE#^ELdVMIgEP`h7KSF-2(fJb)H_(NclhyTfWhPQmChpkgna# zo*B=gkZ9%cw_ScKZjzy*3(+MkrhPd+{jMc~@8-g_A83%3ifvY=Z}g?uDL7i{%9%d? 
z(w7Yr1}?++J?1#yr*iekcx_VN2)Xv4+x~( zbtMETFwgX7j}wuKvoqZi%2Y~ZbG+4zAQ(UXCGwYlqq*=4+zG^p58`#**m}8MV(ZIq z)}JEn%YRil?CNumoyR1PZabJR@mN?aq-kKz_iOeGmb(Q@Zd${UJ>TE_-tV>9ik99O zG78UUiQ_#-l=)qJrY)O{re-$rA1weJ^rBrh{yoCrl=I#%Qac8|Tz!I?uR(vYEgkEC zQ*EWD94wK1a60b7<+rEN*U@1=pKf%#*-0r`8&7myI5}8s`x5g~@AKU(c9=oM;xwA> zw?5B|l;3btWIRbyUm=%y_()vhtJ47gNddX z$;;i@wOQ)so_YCT(^Cw=KSV}My#^NM4A_ARBUZ5Mceyx?zkOIy5=k*`H2Hkn+s8MQ z^PW%W9ads{YF;$%-Nf2QAD=M@F+{F@*w~$B9lFRF@u|azIi=K>!Q-Sa{rzd?rQuS$ z=S~lCGYqT)O45CCzIpF9dcLuLq8xk;^8lvW^jBMu(gf25@+%FEH+Q$@m+A{0nmM>Q zQ#^WvsW!2WGlC?Sv5B6+Vc^7rs$NWx`Dz455~yU3^_uuE%DkK|^^MGhp@IfLk7_fx z8>6bOi@wym?_wwtyJ%IuX>ZawTl*;{=&u*D+^P@OsQ~?%i6Un93uRxdi<`lCU z44s+sn=jwkyR4>PBs-ihaIlBa?j&2PKOID^&+D9_Is2M2w(KH}!|=zd5lNzX>9F=K zqPd`E%HzV@?a4hvs}u$Ff-K}Ao1=!Q$6$oXUgS|QDVK>}d}c5PCpiDPXq5w-?zL_h zi)>8M@fW%$B>uWe-cX^10{DKZayO?pRQ69!CUO$vLm}@nUd#;`6hK$lIf1_}T=-Ev zq-En|XS5K+H{PuGq5oXuV*`TcQe!V&#h+wn#2JDWOI&HB!?GVdQojR~Q{7S`uRrY| z9oPLUVmA$DN+DJ}W{q(I5N+6&_EE`e@{)^^pce#@NbggUoBE_z5N6X z*Az_Vv*t%_UWNr}zG!>h=GbY(qW^~YCqr+InB@cFCjq1-aY_nfv<$m_p6xl`=1 zaDfOMMYr?0kv@}c`IMUIZ3Kfvrc6#E^m_`HIH#+tH@H}9oJ9y~Yg z^Si7-d!B~oZF7Wyh!BSvuGH6Mjp_HEu$#Sz&dX_NbM44HvUu!{a%EnfBTm!d7r045 zue}!mzpjU#O8y&?3w(I+Hzb$PJ8|NH#;N{>UV6C!dXEahWtkUsJeK?f!|@-K9dm6J zr+f`ZHhi-*IHMXqJm<+$|Mp4zH0D`-ZC&28>Z0rGDHZj}vJDVBKQ+^N1Gk5SsMiKd z;yI#Rx3(y9 zG%&fE-{^CPW)fSrcHFN+h{aSNd8sHzCxmnFLVo-)peYp7%~`GVs(#dz@orBqB(D(m z9zme3+;~;(4!Qfr<^%+ba3<*Swr}ybh*`2bxho$zQ1KA|TJ&XBx?XDWwu#HSQ!05V z3@ej77I7cVAxASHGuLmw*!bm|<<{~d>s&Z4)n<){>ck^)?|u%m7S8lbt`gEaG;r#w z*1K%qY?^wu;rcF?RO#K6?EcPVaT-_=&m}3Zc6GN0=B&j~hr6sbQAC7=)uGvg8`bkX zF9R@Vy7gxycJ@KHMIRr8VszD_<<6^5{y(>5Q;LJ&lhZUm&2ZdAIJ1__Za z0qI65krL?=kdp2$1*DM%X^`&jy3h8U^S$r=?%hBBVaHl)))QllXU@4JI^OsZrl%51 z<$<7F=BY!hta#70CeAl!3njvcll(j>86y{#k3S9C77NhtHo!NB|w z9Q(6)8Z;DW{36Fg-?V`d|5wEI1<1CP+OamdEij2ynUcZY^b_9>7@2`c&dIfoIu+rLZ%uByKdV(8H6dkq1X_%zhlCDPIO3vor}jCf1aZky-em=I$Sr ztNsWXoZWy#f+yddL^{uY&rBh6i@iVX$ivJ^iKrA0w=uZ$wJ5+9JnY<1&PlJ5RT5ip zR(g0(z;o9NHf+5bjQQ)aroVd1*>=odoy_|Qzl+Izj>x4vs09u2v|!J>XED+5Y)>%@ot<6lHUDl+kCylieD1s=%G1lE9;x87 zk}}4uT}Ru%iGBoemX<5z1*cRza#$4zx!5AZHDW?c_stX4@-&wD%|E~OqbHSGr zaZfUWCb0gVkGyi7JTkpWqSg#`NrFJaBf=EOI3bHv8Rq3FM>c%4u>WG|-dUCF9s{Io z;g{=^haP2JQv`D=z&(Ghe}gAG-$V9S-kGJliBbWE@W@r;chSjs2AZ%;VEjx0#;<63LhmQ&nNxh1n1>n9(ro?-xI^SXZV$L|x|HC-Md2KajmexdawL(Hi_=Df2ZqW>{-AqHW(HzPhkIBT|!okH2-jc9_ zRNQ%Mg(Yvg-s7fgMV{I?ix#2eH1EPII6hc={_`)=$)b`DB=rlhe9yBcd|g87`pC z<9dF_9fhBUIJrBKj$E8JgQ2t7s9PMO$8ST)=&%*P!eb0}>Kkr{(Zyw(`!RayCBVByZQ;=joU6&{iFZUV`&{Wb9>l;< z@2u~*KNETX7?6viKk4mX=Ejln+C0RuevM8tM`3hR*yPh)hZ+VexvZ7-HaDp2 zbnnNXj=21*hVAdgHfsU!O{xx#Pv^0E53qWM5$_PK`u)G~%e2Uie`DZm^>mb;B!L?< zjly&*kzMA8QZ(Jt@;&C4NE#*Xoem*El5HR4py0awT>SL< zSGHJ8fc#*N6tO5q*oe3wZ_oS+=D7LwO4&Gs?9+8iDZ221`S1o-M=PBaGr8`9ua0Z{ z>ggm;j6TS+@#+t66))kgtgfcy<;iJjB}ku5Qv!aw*AZkHFofAUZ$l|~T&ok<*W_#1bpnD4!5vlcq z0CS*?79#@)^3Un|V_*f$-i>daoSbf-n>IGkgV(#qqN%0n_@`HBWe;!#z$0kJLm|Y0 z^M_ab$q3p3QQu0!?Vk?!Kn>3xsr;#Mvqoz73FFGkJO6|AQ zOjuv9_&}QdBj6@58Kn;G&And#l#xk-n%{8Cqs+k2(4~mt3BQ!E$J%F@{l8ge!i7hf#ZT1>*SZFV6`}t!&QGU*6u*D;Yp%P~ZevSg zd36;5G0sFIP0`t6Q8hMgUo9hhIdAIPUcrf^@7PQjqgze7&Li6m5S^45uZoLL+zA*% zT7YhNiN~nG>jm_8hb`UXl2&T9ZkM8`7p!VUw^YoAhbk{79Kw3nA)S1IJpoGj(^sS@%>^3*^*(> zrzd7tzn=>3$gmM=D=Lsr>7O3!^-rH+IXu(V-Ft8!XLi=C6J~`}>{qc3>DNYIFp$p7 z%F-y>E)(NDOf>=lfABkkvtD#IwhvvREovazN$cS%cWAH>5T+7*w*wfc%t*;QZ)(KA z9UCwIrUyH5?Rzlem5d|v>@>aqyAOpDAyr zeVTi8Y-YNov?%g=WM(K?uamHI-Jax*BmLv;IMo1bmoGxq{^OlXW^0hLFb-v0sbKql zmX}{!N zD0bbdPgkT>azimkj4HWeZA<-g`cElaFASDKR%!pM!yW` z=;P&p(l|B57vEnCN-F44AYWS?lG#2u`MyVoy)u=3t#e6k?j+p5-+Ty`2_j+dT$I1L 
zU_KJEQGL1Z!Tvtw#j_dMYOaK%PgBzRQ%n0j+(d!%4V7a^t!c;Qfo&3(!)3$PUwC`T zYr}&q@nxs_fHX*!mYRvf=fy?M?s>ZiI6HXu*N{vqW(D4g%+XgXmi61Zf|yy0&ofSD zNx^n=9;ZD2rFiJ@ZH+2x6aeM{nS|36Wn*t0_H#)gd%{SmuJlZAQEx@sdvi+F9>P~9 zYIlX7M`;8ENjlBw=dPdq3`b=+g2>@ls%r)3VXVfO!zPU!Cf}$jMYQ&>yNuDefYk4J zyQ;JCQ(DIW>k%B6HSEmA#ug={TVwYH#_sjP+xOmgSNHfzE$9rt_at$Jl+a;GJsn6c zc$cdylyrYDGiR`47?|W%U3B{ak@wyN!xhw!lu8@)R>Y!5;{1TMi+V)t~nvq_00@iS7hOnkvvpUisxny*W9s+8MfAf>Bwts$ZJou$ zR0P-yrS<6_MWo=ikql{H0~zGm7j|9?*~>u5)gxG~0hz7lC=TYnA^G{zMrN1qqh#Y7 zcB9ulPijru^VuJ980^niaafyV71u(fd`oj|=HQoiB!-dof#=f0=WVxyb;-X7RmQ*G zE>;idR}^1q#<@r{Z*XYUla4MB-78NVt#x?}?g>&??GDv%ut(WF_;0U$OCN>oq^A=| zD9;|v#-N=hG*}PzNa!8-L?r|kL0jUAT zVz#NKB8YHM$#Mdci4>6|4~B(jfGW+NrtbkS?YD16ryVUw$D-KAaDwRwR4)O$Ar^+Q zECqOJeUt-H3AO0T-|F;xW51gtcx6U1c{HYa;P5bNNSYB$bT=cuBGbDUDR1rZ3MPLH zKK2G)x@BO_zIgiGIxl8S`70U~Gq+Bo2GAC8{4hyf%nQ$x3N9v*oLjW~UF<*)Zi2lP ztCv+O%I@pU0jR0<2Vab9%<@%yQoOyq9gVv{3u&P_K`ij9+N%O#M$sBV+CJdH{PdB1 zWnj_xvHL6r00szP&DP3FOTmGDxgUS@*r-Fx&Q9q|k6zb2@1XgFRyOvnPyALMum?cv z2lRoVv9;CJB8xdPm)is(Ky2C+6}jym(WtUWz={C4ugH36IO*22R7`8R#D>qeT|<;o z-%tmqi`m&9e1pR;Ut)hr5l{3JE^7!H%xcwTC|EI%8+@DvQM}Y>5xt=ZT`_}}atE0k z9D~_IE_}T5_)7Z1FGLSS55V%jd$525$y#r%932o00jBbM(`8B&E!*E-Iy0Z1D@d_h z<0sJ87IbqLvOa94Wck7`25Abx!BLm4b+d4$ya=kQm9S*&E;`{Hv=(F$sQ}``SvaGm zM>i1VgyX+J&GC+!P7=2nf=07+X#~2x=uVfVSwaQ`u_(Z3PfY;iJH1UPcl5x@-~kc8 zJ()gj&x@R}f}9))7}wFaGeMf&t5}n#-)`_LoypS<<=RU`z4^^-xZc{a05c7c(5%U1 z^!P*teeE>$xU(~y+wpwYtrr@F-A>Nwf0jpP4;%CZ$N)`<3wdVQ0SGHhhf;`LLv#_R zb26-bu8-}j6pL4G9jX)~UeOV@T~BU#%P~DL+=n=`W}jRr1lhRYYL}u^i;Q|`x?tej z3ajg@zf-2MfX2Mxhlrk4Y~thK%b;~?ux|0IBLzJ_1MzpZx4(~#okp)yQ_+%b6|^_7 zv7&jhfnBflT$6_*V#Qk^O<`fEGfGSVxA55bGcFRwh(#(q5typEqdrtP)1!-dH!TJ| z`XQXH6gpCXc08|ug^YX)pE*SDv)V4Muip)sGJ|gBw>dYjzpqD;PJaSb-UT+(fQwev z)IE4KgKJ~66B84sr~fhtmJCM-^P?H|7cat0OijaJ&Xyf`)C)DY{sea1ze=nz&E=Nw zb=@RvB!liK?F|7xUq4A21rE~IquOcLKL?l0fQetfnb)S!fRf!&Mfp! 
z(Vu6M3ltZ>LJ$rei;j&EDNRm582`Za&1c-ZB%*KF!6cf_!8J^&>Dmb2B2z?f;xyDug zQ?d362>|Au*^8G*yZD6PdX=FFN2!JItvo|Z~;-c};#E^Q(;8Mq|;lAlfx6Eou z134*)8AWt!FHCKOKIZkAD6FPhkAwHDNf$33-u2M62#l0Gyk_{?Ie6ZKn{(&rAnbZC zUe(@LSkKWtNgOC=2PH-zuVRu5SE9YJZ2!{qB;7#XWqqnR3Rq_3LXVLb0IzS0T%a{> zeNCll8arPiTcoqUx^!ny$rdd$8>+OK6Zbn{1RTYC_&aJTgLzljJFZy4h$Pz7D|ay< zOi3=WNrEWQJXvr);Q0h)f*KB?CaD6C-*tGKF|6I7OSAYgdMNFZ)yDlWhEi}Gc!%c& zHnv7AI`N|-cYTxt5Fj_-aPBCZD-R;`B?4IN^=OMAz&QXt8yjISg{BoGMS?CRTu=gQ|7U{@qB-0D zwQHL-9TsGMzK#*L2sHTk+Su5bIvk2S+H{->SL#@&MBM59r+0SOqa(%CUPg`1A^PQi z|6Qh*$`+f3QwKi0|Ci)OeE*-nZMs;eL4WOv27|*#Y(5{J{3XTz&!-gU@9+}#=9cG@ zyU4`-?)<|f{;w;~M&JGad9S}Xc0b{VYaN>_$v<~lxqF{5Y}UPF;W226F&?{2O-*I9oc`yFp4am#uJs-Jx3T~CQ`4`{AMzBMW01xV{RwXxW+4a=Hq~p(MxRcY7(MNX$Lctg~;HlW^tYzBz0n9`eSm~eBDv2w1r)DUdWw+W=aFY z-=B1TGY>5kZK?T(Am-M1zz@N3&|kP`kTr#V3X%vO^^+HhhYIyKRBUkGYwNlGJUHxg)j?ES6pT{w^w?4u<-1;p# z{;Q6F*9L_xRyI#A)hO@S{I`190{K*(;Znv>f}qWZOKr_!w*et==lT6c!=K9l+;8e= zIgBQ24RtwO7NS12H`-p7x>v>+JKBVE18H-zwgYHf0zqdkNV{XXLL%zPHHw!@mYQjR zT?edit0I&c7#i8Ql6(3}N{5!yADlM@fFPf#2>rKv$d5*8+VhX5+h{JWNPPc3`js=N zOVukZgbs(y5E{_nr^^xd3QLXp6VgEj77aF2mS8FxlK)69z?(fS?r6e-g+me(=$|g9 z0ga0|n%qLO3QZw&B$f~F@2tHf&9yc9(fFMEuNINwZ#v-P={po?gp5GuOG`b@h4nyd z>ZkAj=J@Qi&{N{n?5Tvf&mQMC2IM}bZ^#L8zGRexb8)+U{$7hLJpXYz@R$+bzO~s| zQb)Jmxl0nU=`QBuPs9Jt0HB^9Pjv^hiMcr*@N_S+0uTq;SVQUM8%Ugh@>m(8Rb{zK z**l^3Fkq|wG?Q$wRi$e$-}FqRUvL=YIq+o&ag3MMInYmuB(*Fos>=cKM6laMyAexM zUo{cGtr?GW+EK=xb(36Nby zU-yDMsyW4lo;Ktd!;3j5S~C1`2yy?*^ag07;LDM|a=rURYRpf)rNt1phWNx(03S|l zPKnodhYSidObW7LV6j%8; z_q?uDn%|LIjTUoD;^JdDM~;RcO5{g2+dpVv73jttC9Us(JRn1(n+;aGSW|~Xskv6L zTn=8AVIeX!)qB>R|3$-RW<)DJ#9q?!-)l^udH?r%g5d0Xnfe29B%=)?`d$v_y+WnAFW&(Uqso%-6z&Dj9*64q z#@cDxfh0TnPunGFdx##WN8nK+|D2#AdL7czKU-@}T>;c4>Z2j2_bpA~O-8&f3ZCVh zF)q+_H$rE^8fweIS3qDOa^?|6>PaCC8)TC;KRiAuv=7~?c(rc1eHP-pBMJ=qedF!s0l6^Ud-&R#>Z z?+?UcQ2$k|21X*eles&)aLVe~ zfa3;`f)L3zP@rEOw7rD@K?&k+wKwc;f2n zXn;^Hl&!H}67iwL%KE(vvngIuNO^vT=&r1&6VIIgVGD;bi(hb}w3GU`!NtX)tH<{H zul9=#Qrt(T=9!;=sIZ#gu|20iJn%_I-946U&D8O;IyyKp9U%`~U{Fm-a907)3W3cc zlyXVo62Dz$Fd+lzFViS7-~#=t-SQaf-h{K$(Qx8kX2{#KQ$r(W4H@7Kkxbwe)S8Oi zh2tNS%GsJq8!>K0G%-c)2%2e^N3L|(5in#_&XCLef}~Y%?U!zwvAsVl zn8B=`u1HHuMU~Jy^NK4^vw4zOZH3eNMzxCzlo+)1SEB)iJim`)oK=tpDiGU~MvYtJ zV_AvuTfTmN+n0I#7kr$p)sm0-j#d#U`H_|O9B7#*D%Trfh(M&0T#C?zg3AQM{OFA& zD}!2rNavw;@6b^A7-IdYU>LtPF^~mSWP2n(>z1G(kK-;{MS-?a%h#d7Z{uiQV+W*# z^op3^4iNZFWcE;+LF_FghO^)xG~+?9Hwd8wNCkWIwqC)3le~_r57}F=0Y+cKBoma{ zAOIbB$E^GK#3X(a{Nr~G{-~7n7MC~0oJM_~8%~PfGfv4j4Qa}y{zYTmsShC%1 zp^j0tA)dZbzxx;n84z`gLaKX3eg7`jczH`7m6ZXLonJ=b(RpCJPwsXVA)W9#e$L2xa(=I3_X0D?uu zvr-RXj-9;&y~F9$M_ioxKNsESp$cDf1oRsKxp_l$^kOjoUV5jR;wTsB)8M%DOy_v{ z{$j2~7dHDY3b4!zgD;H7>|eO+Q-)a<-HnM`m|sbCB^voB;(w6MaYSSPk*M&SXe9Df-ODJ9q=9tit@gx+Kr!n^3T`W((W=(l#o|{z!(~Jj z)A=1Jcz|U6&!30ou?aTDGXt&X4x2HDx-?m~+eu(VkwLMSP zyrRS2g48$&NaA;Fk2Y&JTg`$byCH8fildk~~YeQ$}&}o1}vAx)q z&o8)ud2apQ`w2ykbf|nF4YsCcc)BbKJMw3Mxq;y9!x@oum~z3ua{$%~&G(25G&Q@_ zOT}r>g4vp!CrVFz{C(eA_Y89cV=uhCj!-d!YA2F{!wXa+AU0|4ZHs|eOR973wf<<4 z9u4RzyR>Rm|5b&|^LDnyc2afEN&cFx8%M*qt=Aaj%kb0$3WZXAf*hHd9o}f>mFERy z#Y*2H0>E9STK9`UW&FAaAR}<*=*&=Hrm#EaKZdA6aQox;Ia@m;Y-*8zrDkZJ9KO$< zTKB3k?JjoqdLRp(U3>L9j(wC1V4mH`oSlXeIt0`VoVo1YyeEjBK%5nB_I!y$0WD5U z!aY7M@4Y)Gle3n1~1VLk}F)1zw27eB8?F3K4dd&PS6O!v&jBWj2CcHDONuDu$b zrj9*0YEvHb0HEmv>AYw6=u8sm5wi$}nAbI+bm=z+e3O8H$gE4~Y6_eO#60b~l7h)` zlA!SX-mhu+M+%^17_Bxi=!j_Oe$F7$%zR26Oo7?o^=lRG$238f3yp-iT0GPXJI5!l z4fW$!D_Qn<#MxWC@%BN4wtg!xpQqJ*Le#^SZBXBV?lkib} zoWJTC z=MHj>M%9qUErKlo>uxjRJulIrh8i@<{0V1t%;TW=H6?kHvCQF$dJokW`Pk0hzHq1A zZ5F7h@=;#Niyf-akiMoWd5Ap*h}qrUJ<>2_qo&w#+$Ds3LgXg{Cx~+w 
zJeTBOz^aL)(EoEn^5}JK3mcwPx&yX*fku4D`egk(PkFKo(29WvumMXQk)y1lQFFco zTV&q_J~C=6M3~p5!AW9E!P%MNV!yX-}{1sbxSD^NWm znNCl$$ZTcp8hP%9(xf)G&>A$;UACbAW5q@S+Hd`hv5r$$Rt%r&3Nupv&}B~(2Fanb zQj6DH2!YtxxKiNI`2HJ~Ae-Lb_?=KUKK$76gch(xP<1tdxPFmo{9$Bnn{Myt{sl%o zbG46@bsVUXB}Sd{z*%I0k{(GHxnWx^b8-E5AzSymAQZazQ&z%SJ`wSP=~>tLGYD=yb1vEn7UL z!LNtfvku=9klD`sPRWt3c{YqLMFi>+wsSW;&|PG`yF3)|fkE?_k9OQiY;KY{b{{41hZna#Celn(tQWne(^v44(WXzGH3y3gI zg^2FW`HP#(c{Yx&08uFyzgvXs=t0e0QI>|^CO^-^JK)XV5)c@f07i$H*Paaj^07X` zI$zvwZo4d<3|s&m!S121JE0?#3EUZn>|C0498BtuSsL=dI=pB5PKrkqXE8ay59D1F zL|kzAEN8x&8#gc66DZed11Y;if}^y42a{Y}1tkO9`wM>VbZ27JRv)i3h)$Kw=;#zd zcr5NC^4i_VU$xfbEj~htPJRr)<35EPJQgb8#?9nJF{D61j zfL){I+}jPe1fU9lcf+{*{kbch@d=5Cf(1BB`+G9K4d%KorRH3;1X^MxS zZWan~TDa)t*|POW+D-evG6QJD5UxpUf6?zOAgD7?XqtnW$aMdE=Kyk`zzJKe?mtPv zOWER;Pug1zJJT3f!KsKm;l`|Ro@nfDt>+o-PfgN2sZ(eUj` z*STAVL+0%EQ@hHY7~mBj1Ac?>`-i8OCj+feWEw`xJ224s9)!(~Z?q76EQ6&8#9gbk zN`|peQUwNEgU|Tmm(tdCj_&0-;Q^^8;@fa^$d9QFVr5$WKs0Rqi`^4YSZEr?zowBM$QI#7pc_yuz;KXy zaUsNJxqip*VCCUWiZe_yU%=9hg2-FCbEau~0t15=09CPg*^mIWEB!D=WjiW)DzJ&v zZjo!JrcRyHNBy&WBJX#pmtVw`l^>*@IyOyqs~9FiR~X3?TsGm#4x3LT0E1IvzrhBw zEV0h=#J)bc=>+Bi9Rb73QX7UN1@J=|p~M}jXB8{IQ^`*-Xms~ZT%nqY0!sRf6y7SL zi?{y#X|w+Q4OfB2GbqI_waCkT$*AQD-l=h#5ie9QF&2W3ka zsv2L|v$CaOVs6R~pMW%m_2vYqQ+|En5v^$%s=S;O4Wf4!{*0(7{D{Z;vqdG4XR%gh zfwC=_5kE$-9Uao9eL?|-Zubl^C-sM$1-gpG7QUDpYKU3jbM=K)rn4|dVeBe?Q-N06 z_OEx@Zq0!@D+oT9S|TyBqOyNC=;NL_n*3_!$?54I7It&&z*8FNh@^w>`u<+;5s-%}o(rIUFL;*h54BkCh7-OO5d z&d7Rw{RNZuQbbq)(x()y^lU_?kXnI{y*2f~>fN)Tm}EXF9q^%u_!`dgaoSpW2PrKi zD+p(X1irXwx+DtfOKw#24L1$vKL9Z+B)OqSt_rfjU`hyc6H{Z6+A@<5_v;>aFxp%F32M7O zpu~e)UwpOWHDcCjh0LwLrViORa7-_b9Y4uU_|M7?5%9t!Is)ks-K_{oXp|UcLWy01 z&}tA|hj}dB^K&EB&yRB}n|}I}AMdf4NdOk|WwQWIkjj9a2rvnRgtRXR3F9S~>;RdC z@K6F=cbctct+K`<%d}5^zM0${{5~8sLYbkYGr{8+#m3y6ZDL>ia)}O$lUWyCFGvA38;jhArc? z_gx@*wK*!wU6`W}BrUV6mgj=Itav?NnuqN>5$GD!+QYp51a1f~ckF#pfNAwFL$fzp zC*%;_9;zti3$y$)?R2UBrNhp5%!aIj0nvGb(~n}u zM@T6DX5Xc0@XP`55oGPChOgc%L|shO+4wTz_v}o~-3H_lkS3#!i3JXm)o*uuSyb|= zYC^PUmM;?#rEHMwtKTm-Jjrl3==?$ox?9smE{M5kyEUZ8S?Z3bh3T9wc5lA9tdEI{ zWw-0Pr&+%BzW!}vtthYyOp}H2ASj(#*%Y_PO#pYERcZ=F@UUgUrLgM}Dc;W^-)rZS z&$jCD#hx;Kqj=4IIOX~}fF^*3HjV&Ssr~`2xYD4q%w3t+fpu!@+Bijl{Si~H{*xD% zNVK^aXqA<(r>6Ig4PvWomt;q8CsetzK=ST`dh~{xUGVm8;wt?vpR8qmlN> zCS49BbadX772HJnAod!Q@PLTOSy|{Lp(ubp>Z%@nN#SH}un74>Xt19s)lV-y{a)9Z z!dv09nwn$k+e6p{9Ir`2$bJXMEVq7=PFtR+OAC^w4$Y9>n4AzS((ADynQXv#dM)}> zRNVV=(weUWjCH+Lu1O5!T$p5!v82##tWA5-j8q0MHzw-1xwiOy-TAraHuEUbGBdva z7`>0|A1Zd0k_xJpuY>g>(PTqktP5pAg?K7m8OIz~t`WaHGqY6sQWQWKl{e-^jp{JF z>dW5N{>yKvP<4683QhC0RK?Wljj1nXs*bnoz+&X#XmS|p<IN07>RM3OL@f<(R zm5iX)be7?W52W^ps1e4{4$WzscJ znV}#;(4E)gB|Uxz2M4FlLe%PwQ<}*}Zs>}3E(a+FPF=Ih{_?wUIt3~wCd}K!s+azw zC2hk`M9cx-gvM$;I8~XneXf{K2xLLK$)#(ntkLB~nIML}$iR<7tgUIB;nL~>yEj$rtCtrp2> z1u~XA%l5|iUCgWd=r)g!J!Y@-RB<)?n3f>!fP*uHY+A);fUoSywunzi$M}GX>Z{9B zs>nZUhB+ze>A}1MEm^WH63V)53vY8zcU~G88ogRi<7P;B7^Wjs)E5>bXov|BGamEH zo0gUb!BL3cM@Rp3<$h$N?UT{;9OH6WMpRVvz|T7zBPxFkEmq6VbPk1*YCR=Ggep7C zF*CR+IX|yRe~*vDV&aCkU+93Ha@g{t5WvZ`BA>sG$ z;^4ff>#vkj%AIr|a4moH?J&(+l=UNRcP&MjWN+IfZh?I4wX7)Byi@0cgNj$?Ti*_k zzrEq%sd_bNuJriTag3&%tQ0)cKHH}^*s|W(FIk;sFv@WFhI7=y4(5(EIVjPFL7o+;LUZ&FcZE6IxD!ma*! 
zy-580#YK!9_Vpk^ko{!P(;rS#@I5ADu5022uprqZTRt&qZCX z9bD4i&?WgWWJh53LzLrxel9fn)?SybafeE)6QvhdTC=gH1{i;_}OS*mA+g@t^f0Uy4xNO=lJ2hHZm#eSgu z%6FgmrDBgs+b7E2zP_wt%cWn5MQ$g%OR!Ze;?=iDcQ;`B;%IvAhc~K9ITU$Sx*Rb0 zVUu4@yr!ng#$f7xD%#_#thsAr3%|@PzaJo4Hs)Whe8ZeNPty`Hy+lRRu>KI8^Rhed z&R1jlepvQxg!T#Ib6ZV`zJC2jNkRnKD=h3q-XZ1vz*pIeljp5I6=LQ6`&x~7$V9KJm(1)J z{Fv_e|8zMO)d4R3y<5H+-v9M3d}N5>(3+f;$U<9M(rfpyCn%lwzpKDPelkrK+`37; z5{D(i9PldJ296ZtKqw_?o5h{Z}prHEXmHGp#mr!FS#HnaUJ z{oM2wo)613+)mrCCaUdoG%XMI_dg{j-Y$wUT>o#W=%Dl$igk0sq`p9{tgM_F&U+d2 zNbmak#zv0C+U)F)2mj66J{ceAx#g-yoSZk2y5cx7JUl$&u>AjR*s|})S?e7;YxeK> z-q~b>YUGHf{rB3x&PSGjE;1QC_i~Q4^##Jp|MTj@kV%MQ6mhtgX-D&K$AeL*rsPU; zMGhYpGvWJROVmZGh~s}P#vyvz|Fzt|$xCsPBK>z+I6d%l!ET=&npqQ0-+vDwDyp3M z{~lo)(gl6L@_dy4Yw-Flt+)p5F#qRw`*-#Km#z4U|H;XP!mr@g!2fqy|6h+(ftvB- zhcX7~1STiAF&vx=iTN}d3d(ib&(vE@#@n)zX_|3Z0UQrm8Se+^afh8Blf~J4-mexn z8X9Av>LvfND?%CFXd(2WNLxcsX3&oT;`gpz-aa zzVF!()wCd(FMjQW?SxCJdp2M7^9eLw8MnW$!C&9WT;p6~bY(c%4&)_U|QIt)c>T(dB6!OoFP;dK) zp=a_2)fHP8?tJr%)^g3DA?|RyQAcZ{m;mWhrR!%f&dY?0Y<3z<&L*8X_Vj3}%Rw;)5nlt4J zMJn%|?LYOID;0B;N6l8t<$BK zT9YT#^KWDOcPD}j(pr2djwj?3G_DBUP`bUgwjC+L%tl;vUl-lDn}uf}cJq!*8Aa5? zE6tY?N!~A8FRvpl6TVEI{C1RLY;nz9i80+F0H*@SV}s!CGw^pD=g{NdK~T(_h$Edkl@p|7~JqJyTdHHFuuvs@mo|)XKJ*~w; z?QytY9)2I6a(M>!&({`Ff_@`MG2-gYS*$=uSRjsf2p|9T>JE$^j3wha6C(p+Rx9d1 zqhMrd{K+hnoN5)MdXZ_$@e98_U}*1CQX-adK|bdc*ZQg+iPkBK!B1Z#q<|Eu`MZkKP$ahZS1y zdgWH$F)f}F!~XoCo0V2DpH4hou#QIZfejgmD2VKqHu)~yHec<zVDE@jK$xAJ1WL9uHk1qgW={X9FarK>e2OCf?@9QD_@v?BUyh}CC(?5TL-Gns z_uj6ipc66G?z&*CtUcftof4vBDzQ5ht?Aoi$TLT*@ZKNOHhar1lv1Vx-|mlvUMBwv zNntPyZx^gB#fD}0Oi(uvGeE$;ZOd6*c~%jkc{c?}cn*UrrO8;%VwyoZVBR&%wRuqrZ` z-UpPivuvq<^y!6TO7zW*OUbrTG!K$ciqsYLuCQ4*n<7a&Bgh`{aBh9Tn#21h`xPIq z<>0{!R>SsGk>Dl&>|k7I7#=SC&&eh-uDA8wbzAm>C{;2?yA~}oS80cSq{+&Yujxw8 z$iCO#ovF^O{z~#eB{MpF<7C)DRaw57Ke)fen;3TY00~c?Eaq6h{bY)#fd4bq=EE}n zA{R3G+%IY_`I*UQ#4_7!Ir)v=@7WmXsxS)9dbDqc%HNL`2jMN+@4din+oTFlRp++n zU(~6@Mn>>eD_I~OiLUSnb=MuK z=MtBGZxYf?v~vRq*7u*?2Xn4S-dzHmH5R2xRu)m zE}|oGbA+FO;X*>Pbf+%eyh0fZ{*tLUO`N_p68I92nrY_4whm|c+h_49R!uoQYmIek zi_vP@&%tZ^S-PZ-1TFkNnp}g9FwOCf5Slh?>dH^q=3`}F$l5@dBkX;A+CrwQ?bf#b zYxPY=a0P=|9tU?$lu)Y##Z~>S&*+V7suvkw$4h@6FU?(Rw29;`*ikSN)l>vbJg zXskZ&rO=u>6(pPFp-+%-$CW8v=EOn7J12{sC&9XuuN>R!1nn0%1!^m?@hh0#fy;pV zqz)YSC<{@S9$?xW7W9NA5@911n)Q;n8(m=LpmgJq6Zt54y7|DRllQC7f@3g}C`**{ zDIuDyJ5S4Ltx0}oxIy=k5}e^*iiai~=*?w#Rag(##_DO@=%fsH_s{M=!@7Q{rgrY*!^pqTN;`vWbEClYLD{ zWIqvu;P{7TWZKkdL;aJP`uhy-Vv|)~q)FH!Nuv#_an`}L_9!XQ*w@o(2*AF98Mi*% zVx^4koaMLRdaj8(2CJ}tt`Wz6>G~ma><6RC)*r)@s3@GZ!}#S)E#7-Ik)Kyhi}Zfi zI0=Y*))n+ClJqF;&bhWR$n}p^mMKdus;Pk;bYLN&*;f%)RsJwzTXP4;iNssz97DdF zU$xSG{eqFFL5~vRBWw3tZ`o~DlB`lb-A}#vjctA}6awwpB;#Ayq#cYbU9FS2@nHlW z7;|{!??iJ>J%u0mpv>-;lT)y=_((GOJ!6s3dxMw1uI~4z(JfNy2|0M;cjOsgmjlox z*u%eC?}lRI;NSfqWOb!zO0_pr%&q*ntcQW}JZihzE%Bn`fpC2&^0Y#hwYKV8O!>yZX zJf)+!dvW%Nf&LGcgB$O4_o>U9s2(wzY$@N=uY+Z8oM1{AFT6A6;?r7Z8l`oG*K4o}By{3!|Tk z_u}%&Fm|Dhx$j`iodlgTbeIG`7c7Op*9VX%)K#Nhe7RV&r9`(JO6ENm@A9omG=&0zoMa$IYgI$TlfgucHc6)- z{!8K+b_2YGILF3_`!oT$a(gbx5Z7aU@mOX*Q%h;+5EnsyVzw_bTjrgqTNaY%rs2Ok z9vyv$=tfU|YG3KN`f|KF)utXrzQxtGSgAfV44(JquAU;}vs`%n z(rs+czjpUoev{{dt7L!+(pBWvk6)9Sjh}IFMii1qa>-}=YzQDguJm}t2|*1O(w)LH zgrWC)^tHINxyQzOg`@3b;2k{c8+z9yy!vqlqvYjYt!*Mlt4_|$jOKSVyS;9<47zj? 
zUXL#obC70|zdXTdL=J)OY>C|He6%0lt@5Rn5`*iRj+W2wnH-*7TPf1%kOB{;2jiWz zoC(Ks8sK6PSqS7_*Sz|@(cHs3JS9}C>*dDP2o($=%HXNABP#y1%xHfzf+h^YakWg~}q zUOh9h5ZFI!;LV|6WcUuN%tmjI8_BDF>zwvMZ)b=GQ&jR~+tD325AHZSvTNflc`3#Q zA%n~QJY|deU@Jrwn@I&~3sJum1mzzViQgcT7-`4o%RK{q+I*$08}6^%E6iI}B-~gR z=+m=XPaq;i`jmLrze%|J$qNsxpj?tZ^AOg}-<@Sv{74BP*W8aw zE-v^zgS1RQpx=9LUubLPov<&OvNa|haI0lS7#~fzrflPC(9(ja>^+3z&*&7ELfYbutzRFYB4S?1k!J*#nU|M|Q+vjR5#d^j8WuXP^kG5~<}YSGASAv*~CM(No$0tNbkM4*FP2Ct77W@=Rf_ybSB{sid z+twr9R8ND8$c)a#>#zfqi7P_#TVoHd7&l<1Eei&uB2bQO%557K*#jEulI%Y(eGhKP z*aC)f;ZJgD;}W?R%tYF2e%f&0itKQKsquoc^dR`r`9R(?wywr@WDz`z=9Z%xN@gz1 zYc-xYcl$KjA0vhX>3ekwehU*H#REp>1c90UY+?sbLDdEG<)JVzIBlAP#y61=v$#Jy zzMqY_B5Il2k<%k-jO3wQHjkWGqEB4qFEy|=Qtv!$~4 zjPRXz&-;Ae`+oO7&)~jX*KeHXaURFy$nuF%Bt`qKfE{AB5J_*$W>P^X^}s< zpWgL;u5GMND!2>j|CLpnfB&-VR+ryaQTw}L=UO_nt_1%9gvLqL&Hnm7Dv$bi`yo=$Kf^-Dw(IF%4#%ODPh+sXr6pU%5-DPxH;adDD}$>J%~R%-RDW(ubC28@Q_x4swD)YD$gl#p6m&&q_Lij{!M&5>21dRCww z{kM+Q^#~^NTB8g{{~Pd#P*lW1o!ULMbiHJW3{?j*#pWt^qcDD}uaS|t+J?XznwgGY zzZ>G-{xoM~sMDJ<6+-bHxqpncJ$>gi&X`|VOsD{{&@%QxL`+jJ^pljw0oy(?sJXmSD)0M`EA-G{WOfYO=Jf9y<}m)=cCc&`0HU3)&A}n5aHVVsdt(D&y(4Vn=ECzQ^dc_aDFO5ZOm$`rXqtt+WhRhTonh z(Z^y&P2sl=UZu2AfBQ-Pw~N*0u>;3$tKPxf2t;cLkCza7Pdtjbd)E0^eHYx!jb30fe!qIpnUAAgKKAiNmol-`o&GzU$!IMr$yBU2e=a%(2 z0Tew(V=#@O<3v!jd4dIj)5qFyY-VXs{rsf!VZCO)aBtC@B>gton1MV|oKF%W5|SoQ zgV)%03zk?J7VdD`ozlbfFE(QEO7lWB>a3^xm&?lFS|kw%74g z8F4rLHD8jtvu1pN_{sgbJVb3zaqL`DB56Ay19Q5G&odBhf!<1^dvrE0Ms5#VJvSEYtI8n)*(dfxB zeaPT~tX#$gJUp7H`~X`p_pPsb)*tDoo=mAN*ui4eO)1IqC}f;dB;E9oBIX|*jh_Al z|1(f~KuM!`>iuKPD*#HeW;!ot_7DLC_+gPLDyP*niA6y)h2FZ>O_^<@E>~GJeZRDZ z5sQP+c(k=amgxadh$Ze>4ux{;>g|m(Payw@nH+O4KAskxpPd^tU!IDu#0PvjIIMcj zOY#>hpI0jL>f!x`iX>l|4FR>fmk?B;Qs*vpQhD;C@GdaGHxv5eB&S7W*vDQ#{S1VY zY(>%-&j;=e^~?kbv`3PJV+z9a&sEBe2i1`&pH4Xt*VRjyf>2EW3o0pW#M+_9*)|1Inz! zYq8h8&^kJTmycIjG!(H<=1$h+k6Fp4o)$ z_VYK_-NniI^hX?wP})9L(w)T`KXP?_vs~GqbvVa>JF+qan~1l&!ucl0A7! 
z6ea=O$b?wa62D)t+=UO0PU&v9O|aN=A(nd!^TYf>dq}HRi^1HFS)8&gYHHm-W&2>= zwH=`4KOZLeP#GmF`-WTx*;cB9foR@Xqy%c^QD)Qbo6Uz2f7W?XB(eJ+2Xao^^Q>VP zx4mxN9|?*-v}e3#2hKk_PBM!j0Ot!jZ}OoFTHBnI@H>cLkFKe!LInOI`z zXPNAUM=Lo?E|g8Gp0xTRC9==9o~dVOI=ICtt!la!B*`$D-H9yxfMN(0?1k~oQt#i` ziE)HK7Jm@z`OW$QJ+Kz2PT-^w2*<;5Rg~?hW;EdI8kZcJ3(Ff2@c7G)dd2#mi7&iqA z_U$Vm+JzafKlj#Z&eY)FPcD_e)stppi9Tn%H4n8f3bvyPGL5F&+%A@p5WlDIH)7@9 zp&K|L?_>7u8&H&jnjeU5g-|9{;vY|MI?na7N{LhIAQ?p1u3@E#Ry!>#B?{G|;>#EB zdflJG)DNZ>;%^%+j;7+k;&t>qRh<@YB)C@Z-P%1&Q*+~XDDVYP@VDM8JW&$gqbb&y zTxyH(j!hyC25vwUN(n4uei$P`x?H5&u2|+%(`>AZwn`=Rv|Az=Q^M3&npC#ha&@F2 zgwUUCKGqW{q3~l^YV|-0@d>6XdHL+&Iu8@Vu5xr+ia}xHX(8N~AL{S{=)}HH2_4K2 zAuxD%ePoFL<<_>vRCX=7jv!|7fWse(6n&|*YHs{r(VJrFHT1fIC**&I=8)&#uK)PN zLJu+RJZ0oY^cTU_T4S8hcEV4RBJCL;-?U1k+`x(fsqQMor0VijbU@kL$DOyqOmw4W zMaFX6mx@u1<{Yj-)KokOT9V8(~31TBb zwLeK@A=Mqrb1%!a8^Q^R*0S_tFT$eOh8Gzew?;LqJ;*PE`T*r8iU&FfY|^T=`+d4| z@?2wZcq~LB{c2bxCzR&}?zlj3wX{W?%F+@lh_798XocN_n)Kzt)KGtI)i;fgw`Bgo z120ZXVmRJSY6xS9`=aau1O#atp*3PN!@Cvf*W?TDJZnK;zi=n~thVTcnuyvxsZ6`J zA{xmE10tijPlYA@4nMr5WBh~W%YvpgY(gz%%5!4#{-1m$hwj`6*li8bO{mot2qBILDcZwTZ@fCFBwF>NbJu< zz}8&Y`7-j%XRloL3EPDEGQUMUcTI{kyJXXSw$h6!na~>lt3Y$J&Ixi?*o4y2o)=d3 zP;7u=GT4gu?Y2{>_p@u>3Ghyf!kOIgA zYAMOY;h((GET>aZR16<*jDv8s|A3uF^K@>+koT`Wr{M=~zg;jEwi~VjtL@_aN(i&Q zBz>XlCub=3K#jd|OToPH@C@q;^&e_{&dtY~s=UP5_vLN%7Jf}(%m$h_4^C_5ih!63 zoCZ4ooN)C<(W$)_h`Vqg$j| z5#4wPD&~NQCvR^)n8$>5nwDu301+r{W*Hxqaj~6;*{E(H{rY{RE8HBH+RdM4x1%si zYpzVZQ`|)L%p-xS<;eF^<8+`oGCpj?0F9bal>>!-+wL_CeV$W?>?2aa6{FY&BvRno zsTSSuA$@63+ZV?PLCU)%baXF%b8h$>(Zo@)#Iuhg&Z>)aZ>5blm+_A9#0T>(cr#-& zAu_byALTDqUs0MHmDM)%6M|`uH1xtYz7#^>ERkTg@vj$P1p#%kgre?cV(0sxCTmL^ z?Jx7s`2o9|C%Zx*jw6(&1Z;%T-3mJd)c;KQxV@!-h18!^wVsMS87S#pyk!ca3%dU? zKZ+T<9oqtHvlZD}I+=M3%IQ1S3pBv{EuJTBgf@B;K!SO+;%l+Bf-%6!urO;JD%*g< zWRb@vWSc3|k##IpH~CKJo_0Luu5b&M%3dHzVdKybbnY_CB#dbc``NU0#8Npn&fhv1 z!@sz0n92a$9+WX3EuP|uN4yrX$L+V}cJCq3o31C1d7*d{5R2f6q~$jc>`7o6y&G?P z9-EkKuO%(PV|=Y?GppZiSG3|fopLh39v}G?RyBO0lyp#*h2CSa@5z; zbqj<#UyNh_E|fRG3VqyfEXFIhCLZ*LnUaR0s(I65``xJ}ukWe$h2IYH>G+NW^5n za_uA>^uZ?vT;6q&*zKa*P$a>NaUed7l0MOYmU7furaa-68?)NS2YxHb$>u4pS#?Ya z2?~~m!%`OcgA4DbQHb{{8|wE<=j0UivFH@WD}7UBIAXQ+902NYIMEY6Jl*O5JAyL) z>yQJcmfWH>Robxx8wdgt>eMD+gsRqA`5!+J>(W%l4w72^_m9q zxe?=>t3>~0N8YdveeFu;+=uR0;cxze|7`z2Ns{`Uk?om z-j`-1Qz6W617th&dTI>c)hm=ufxVRC_+9-o5)qW84bfRdA-W!FG1 z6+11ZMVYh|Lc}Rvzex!_n`^)WG`at-hAPKge~lS5Uzw$aSJD-%dZ|!}0(PqQpu!G& zTiNRc^v(mpYvO+HUJbNZN?fE2Y*PF0%LoBnR9BmoX0?IRNv}%zAOCFLZK&uWA2;{U zUYiZf5~=WYm!%?Xwg1S?j|0frXe7rim}ti=vz;DXHK3BevR}gLqyBMo89% zhU8af5lF;N1fNG_8(9+w&ros*!pPDd2ME9TYSX}CdY(IkwJ*&}j<$n~Vv|;PdctD? 
zK<-C=0rMM5tdN*%z~lhxuVMoO0sQIn`fAL#oC3 z05%vvpDKE^m=dY>^7S>4=HrXN31T@P1yn%&tpokjq>)b*on3e^m8 zqopDAUwBc#*2Tb&6Tj7oJ{eFpI|U{O6`DXuY;{@!+6U^ttjZ#YI--)TrtQL*0ov@( zKS)smWuKee1_rpNOpkGVnS*AoAQsMoh6RTZam*F{vf+~U=Gg2X9?}8p`|I57Df5Gu zv6*O8@7$DYNde7Ci^k?{#?t)oRf8`&<1z1-m>sk&pm*?fI|s#}ot( zywc(RTv9yFhcfhf#mYQ(xM~7!$MxjRtQBoke#m12(G~QP<@3&}_-MF zA7-(2oMKC+^F&?tKb+(WUsjzPkHZ@WR!95^Po?t5A`D$=H~>4>p|es(LG@|eQJyk zZaP%7(mi;ex2sGi#76^l#q%<)2@6F+a4mgDhSxwYg7@~M)o@HkCB=|F>_S6;_%62@ z@fQ(d;++XlKmL6mp7Ak5(|N1!;*is=lcuA(UO&K8C3_JB%e?12$9V5K*ykQJk)goZ zQ2@WwkA$k&ge3ZcGDp|008bmEg&#Ykizr~9Q0W(Sf_MaDT7PS3i33ICSljim4#zElPs8_|b?$&ag#ug~sN#WlYlp)1n*4Ro zQ>w3a+M)i!G$^S=J<&*L=hO!lJz^%y4-Fvw&=KbX$`A|p5B{&0QeA0?fQ5@GbO?g> zRB(zY`&@L+Lkm+zQ{3fATGJpKhSPS46?d5jij;?l%IB>59VIg9%^p4km*n8$mo4@0 zP%(C{RHS~7&yAQLY{!SSD!T@YsKlX1f_F_AuW0sRS|JdRuhuY;2SkuT9~+qGskW*h zIr)iLH2nj<5}jjonWKci{Nb#~f*eF$6AcCwYN!>~5W*FX?MwjDA8gvTgR}hkZ*If@ zljSDK$V|GH+ED1H5Tg-A1&PT`AOQb=Z_RpqY$57jfA~Q}-d3M&#=;FsLNpI0Tge#y zrocdpI*Zl(yT0ZZFB<0RUvkhMicZD4EX=Q9AfQyi((>R8c@BxR{%ntDg8@SD1^aSL zYd!lj{CClkerIXmBSk%jUQvXodD>guUs_=?8d(}+#4dl!zhp3Z zLQVCqva3DytN1_EmXp^^He_cGSU5xO?No?I#W(s?5>o@W0~o67eQyg!T@ZgY&<6^0 zXl$Wr_aW5uH^hY}1HhO#{h*cxtYbzO9>r~tB6Yo9ChqTDU8YE+U){W#eByEmOkx;p z7D~w4Q_M3IPN{I@1q?KL%!7%ro$)m?S$t8FAM)2VpPq2}pB%r{E`l4BDW|r4LnB4I z7Kmk&O~b#B?LGgbbr)u#jl^aceZ)2URgd2i_21^80Co$l7+n%~!FQV1v zU`+3m_#DlUbpctv_U)1rH zQH?G%rew55<)4DZINM&u@V9gpW|s6rFs`nm4YEtU*3!WdG1Q!$JuKmkxbAyIk7B|> zevMCQ0{R>+Hl8Zb&qcGfrku12_;dS4U!VubuHL^PMo57u+SHB?DtQlv%A{dTmEzeq zMiWSkC#3srJ7as?-}k7MW<1Q&?Ni&pZX2St6$X|XufkJL3fL)d(>c6DImwu|PN6w!XCC}gAtjG_)RkAh}DytP0dVlQu#MOH?P0c`6 zWqejgP2jriag67U0VE(`9|S@sW)p#c!q(a+MKJPA`POqZ?KJqtE1*rmLInC_SYw$# z$)p!dN|-v%Y18}2_g&|zvJx27+EIz83Hj%^CmZ^aPniQx^{LZV)c=3-SvxZIq2)D@^1t zhTTdaS^O32#=1uR-Xl*F=IK*rhLMZxnvsZ1SF^-34)}591-f?4z5EjS=YTIPhg=@O zG>LotY04BqdAL(Q7%VisNXE?+zWbeap&5U~XbMSLEdM)8w@dj6pa$eg&!Zr{`=M`r zvp-Js|6JyCxepG|7Z_yjrc*3|uad}h*3MB_Q22)+=qTB7L7WV+))`K9&sqEQpByRM z5K?4Tyk^UDn<-zjMC1(l&Y3UURxqbsYEH{t^&al-C5Tp-VgS5Xlhk89rS*8L=JPP0 z=CybE1vc{|lqO-sxkLEDW@G2zkcn;&or>=?Ss4BFgQr+;g&MHQiGz&JIJUo%{ojIt z0c8luF}U6TB}j}DAiqYY9fV=HMJ$VAPRtsgkFEDiWMmY+9hXp&Fek8@*lq+xws-M` z^KZ8BeRMUc+jTSR;;t1av3>6t$y_;1rzjs)`yy5ZxvVCLqxeNIDQ9vZO9w`u*lXz8Vn#Wk4(gm zW^wlSbJf|%PmRS2H$)Lr=#zZ< z|Lm-*vE=7V1I-C0^sj9;>0Gk> z5Ae!{B_l{Y?$9#HU!y!&T|0JwG$AI}o;^L;E;lJbS# zW`O4nZT2Fgm9eZbjOk=RKk8u1wfX0U+>o917b%M+bH+=E$o>uM`i(tRKf+$Jp}lY= zM#8aM(Vqz>2TpDNijS%+unx*f31dQh z^o*Gpw440CMEXRdR9@fhtR|ZHY5*X8PzZ%xL_jDGPRqzxCXysRdMP7o*nB7IIz*u-g)&~?oKy&ze^NUMDac<&+mNUK_Ea% zQWuKeh1BspEGmNgdnG*(FiJxf$;;&5DZsPECLNOozKDGI^o-GdWEq3tF=vRE`HCFk0)p&mya$0+V5vvO6> z);D#S7COMEzkIeZWQBB@asVH3w|ux5s0*W$#VN7(lb^W0bcy)tXgPUFNSO8|xD6I~ zo-s!jCWHDN3^NR*by1OUnZKv?`>4HI6veIPm+vFt?q<10N%!y`Y~A|dQz^~_IxZfP zS`GPkIwYseOm4N}oEssgWq1R`T)n<>H`(qVCY|Qt5Ei~Z2J)7dTK2;TPiatT#U2m9 zzA^J+YZFa)l=PGD9p^>VFiAs1?Iu|L)D#Rg;XNY~^aOAiCK}2pOr)CAuBxWG)9p~* z+h>!0kA1ydDbQi+2fM~rUA zBC>!1iZ3UK(-L!apFSIKvPpuCZEREN$-s4RHK3}~$YUwiwKsW*#9N-+9_%*|YAR=r zIx5eHpZX8vt*69ksP1MKfs9hF-e#e^yLj4aK?6^E#;;TFK}0kzsh;sOBIfUj-8>I) zal`@@RNo&bI|bt8C5TigLIVXE!Gj9c9>7)ez}A$yXK{g{L|N{EJ;GvTZKdO}yAbuo56~seKk#bg_?;0CQY)^`YT`@Fzhp%W311 zlKPGA8nDQQjo65^03fD^6Aoay0Oh6H>Dje$`6JQgc<}*ZHMeP<+N^D&HXR_0WSyfMpVhv9eAS1tI|KXx zkqlb*z@Hn$o@?yv$?r>0r!t59wRP?9IAa^G!>ffldazL`YS!upKcXtep&h)WLwOhF z*i)fQ7ZzKM8R^xt!U6tOCHu2i)x)t_>2vNJ&YDaxTAPG3Nk;W>?eK+lzOy7W9o0_1 zQ<{ME4hyHI)P6;8B$_D~KTGO`@g&Q}=!GKfMF5XY*24a3?nQL2O~4_8Mf%wMKhzq9&;j_8B6%Q@j?s=&hby|}IYul8|;rDi0L%QWuXygBb zt72!JqB0;?VP+|M%_>teSTtAt=8B(V#YrD-Xy-!ykDE6;TG2v)UdoBX 
z2IL#An6YHVh&51{3giX`2Zlo4?gPyO3e)3(i}~f(Sk?7sP>-EQ45IG&aBCnSukPZ8bQjS?$=x)f5pRmF+Qj8bQK$*UwG;K&ZPj zQzx@2kY=_q9M)g`w@fd5B@{+9a8D;%NMfZlrRH@h_#HnP7Aos|ybyw{_vHC=b~aS& zwx)YuN}hS!qXTDzXoyWBQ{807!r>7k>0$;G+0CI?jY$z8Iqsw_feMcau?@*@i&cj% zOmQ|$w29&u4x@C%q_kY_Ed9Vp19~UmjRi*3Jx_ysn|i{)%Ly#sNNUS`jGs6;rnQ!F z>oo)GGA~WXOt7lTl|4cGoT&na{m6dalMbolw!vxAw~A4{4Ow<--VBQizVgUt7{N73 zp;$ZO_Fjaz9P9I=pQwTG5AsHC zds3GBdEcFkPSpKsNg`M6b~X17!L7Gs&Ha_a$v}pkQn7+Fo!0Nh%U}0OT%C(u3(ApG zq4-U>kDUnCDUX|rjB$vcW;6U2z16?|VT{x`5^ECs12T^>xnB}^^X&@ZAA0*wqAEi7|#!&F#J@%0< zBVkLUO1}S=&NTH8VcDQSDH~~5*Rf94{ys6)r`P+}SKB@%W8B^$IDaMW%K8Ua|ZFZ+<67OeC829Eo>(Htj zhxM1q#3i`6K=;YXnrW=2y9jgfytD9nk3?AF zW}fV5)CiK_L7@KB@5R>)|50}ViTyYzi__ojvofXYFrYdUoWG%{pu94n*|5;GrzJ5X zP`Ke3ia?a6Eg5f2$H`cFnJ0y*2vn}~Bx(zR@4;&F@6q|3&VSCyU+DNY{y`i6s^GMp z5aW3I-OnyWy!d{+Y8Y}5$I(GVVk&LWUwyd2Xg!(lx^gb}vtRBoc;(d4nu7pd=vRyG zSTSBGIgmh0_P3$P1v~XaaonUDG%3=kwjHxK%Gt(paNx{Zh}}yetlcDX&QWg4c+fvi zRVC8j@06I3S|9hso{gP~!cqv;NWcqA@9eAi6x}HKUv+F*L;-jRar4bb!)NmotXMN^ zdq0)CaVnr=!`O8oX)ev#2gC;ZW=1?j-OicToxdmlFFaA+LHpTByv5+}wle8jF_~}b zD#iaIHNN5B;#7l!l1fwmt)j8Xp4f2-*>#}~pa18hRGq4eDV{`L$t?JH5-a}ujK7uh zdnY2I%1%q6NQ(b$ufh6{s)q_fkcTwG{O_lP`tsi`|DR?a>FsJqou+b+-c|m0M+dDq z^{Gqs(g|FKofS3{tQ)3gW>@j?@p&C){{(kDRv#H28Hs)W9!E+_$}U9g-`m{xVoD!K zc^EP0cUq|HJ2Nv=YSePM{`ar<6*Ocj3f106oTTj90Utl!gf?-D{uigTSxTuz8zly} zlCG5Sjk**6z0STMPn?y|WqPl8pQ%2)sEvbiYRRZ)g@i*&i_nwd+37}Asm%muxz&i| z)WUU*+pMhE7vx=1LQP2cVtw6~oRZRpDm5h~TF7JbCIbVSmzS5LlhgAzZ`3lEn$_qUK=}03~n*7dK*Dx)f zKD~MOE@4tqlB1(zKz=@(tE+481|=C8MzGi=#4BmWR;0(~w5XmQRdRCj;73uk3R6{8 zRiH>OCCQ^ZI5u2sd? zq`0^%Y;0`2mV=0slM{-w@Q4U_Bre;vwY9d+PRVNj($dnBvNG{E%nH3xJooSG*!fFJ zqQe7TLcn`!4-XHmH*|D#U{O9YH)qDZc5PvE6HiXAPqz5q?k=IrI?_r$-w^jYf3lDm zN_~&8>tzSS*jb}yi z#_`VL@xg>4-*@}j`umONCpP=bL*jP+Fwl(`=iXEgy@;J_^pFlVHW8(!T_(eyWMD3= zyHLL*O))h&`Ta-Fn?wO8`md=XT>C5E%2>{i7xAStRZ)#Xo>bXCR#72P{;d6@H=b>% z#>FD-;{5FNj_2-EWAC2VOx^wc?YlqwV&A>PMwC=mb|x75g~1=Oo)(W|3?$>!3x$>^ zbPpfuBa5@ML+Hd`8o}zNkU{E{nHwK%&a@t`jASpYtwosiy`|&g!oMX$>VI}Hk;rTH zy7PWSczB8HifVy=ohC=Dy|s0?N`|CyZ`|!yJ?9)WZlhGU%0snKKV4e#9i!%=i zNy)qoIT@Lj{gn{{3W{(T=iG|S>}=EJ!L(}6y{t~{t5>fgYu(tsLI~8abGPar=d?Nh z^BekSX-p-grp7?%xb*hzTk<<5L6g;OFR!)z>`UP5?15Q-%_Qe|u&S|+EX`;Qa!^$z zgq;}F-Y!vb20N8p*hk>&2O$hJv_R}d4kc01OE2-L^PDbZgM)&MIwNmbe*4Vy?w;L= zy6L~2n^eA>tog96KIt7S+wm6(dE&+qsD;c@ct4n@!M2c^oz2pCvdWe=)82jshm2EQ z=-1wW2pM5WW+n?271gNuLhn!K{trSxVwIUky&B}ZbEhToj_LOArPK_Mtyy?)LNG)h zSf^vJ2jBBMXv&FEQc`BfC2+Q~YsbYQk9YgH^6S5%Vc;1Jr9ZZou?&+)*EA4kB*Y4v z8y!`Tyul;MfyMEr$H>Me-~GeVKBjatss zR!l;etU*@VJqccX%dFD({&&6asXJ=+?B+hnycvBx2z$J;veJ4Y_0_9@p$w^vj~_p_ z?m=8i7Y#&*xe1iqeby09Vm-xU@l{<;Yzm3|=zX+VQc*#2s9*n#0G4vzwhtX0U3YJ< z7%Z{*@88qMNvWx$7^I>Rh}H2z94;=d3a_QirTyLA7YPXouZ-rJ{F@vvKB?Z8j3j^F z^M=I~0&wg4$yVcOw9LPbm-K|@} zo}C`qF#dos-(HLnG1-``c(y%%=`tFc(_-hfWC16!!trg$y+O{TW)`WHmP41J3snc? 
zp;D_6<|`N&rXyJj%aaxPgHmbAs3qT8nUTTp=g*%K(=I%{T34*at{AdO#Z+N^`>fFCT5qL7y#Lcsn;54h{~xb_s&kcb@=ZG*jzleU$Dn*N6$Nye%zn z!R|u0v9U?B+xzt;G$MlSXc`?I9aeM4Y=gJc$}m%PU0r^K(mP%(n8mI%@eppCarTNz z77LYxpWnZ~fS7&r-aXQ^v^4Vj&YyHoVT}iUIvvZ`a{AtmmG1QvA z>&ozB%O?$970SOBWfT<^s~s0E!<9l{X@7+EYP|s&0TEy|lC{ttMwH0!kfM7UO2FU< zH-Q@CL?(m7GVN?d>i$#_TD5G&4nzO5U_dAZ1`WC!dyoH(VCqB$YZ9(zySuyGCheG4 zu3oh&_uU_oTAphX+27v}@`bpkmcNy+&KesZe>INrkL-v0F6avHcw4p?+U_{aKG19M zB7shMPL`-o^Np;u(P#|l#HUd z#8i<8ElgYMZ)Vs5CYs6Zk{Afk|d z=f5qC-Am%*Ni++z(=?a7;J!#ws@B_HRN77D#qTJy#@}LRzal#uGklfu(nrH4KNO^c z>2q58N?pJARY>fKq9Ptlnbzj->6w|}q#vre#9IwVDZ1X!BP+O6v%RH7^cgh~5t@gG z2QJM6a@dgVuwjQ#Nsi+LrK9})?DKwl_;&iSH$OxFrbY3Fwn4RV+e>2zj;)odx$OsQ z6L&1WQbAupMkXfXK`=6iZ*-8G)8 zhEY^hRPC~ar~JCTsOa-iVF40embNimJ4KSaewP3BcS`4cOJM1<9TVeD7#?9xgZj`2 zGOo)^SX*$Zg}SZAMU&uvM?~H96AQyJ9-GOpsNfg4(DA5pN|&c!(TR^*ONvCXbnQZL`kNCnWn^Sp>Aip) zJXjrT+nB1lgy6TIek3XRa?*BsIBA;X!{TGM;s?@J^01u94s+F6lOKAMHh3Kn9(J_1 z=frd7q8>96pjB?Xu%}9j{MQ6$S67sk^wpa;fol`R4A6peqS&x&XR#}|eSdSN4*CtE zDhBwZQ|x&u26M&&s+ylSS}-&+x(Vni7!X13LB3Wojk4<9jGP>4zmoEy?Yq<+= zeXhw6`L(;dIr+W|>xW?d-nKThYn1nhE72}r<_1XiDLWf7Ap!uGc)n)gb{DNbrY{WG zt@^KheQgj5|H8~_*A+q(a8A=_jkg~4D6(VIDr$#N{_6GX7v0^`^2q`O&!0aRtAmf@ z7qjB4OgN|UoSWoiWFbl-XX+a*Z7}vJLXCi8#kI7wkSQYm!uHeEbhc0znjWryiyv?C zBUQ~kamj@1V&c$cWoP4@!sCX=vuTOts;5K1MSA#}0}i6j>f!Mln;PEDb31y3U2T&<@T7H|t>5K*`H_Sifk zU%k4H!rai}AGkJJ6!IC(*TRQj2R$D128+x6N~IYI&4=>TS&2zVjHj!e5`{cCadB}S zw`Ly_laZND6zLqHpi$FoszEvohYm;Wl7jSTB@a(bPoI8?y~U zkx^g}`%vg^yVx0Z2N1wg^gwItZwzi6)s<%wlmVp6$ zKwhMzj6McRjrfoYpQ8#yH@Bf&H5S|HY693;C?GpLJY3?qAXdNANeMXqv6olvWWg@W zTEM?BaVX+Q5y!f_>w|NPk-Ktg$w$rpOMzY>TFvZ%qU zl61};*6koACDm^7dpPTV;cLuXu9`b4Eq1g#gy!=U=3Q(o0Tj)=hY=F=K1VYi z)|vc>`J*}it`g%mM+gGOb8lppp=vNWJ2|lCNlNXCyRCMenwq1(NGn%etmciXv9Yms znYlvL3?OXEbW{Ss=Z999+-wvGzdt)tpE5HqA>a>;l73+~gDMU{Tal!R*AUH5v`ZTP zDyW0Iz?Fs)F$|fH!9O!~np&8t;NQ4?4&Un@2{7Q21O!d9{2B)}iwzz|Qu5=Oc1Awe{_{P*PCn`G)!Vmkqmp54Z0x06^>9e{M!P?H zmWMMVB>f?}P(65dz3Jjy=$K!gC~6~fi@hcH_yRr9odd3hPZ#4|@tMOET|;TM(o^zMxlnDuXE z786MJ4KfZXyemxBLfC&dw%3OneJC3ZYmBtUH{x$v@}i>8Up*)a&CSi76!RnL`SRs4 zAX6Yorl+Sr=I2LDPwQ1s3^Ow`Q-n?a`0*4r5*-T*hNh;bql-(RJahi853F^uC~F}h zp)~u0GUx>>s;*890->Is9@L5|Ru_N%_@LM(fEg$v1*n-|tmXcGdAduiq!Gp@CiLYb zq@;OF<=1d=gKV@|p9TUVgSeqC_s_&JH&Q*vklBdZ9vF zLk|wwJ9q9pJKm;a=ipdaU)NhW8|dpJBHa>~8LqV*7GYtLN?eI%6&8XwJpz6Xl%N*{ zh;3|YnlY69*&Bg?*ksMq;^yX-R$Oej`5Ueb3pomIcwuk%sVz`Hq0!N#n`!Ugzwc~& z^===+m$?*7x5RHcHvFF>YZj|Cq@s7V_>23;j3d?2Lw z?!JW=%RPATfcH?Nqq8$>n(?Uy+^(^e)$h@+_^!!Git+LBvCZP*VtfyeBkO}gTg}p< zBKzP>E@(GapYZRL&z(@-j7tn~c)I zoAI*wzA>n;tlI#?VE*NuhRuWj8_st+FR0PDq1`&K(9G*BGqKc95gGdJO@ zEcBoAhW37T{XG-+AH*>B|8bZy9F5pwO|b2F^BOBn=~yR#ZMXx;)9yb97Ep+3Ksq@k z`;mQ`*#FY1;s4$6Hb+zDe@Rr?C(~4dgt3)vIRyXR0=Xfm{~9BTMEoo7{pY3###o&^ z@QHX`cn967{>5luXb*p#a{T#uHCah4l^v_P5~9O-%vLSpW? zQ^)S*>vw{o_8%NB|(q_(*~Nyf6ILNDga|9!S%5qbhjl-47_@B`u_Nwfs*fbb z{m_s<*_GKbEhhOr(E4C(h{q=y^CJFzO3=p)n^=&tXg=fg-{R~i6HXvijLP#(*3Luog;W0zWP{}A$nT;p>PGH%sN||w*LvK(?9Mfw zc&9*}nBlh{=IgDH`IDA-OA%-0dasya-GZ@9RaTU=qBql|XNq3-fQu5e6#_~x8m%MF z$6FVLAQ`PWch!`b(G+bM;>F8m3QPLop+w1$s^agxvzeBo_SXgd_8)?a(&nHRmF*CZ zydDbTo1+s}>#kV~q56?~UHt|kqLh-f;DYDWe;7iMS{68 z0)t;M?WPblG*(3NSll}CZNTH?@Xm`OTBxGbI^sY>zyZxPonn;|r=1|#T#3%M(@<=| z!0M6+<=gSquoYKNT|=b0stk`0K+YLjlbYPk#>CyZ*Omf8+TSkZ2BO!d@B)-Bb|_9q zOdL$R_Wi(P)V{Ijz5Pq*0%K3`&xN$Yb(v4ghbv-sAv>uSPp$A%{Zg$#r7^bFx`?kK zMZg$^y$Tv1;(K(ogEN%VrVRXaT8wO_*m2T^K%KoHafkD$)u?O)f!0QE%3Kg*d4Dtc zLl#cazjpXKcQN8>XK9wkX|8ot4CSo|1tcv|V@E^r=p^X3=91_#p(|k_lL$eDWw$ot|lx84RJu_DcmSY65T| ze)kU*SeD|y9Z833<>ljx^H4^)kbp9Kw(#&433lQ)p@gaI^<#vQi5eLIa;wrRR1*}? 
z!3TNnimQln?%q|qgn0R@@>TEtM^L#LZet79o^XFIK!ZM%o5yf^K^fp-a=6KTn%Nq> zSm`ULl7%-(G9g(}A<^1XOWXXDzwS^rv^4>m$4P zLJkzz=|=cfZMgcspzOWMulp0E%|Iu94EK$AG%-l*C24Nml@lim4IVB(3O;%gdY_Nx zQXLr@qMh(Iy22cG7qcDT30Q@AR|GG^UA#jompS&*&gw(#s7d_iTU&;h2U(8+^XVQ)4ur{lw6MiPR97Fc1f?Jg z?tT49jT3a0PXEaP>1$I29GDMxp)_CE%tRBLvHL%Cy$3ke@B0USP$VLey^10k$=+Eh zN)ctxME1^B*@TaX?Cgx}J+qTN4~{K+XFJCKK7GdT`~P3p|6Gl8anAewyr1WO?)!DG z*Au2j7)37DYt-?CI>53n*zPin0-@J8No;L9NH!*1hll9dO4;Z&_reK_E@MI9n$|G-SViA2{y>oSTbJaheLeV1tMusD~Pw;+1OA+ zaAET5nY7B(Ey!4Jjzppx3NVTAU8I0N{;roi_!mG7k@OI`-vhS`K4+(~$^kVnq}!xD?nHk9UWCbegM z4JlO4{61LIetDV6=o!uad|RiKa=CM20p>o-J6l>Z zxU|qMaS>yMI+|FaoW<_}v#*RCMc~Z!)cbzYO6+;|O46B+^7^(R3mUdpp|;d8q*g_0 zW*S&d?T#^Xrg8F_O*OY$y)6h@eH&IZ7xoRh2*sF+S`*)8 zx90Dn5*|D$e=)uFVvqCc$DXS~MV7))-*%HZ2@BK2y)e}6y%Bh(>J%rR)-&JFsf!Us(3|^uSw2OVqrs!DJ#wfIHRua%0vwO`armcW+-da3EUwCK&B&>jNl33Yf z{5S5eex7*pnInfo*2j4meInvdyzdA4;+jn&HR-z=iR$bv#<|x*PSpVb=dch!humP* zXr9C|SN~XR>?pivqw`IMitx}0l>8zS#3WqOoyeMHhb64e0rQhO2Y&F!=yed9sm%s% zQ)23bUug81H?zGuRPYQRJ$~YyIF2P7Mtj$tX^|=xLK)NbyQIT=7F-}CxtlRDyIDsu zsg*~Yi#s7LTcG6~FS#KQQ~|`B`w{CGM2jzC6pIxltZ%QU!msM|gV_^fO_6z{Mk-7@ zu}U@&4-EcPU=0d_+Wz3F*%cCoX&{l!LQ)dZTVdZQdi>;+n+n85O}JwATbpM?RxTdw zR%_>0I9X5ce6m;f_01Hg)Tn=J0j3yCgiJo($vw;U@xLSn7jm==Bx2}zA6&$OLBTeb zEV=8Nr;oNWSY)y~fE{L19y~#(rAy?fCXyGNIiSs5>23u}x zzvZ}ngGf1=zVeA0;r^a9BJ9F`9$hO`!eI0u=(EPn#8A@?E!?;H{iq(ZzPlKV4Xi4^ ztIdPOfrMpI)P;UbW8X(D%M0pnK(o`p4=+d*(K(2Te7&(Q-|UBO9Gf`LG|%7_oK11j zj)zbFafAc94+}gTX{eneCXXjE-6p4)pB?gXcXC3riMqP6?r{BR``&wl^r^~U1cvRx zi`#>2n3)2psBayrx9X=&mGJ>DiPj#ygS^MnCE@of|n6Eh$ZA&@4MUwDhKt0vM2<9L9F9)i_^}`ddOt zo`D)F;_3@KaB?z8^36zK*|8DJu5V;pLu>tf3>#Z{kVp@|;aCb1D(iDd;#?K9*fTb; z>dn910R7Vt1*7g=<3h-bSG7hxmV%-}W@Rdv-E;dPRuGGv1#!lBNxzF2ric3ioUH2c z^;;b7OSSP)#hOK|W%|3PL{jgcLkN2a;#DmWf^#6;XJ9ow=Icpe!dvR-9Um(IL-A2) z*ST}2#3daq2V%X${&(Mz#?^eZz`51aZw-#nW>d{~$aU=+sl7c1h%gih>Q4lyjMFiC27KvY z1s=kIZ`b7z$^Q0=EBl8-%c((*c@?CGtml)`3Fo9N79m8Y<$K{Lb5 zh!|vPj9iI|6zhfkORFP@+eR!!Lg7fFm_YYMneAG)=drcy{;k|h^TVk3cPGzb6i{fS z&UZjG)*C;SfJ2nP0xGJ1C;4$4)L^#9a;^oQnw|3TfFkzYE|(deJH4;^Vx zA#&g|Wp@6&eG*uqTGIjzcQW;)_^N&_R@XsRZ0T=Y=a?tYSrL$2?F^E7?G=e5al33H zca_R)bD8?{7oF{{PkR+Bp5=Xco40{_>rYb%p)^lyYf!<+5XLlvBqrR-{Rs{FfB}-m z_`@%fbKO?e=~NRz&Nr-NoVa+ZOg;RP`tmK@oEqPjN5uGnkqeY+G72x6R|G7&=zN;= zRQc;WPtluqP+5CmNl*fXdi|SneBq@hPfZVbEc<3je#`cEX-N?X7hb*@>V?!{Q zY8V5>Jk1Cd^AZA&b9hCo?dU>+QD1ktOTKY2C$$Gw+Q>Uutc|J6p70kqzK1GmOv~m~ zdt6vi#w$TU9u^4AS7a|=)SZ);jJ_gs^u*2gQb$hjTEA|u!x*+`V3k+#u;zoH3ZGwJ zW5F&1tM<)6f^Cf72KLfy;1H8z%DMz)gKTGp-hgQqor^utw|SS2HrRVNi$owUV1tjX zjbj*`=WCqNf`j;Y-HVN#ONotz18ZnEmXU=+a@crntoZ({K&hk53Egwr@R23(yUk6#_sG*5oh62usNs!oJ*0tj4v zE4Ffs-Y-ZWF6SlHS%?Vt_t0>Nb&v|eBE@XEyQC?X-Bu^5HjZvaLU+YQE5Z}y&yt!) 
zBOMQ2)#er|c8bml|5{wclv$@`jrfRz7Quw5F1y$gAih_J+#aows2Oe56y%eEy4PRM zBM*Wvx-KUk7S?gG$5DSmwf41XJH^Led9hYd^t0N})zhggP>}v&UMC&()bv_5aJjIl z6BZno{|qGFr_j+TEWe}cqBWZ0x*^(Pr;#yDTzlw}gT zRyf*Os(Z4TY^3{%UsM&Q{q?fiOG zUfYWY73Gnr7ilWcwxpYqpUP>U;^?Jku*O=YS=edwf-)a-^g>-g4@D=6Kv`DnTVYt>t>1Ix6 z(9Rt8RM}HtdgzlzMPy(Erzsu+fcL%34j^%}`|AzED3mxM{Dcl{y5^~FVvX*(4*AWre42x(ahLZ6APpHNA;N;(RMa8iYa zLqrr)e-9C+xfUt_`T8W!S-Ob^YsE2(v2g7KD)HjJyAkNvyshF_=O?1zC&Bj!j4H1}^d?R@Qr~Q+2>7~1WK`6>nd)Tk|v*zMi z=aDIgyjmEI+;b-A`f`y+OP+40n{e+t*fvLtVn~0Y$NQ4WnRgw0aqg!1(6~; zrFR)W#W-*7h~}=kYu(4>A)NM(FM$4o_{A&)-TiBvv9PMhlRERQvOd&j@0+BujI{@rd<5^`x#$*Oy`Opu3#MTRa zSj-X^Wb1C3@sS&wPZ$$|zrghC0GBmzPO#_|HMS#%uVFaQRaMtly;sXsGv*Q(mW75*ks&fhV%^9jk+LIwa8p%ZWsnl zs6Uh#zY=_H;?+R7IA z>p`Wf^xDcn7l+H26_&cGj+oaF;h5g@aW8k*XL$P2-`--7W=J&{KM2)29RrKJ`_YqoV3cxK)!%>c+qZeu$y>q=IbeD%i7l-F zlZ^DW%dXqkKdxu89PEDR;MQBdkI7lg7qq*8QbJ(pcQQ_oHB!bQRwO0rA(~|?>^4zKp=1ie!^8Vh14~XDm1N-Y2F~NzSI1#2pD~5u+9;5)cuki`f{t5SJe6N}mWR^L1fdIJ_ z!T#mNwXg^NlQ}OXn|=!NSrDfsQ{E@vGxEKhv=LEqVRzggqZDBIpdZWHbd;V0&og8N z?>QsKo?``{4NdV<=;@^Sy&j3|EmrX>kZ;1o^II%7Cgd%AZB^ylO~Fx_c4^P{%Qk*_ zvp80C%OgQr^Z;FBdvZ$>2fy9IrTfXKt0J4Lyh3j=LMOo&gTgq{%*!p=~AriH4*er;2BZLn9iW?p-<<*&1A zY>OW*XIMVOPXL!fZ)xBHbOrX9{y6)ho6cDvBj+owo7=E7_zdf-_4W?`z?M^y#_qIO zRVa{mo?znm^yd;9QVyy>(BR(ZvR#5PIo2@|qO0U2?CmN2B42?n^L=QZoLjkPn=z7{ z2$(RcxEHt=UQbuL`5a_rHpV zm*3W&CeRrF&&kR)-p^}vd2Y<>;Hr9dV^r@yuSKE%Jug@U zbpY5(>QXuc0$9oVq5Tf?Qo(->)d>GIjO`52(v-b3oaG-S`opI9b!fFRfe4R>uGoRE z&ylq|_1Q{QF0ke_akA zywQH*CzXkelpWH=iP?cguv)z#z{vfAAY5 zy(ibHpIEkf$2G<);3=tm&M4K*WNm@>W;$Kck|uvkubCN*d~U^#J%Jc0tB$p zNkQWuop>B3+~vA1Xmz-Ti7a9v44-TJg_OMOaa@@3fFv|Y7?xKrL{YEdVkYOviV9gT z|6lhYoDKaD;$Aos!QKJEq>$9pSYzJtd^j~NVPkpnNYJVHKAcGZ76O2fQ~h||>N`Y0 zA2Zh z{f9_X#~;#ZvJ*2MvWvZp~ijC7efdFDBl)}cut7Fwzh`(d)ckK5p z!Br%oDD99owLjpAXTqRf$QrG<1osxB(O37j!MMfW@05M{o#foA)xrqVCt8$BI&uo3Yqh+_|{G6&NVksqdxs@`8?&lsKL63lC_j$nH zl4h};=;9_@K#ND=M)*nUoyTO3pGvy!YfmVFo*FvhfIl>`)}6QWL3 zW)rpeAPg$iyRqVMr!?o;H$&mz7=o`Vki>6$79WHWuwtv@N$!JLtw2l_(LaZ>mv9ch zVFLfi?I<)azk6wLQVQZr<1FkgfZwlmcVi(S=8h2NckuBT$;1~dTqI3}W_aeXdjK&X zFu#9thQD~wOB-wx1W};hd3iSNRs!IpT@YhG`@M zGeC0l+W;VABLiCYEfOrjg5L^S66IH(?~3+-f0L94nYVX`42*m8=MQ!g5s9_4a28K+ z>JJ*hqCJu0A}3nNb3>xxTi8l5M1}nmjAruIdyDi@5|Wu_}A`9VP8W%w%t&# ze6pcBI9Srb5Vv14BtE_$XV+a5wXpz4(od`}XO&I3tkhVYEw4?YOH+iN}Tf&0gGjk z^=Py;Owx> zjknTHS^7eGGLIuZ&@GKfwSQ~oY;XVIho-J?XziSl;K@gdsqYn0l7sfe^fknjC%;>5 zeui_k&T)B)iSI|%y_0gVjUhzQP8oV0Z2y>7DF5x?iTAks(U|z-S}nPPQbAQt3yj81 zj{j12iJ~#A3N?o*Elds{Fd06NAj{g0wTvD4QQ*d#MSz!oT;r+WFzre|L|;p@dtyAD z(0yh&W89%bis*4WNC z@)O~R`AF++*U69Nk6jIM4>BgV7?O;)%NJ$y8mOX9)n-av_=(MCll~kP-&@df?@tlw z+E=XtY=kvGc3=+Pdkl?id`4B8WWw9;P^H%AOh~&-Gles^6YjNmldXcbk-WHCKW5C7$YfQcYm2>v%)4?%*(K zABO)buY{>}El&&=wP?77zp(b>6oV4|`BNlzi?4U7i8^0j!E7OWx8A!#>DO>lxN_-#VD;4>W)Mg&Xyqr@uV}(`oXa7)Y$FZdOMHl zPLyK@o@E_T*-3xmRQ1A{Xk?clL$n)yJQMLu0*&}vyp01wMWGFKr+Twf_8lrD)vVar z+z4!`%|M9*vdAu{pQWGh5cIVx_YqSc{xl$t@*UAx3BC#J#0=O!WJ0LBnp&U+{l;c7 zR$#iAAo0zk(33G;51Oo?jHy!MGppnFjWhGb8k_|kUluVqZgAJ>l@(oM7n`LM(#l7t zoBU__uVv&Eh#P%O!ljypDx!{4VXanQJ`*c5XT;P*>?!`?Z(7BgsX&WK0ayr(eBf3KBzNQHX_(+knc=9964;1a)$`^Ja*f~5h=Gm@(!3s+v+ zHVTXMH*t`RsP?W&oH8RjR*P&`f3ao8eLq3Fk(0yU&~>~`}-oGHI!LFroq3;q7J zo4t3{VMYw#?@XoykLPCG0Ha&X&tngj`TdOAjS#tm)xiKI*P3 z@^VU#$+@9Ez&GeBhJ~V++h!|BH@L=F+nZ)$SD1a$bHS2gA$7r(UVOg`X|R@5wKb-D zRKtOX{3F_>QttUe4f0mi|7y^p=v@}E#BmK#wx;0-@BE*EG%H#f4v41R`J*{cY z$@_86hEgG+hNVA0`upc&28X$G*v3yNoqQs((M| z@6VcSQU1f^;?D|FiGt<;Pu%~zCQH)cnG1!^;kut=6(#EZyMMcaF{`9R_&<(dZ;0~Q z(fm#EH&#~vOxWKSVY#&{QN2M@y`5e%&I!a)|Gwf=KmRXFzGFl!>O5mI10k?6Al?ep 
zZUvk0Qj?a6EyUSv4wcpJQhxgI&u|wH>^q;WlIWJI0mGzUs0>iTluU_I@;@ag8nEBF&`3VV#wXDQP>UxSnb8XpDDUfamW-;ze z+}&)aU@*e^BR>A6@6uNt@j>A{foJUH4(nd9nfvr&eUZD6I)CBULCM9xFHO?>Q1+}R zFX0`IUG)Fw>BuJcPsTaV{qs5a&qv$Sidf+jVS=r^{-2$N|L+rlLnpZNUZREp=`pkU z??e5!1S0YR=EPQqO<_#}T%u~!9~Y2sRGoZ`+`=x#Aj)jJ;o6L=TK>yKPl@ zHLZDWA1Vw#yR?vT8pM?|-RmGyYH=-l=JAVlf|_|J@l$jivjmCL?^977=MlHXl8tX6 zn*zF~H!aVSx6PcoM&^3V606GmXx7(QEYA`X%KW`{%@<3~T)MBPCf=$tfKk2!UMBqY%{)ImE}&W*@9xkRnzP|?gQE^FxwT-&52KV;oGkaE5E)u+a)vFsNXD*aOf(e6j1NX`6_Qslf%-Uv%MF`j~6NPwM7z_1wNh&4+}g zmeO&}_9+36m$!B_%oe9Y#fv_cZ?t!_PiGO#OXaz*k>H%|lsL}povpvVq;>g_s#$K` ze0I+-xM^*2Gr8vLd`XX)=kdr@UOYu})cuEzp6G&_B;-+KP?@`_kj*4TqDE6YXQrPH zdT93MgeS^nW>3O>iR{sZKtTB5H%=og!peEq{HH{DK1Ue>IWrI5(4tD?UTjV3 zlGT>Ug@g*YiylshpWVZfK<`-B?ZjUt*r>fxp?lOuVCvKoJ?)CFbagpsc=K^%P4{g2 z#%k!V&a>>;q0I-kC%B9p&RZR}og^XM7!Kk?#S_Xm?Dpoj6uK|$D-iR^IUy;+(POoGjla-|{|$d%Y>+-5USoRF}&7ouQg}g6m|xmB?_lWn-AB5 zZP}#DpvTMW?O}Dh;c|Cqn?(k3b_q|nw;T5NOegD65>ake65Baw;3HM^e4C+VEc*&vb;Th;MC8(W?mH`VV|!2GL60cbWoszcqfYmCZ>ude zqti4b8#j)f%5`HLI&8Z2NjGZvDs)}iWbK@emZm+H!Y7w!=Fw&1^Q^B_k3%^^gw}QG zrmf!ScO;oz^w`KcDL@)i*@^ZQZoLp#>-s#quWO44yM{Nhljp}BW7}bqD$8PqL=YKOqVgvaI!d`^3Wx?&@k~F`du|i^gsk@@$!^`jsztYf%^5?9U&cXG{Hj z4;GQ3C!Y;7vu5kWJq_Z3jw^G>LYkS|uvw>@+@yv_>YArhT6m?Ak1hPYc9+#Sz|(7S&AB+0fToYNNH2`+)6#pAxVib z>NLx`r$dIi7p)E_7#+}2TD%l1U(TlY9L7H1>FHROH()#hg@N4~RF=-*tbAiHZF8$j zu!P+whkE6Um8;3;$`gVFtzd^9hC>$-zI!`E*?ZDK)q5c{&wC#q7&wejr42_l3?_tY zkTbX%((T0ieHd6vbbc;G?*B(OOwqN)&sf!2b6pH85j>kA$QB$6ctf_pz}i$x<_Kpw zvY#zg1Qie+X@}>g2&bcj&t;jS+BP|okAF44P)z0m>qF2Q#U;jM0UU1H@)JJ+|?TC&=+QirkM_uRgycV%wfWjNBHNvvQIS5tryvm|Q16AB^LOR#ek+Do%` zZm9WYCeMD^9o@L}N2|cYdhD!N9NYb*C^)m3tA+7OppQ)rdVXbQT!qyfVys!Mn`Upu z?tLQ`<6$MSwxhhNFwIgo*z9Wcm`DYq7T^-K+d*yb{8-jHTd{k^ZrA9#3>Hzqs5XS+ zJgm-oPTk-u#&zS3B6XjSeWnIqK%k23bd7^!%uZ1Jfw)?C3wG~(hhYfJ7(T%WX2w8- z5>wqq&JL+sXoEk}pD8aMi3rah^m%i)KG7CE`OK8T;(6;>H&Ojv?45b1(~3cFZaJ5_ z2;TW}hlbCytsW+&+^rkhH4Ir-Kq=asu(>>U>#%X|`?U7~QL1_x_*^20JZV4Q_IO zOGQe%;q?TNr(8Ms?2TX{@vTmxiKx5Lzshv(oP3@ga|dB_>H9qU0yM!^wAKD*u6hft zuIUU)Zbq{C8O_Mwzco&zXRvd7V+7N(u(2|`h6H6;?m$<@@nTso zYS2|NcuMDvtw9I3>8dSmIq{S(-uZ)GyKE5eRBk!$*Bx6^=7u4}lOu}mth=Sz`<`bC zO7}q1P#H&pxI8o@Vh$F%*(CI_g%+bZ7-C*`6hu;9u9=FEV!tc~-;P<}Xl|jMbYrhs zrHI$~a6VaL_Z#O#{O=xqi;k@tSYUO9B3P9rs%^Mq4QUJQldqi*e#1T79A^`z9IUtZHbLY26v!zcD6v# zEguJAcm7haa(VDByMT?JUoh9@X7lpZYW}Mhg?ZYy z=7!3l+Klq%OVn5f%CvIQBHS}SN($Y*thFMtm=SRT_{}ACGqiNADVmAualj=?t?pJP ze4Z1;myiAVloe%uOvJZOe({&84Jk!WPhhqn?ka$dVSFPb+4&6Fb&KvZkIO6%Z)LaO zI))#mrMkMhFMbJE-b-~wTN@La{q(@6;R*T3ysHqtzII=!bqp%OPc?4ty%hbX+t7mZ z;cKzSt-S04u68CMRpS{$Y4^*T3A)!6_t6*%O4mg=TCx23Cg6c%!jrxP`I~z;td#5Q zHJE77pW2cfM7HW*_SLL8i4%2NTM^%CkmpttHAR6GgGGSXd8EH>i>NK>~yNf zk_^2aJOizr+a%cPRFBtve?GbN=VHm2i>1x}dk5EAf_M}7&}^bV*`!4DO{aeu55TM0 z`o}Z=+r#J$`G5O1K>(vEmsqQODwi=}N`ZY=Vl)le<+eD^k7Go|-J?#lXs&0pk0izP+|KNe*LnfNNXYcVr2+0daXJtncN>Vy;{4m;mExxIO|M5y2dY`*7vaY zjOV|{x4*LKe&+qT;<3$~t#et>z2e#bW>HvCJ<_Mw7ITxZ@$G{1Q`M`7ZuDu&uhZ?Q32_VK z8mBM&>I0cCV`4YXyToKvDF;sWh&g$hGJat4j}xZ^yv93!b-oH~dwGYJ2xbPs7~Te- zzNc!eN!?z^PU(AXrBH?K2Ck#5yIec--n1Z5VDpShs{kQR?M8|UTYVurAui6Z?Zc;f zwnb*?1pH3O$d0{>3mnqSZ<&47Ne?Oj_ z>ptn1yTbZOzMzhV%bs;a&$Qjch#8!iaXtxw&n9xUZLhoVKO z*FJ;?eI2tYN?e(nqQ?GH_~LJO24DRj12nJa$o5rt%4b|gOE0OaDse`Iwa*zKhiz!p zB7NM%i68g(Z0ye+{<5tYI7(n16?jxRnXyKbyrxi`qQwc7%-FI@dg|o9jY&Bu+X~v{ z?IyA3oKgBHeq672IVfOIGu`jqCfR?R%nHkb88li&3kj?jD9=n4`RXqJ;(+49@?3&` zUbyR*m7^w)+rwufAzt-*Z_W8{Z}9b6w$2vhWT(2?e4MtXG>uKxz4s+lRh4gR(z|%y zukqTWe(w!*Ar~yIHLc&jdn0*PwQkZ+pYr?trwxVdywsO~IpUx;^i|QV$rohjVF@NT zm=e1Ufy$`=o=MesWYB0pchJ4XzP^uyT8FivB_({H)A12cRaJ2%k50saSr4k+OHmfT 
zww;GdgB3Gy5O-MxsDtw>E7|OR2hPHAF~~+>E}OHVZI#R(Y#1A(9bGGZq%d8Xo8t$h zO^=}})!m)$#6#kF)SxL}>6+f1xB8R;@5ZLBSy!xMgqL<*U{ye_nwMpwS+r-RlHN8q z%Tkq_m7$VcZbh}(7Xu8ckO#Ut>mOQB@6y zF3O-vgMrX%L_EKDk2)B42#J(N0+(K~1_yjA1-YUxO5cu_EN-iD-B+ahux>AG`>snb zHeg~S{K!`)&rb}9H*=ehj)Z?7in`R$KkARLXt+ZN5(hG+n8745>zq&>C40vOgj$eR zN{I}vdr-@eg_2Kue?O&Q=@15NU!@Iv6F#{PdC2g!;z3hMB{J4PY@|c)#y+=%1Vg1_ z?S`R)gDP$WONlo8)U@Pj9#TK%Rn8>r@ZmBm1{Kv$$V)W){V-=WI&%E6!MmU89o6!X zr08#q-P`$vh1VIL`>)(QBDV>atrB(G)LTt}Z<>^$z7CpVh)hbquwH~crBB(zd#Q2y zum2N1SO9EuR14_apL2JYcPj}^?1kwlec!o>b9wcL%<6$Kq`x+BcNZgD45TLVTikW| z@u~>s_TJ6#Fv%j*6k+2%SlgMInSli(6#t|CCJ;2+ zia0{-5|o8gf}cwbr?DJv`%B9E<}D^5{<$lHa}m$JyrvfImM$9cejNjv5k?*xDR*v| z_i-bP5vCq~m(d?qKOE28rSuDncnCHP=t>Cn-8G@8ZrR@6hAw#4sCK8-g4VyyaAJ71B4W&(v?qH!rJ%Yc_U zX~aAkwRSzct?5;LARaM#f5R`C_3md2b#hpF;%guQzK!}{0Sz&^c z?dd(IkFXu75V^>eB)co`*1tT@21V~}Tk)ZyLdYcMrq_3a(mWHpctyD(B+LhyY`v3c z%knatZ{xMBLDS_`RTA@kcLrc11o7Q;XBfA9A#VSmZ^{0z_PJluG+-qwD=(r-^(k9a z>hvySjXrdjuF;8VjV-ad%=5lkOoTqPpt_nJrzj{W=x+XqY01lb&uMv_IGP;5U_=^W zo@#QVeWa^?yVrpZZUQQ6PQn4qRE2ddz)n3ru4bQwFTz5l^!~N)W26rI zQx?|j#5l&Xi~vHuVO<9Vg$HeBDneQU!LC_1_S zFaK=_m8{Y)cHS*x_^*Rc8(C0X%xrvdxiSP)&SAI}cRNqOXF68w$HSikZok?HV2l%t zOlGy6theBM94_%DF>+QAgagDVDY`LX;aXyMipIB}jDhK|@8+|-io|tJU}=9B9PNJn zZ9=j86%uoDNt1%a`zc{ZJjZLy%6CB%!!evDG34?EBsu$2ld?bG7CXKVUdhQaH5tlC zfBiWz5zhA@I62m}rp1Hl{Q5lm_tEF{09bJ4OlF)H!w=B;Fy7FsX{_?BZ_`&Q3F87* znBKqd<;T2dl|#*@mU1l{@%ZZNf%ai^-;Dc+J#zi??FkAC#SZots(d?BQXTx?22Fn- z^F$La?+OPtC&zUPXPVF1s^dysm8}PIp!7@%;F0=OVeWXwJhEPUZK=H8l|Q`g#{e*1 z$ex0#RNqTIy#eOF*W`y?=fd{yY0Z3&K(p(kZuy)9KJVkg=zgKa**?j$T66W;!gk2h zzGHMhpXhG%2Xj&|0i~DIK*efl#SL2pL=QhG&$B=gjVsI?Hw#$|oqAb38%WGu`5YS? zPKKE%Ph)NGqOu zy;9GAG9b$z`?!O{HHSPbXvnX;<)QP}tFriMJP!>~{94FQ&Y-@i#^sb1)wb-=sYd{^ zKylrDVewR%COERW6tqy(=NBSC!D?fplukWtiXy&Z!G9}Qvag0alZf}RG-s68g7e)U zEL}y&6(8)dW&;w~SD6(byo?kopWG~Y7#1#K5~;N`j7s&4IgGg|C(%eOLsTi?uPZAeg0X^Znd>PQFNyl#GvVT)-leycPyJ7C#CA#{9n>PPR+gm{CCR0PRr{r^aSi)%^Zk<$ z>BYZ)?iEM-M`s_erq%jlb(@)!Vm(>wYkUP3ysS6;k z4?=F4wOOp=jXkgB4bd-Aa^`Vh7fxRgXm8Mc2oSDlgs|r3Z%RDbZo?uo_K8H^?C0h@ z^r58t*GA=vxWg)LhGHP{0|XzsW(9PYS*3L5 zt_H3Jl$xW?{)R|Xz7O}Az!dJdh=H;AWh+D$)I4CQ_>3id**@lI&z&JuY)@@<*9(u> zQiBE5+bDOlc=G;6>h}Y70yf1;V}Ms$5!QXB$*T`}ax?G=@$3><#Eu_SJ;_)r>l;{c z0K>~0`xrnwT)LuXigzw|(Dcq@5M^D?_p#jz(wXmIZ+csn(a3jm(-KJ&=iDqgFRS!& zD(*jfNczYcS5ui$S#>ZVXs#{s;S=s>%I2};jDcNhgr^-k{oA*hwzo+bM!v^W^$ zq6mewk6OcbOh><~Dzka(3HANtRnTJ(NPitWB-ghk>wZ+5=oHNN*em@tY2NfJ^iwj~ z(_#;0^<=bsOneN8gqM5&z+`W++@p~_2kp>7Pu_9)Q5)l$n+EAtcR1>SQVbrmI2I0FI((WX=_*yfZmC8+b2{ILwxns#N5UqN@daP&$wsPFkkDQFu9Hn?k=E?0! 
zwa<({<0EHXVze5=ODPWhjBn)HlxW18zN=uK8Iw;_Ho1TFeD4N%DoU}axK$nH!5O8{ zxlLES_i%yCg46&F6vqLt`QC%Y2lEJuW2%x96tdQUMPG7YeW5JQaZza-Ygn0U(d^BK z%QStZSM$0*+8U5QbRZSWz3^7iRP!_UCOw%ssa*eth0l+$;9$E_b^^$#o_H`sKf1~7 z@bIs!EU*Xg2S6;C#L11uLiU`zymnm+{{fNfVtzj;=vJfBEBxwd|Mh@b=($t)Ik@Zq zHjj>Km_EIC&PpZ(`4!NlD3F?Oy6fU@eN-knr#oqSk-&x%TugGoxCjq}Y?x@!aYBkI zeO)xSidZwWFI)1&YR_7h1(lSP1UNA5g++2X17NhUu*kX_WcIN%?|JrBg?HTFI=gm( zs#Hx!C&3ieIqK18lR{9SpVAqi)eQ}j%U{MV+j-d&7Lo={efkFcN45-XS|2*+u+W*s z93$m!$9Dci>lfAP7nxntnKYgJ>*mtLjwmsBU7$+UkFv5XT|61ql2c;`B{TXZi$=zV z)TcrHf?MrXf3G%uMfm4jp;nGf5r3a)i4^g^OiXZE+70IuFOVZQ@KFrclg=X$L^}^< zBvmNEI|ZGZBpjnQi)g5|e(5 z;x1WzoH;+h0Yd^q^~m%6yh*P|2Y6)q{iB0=^};~u>J^eoY=QFw9xOa5NZ15TXluaX zU(Ks^Cgc0)lYx<>qARy}0EMx$GXEwL3E{E1nUT8!mM5S6w&NOjHdW)kM(WQxt;sWK zh2s_3f54WNk}OaU_!4P>D)?dzh;zZY(A@$~X%3Jb!1a=3UmRPqSEkTe-E}FLIU{~a_3y;CIM@WaVF#$^70uFcSn0md-RYWtD z=P5dGlLx^r*~<=uLpF=+gaMuc_5w>fvEV5y`0HN=P20XK znDD;;gSttLmAaDwRhY4pm&eXg2U$c$MfIBV>r>)_!Vwh!mjuY<2TR&t-i6J!=vXD_ zc5p{N?W3p_toDkqX~}40mxHr272wE1v7i+$**Ya|6=^bkHnABcm55l#p2&YQ+J@(@~~_qG*;kNaODv{eV6Ui=IQ8{9OYD*th0ODl(q z${sW|9Vr}P2UNYdov@%UNc)o)A4NAt1`f`nPep^ZNxdekYC@{uR6XwJ{R7&V`;#ur zoUZSq@BX#!Opoh@1t9j@bCG3xp6i`$u=BrO3T`!o@R6*NazFOtD$jFAXE42X3!0HW z!2S;J6B{Ocl)JqWT=P_Xvg$jO71v`ocJ>tvm&0k-ojX9pk$0 z`@W8`)_$?BTeUZ8mM6A6`VSdkGTjZ2=av}1DWXWuK^t&pBcOiYKvV`+J`#4p?M3=*0d};3F)tYQ~TQnu2hwm zQ_62Jj-D3p-VG3BL6y!qJ7b*25QBmXNC#=8@;PiT2gY>{n-eeep1VZxMzO1kR`}HJ zI05&`85rF{ba7AgA`CMmth=0{@??uorcUlVny z@jJL-31Y4SeAdP5)@Bglsii?qn4o{PJ6#`-gpNOCBqpN zG*`7R`a~zlr0&M3eEySXs#jqOjy!F|>q~FYC(ELyVZo3p13_QgO;BXc8R6KSSy?|v z<{uK{$mVP8Oh+0?_VcoHa{fe=&1?+xeflS8T5KWnyML0)$+ zwMEWED4-10W7=eU)!vmnVs0QvI|HYT^f*Y&ve>=t{jK?Ey({vS~mCV6y`&?Kfb?aTjD)S380z3y?`?Tn>k9otZm zbj#9;_*OlF9ey*P})oT0IV8zD=$X0CI_43&foEoahVa&2Qti^xc28N-wqHA0P& z%5~&K%lUS@xAPbLUccx0eV@c_bG2}%dGwPhDff$v!Hi!iFk*saBqa7KLrX^W#sb%3hzh%}zo<&qWmS`Vk#ps99R&|m;Z3%6RK}3hU*Js1N zooeGE7=v$`$>#%bz7-C94*)S$Zv40ZbwNQJ6u#J+j93RRbRZ=8QgtxVD$d5>p03_h zcDK!p92C^N(?|j=%Rp^rAzoURHxyp4JerzUG}Z&oLg~%Qr<|$JLXf*+90Bc#`j~Yn zubcXn#blt9a#|C*7x50OW335)bZE+!{p&}I2k7F&Hqa9QVE}afEld5bK7_PjhF796 zG`BQ~)c}oela{Umjszb9F6_m=yKe6U(s_V9hEZx|Jx#Sk?QZo&_xjNEn8?k4SUOqj zKh_!buKuUi7fq{)Ss>O{w4Vw7zEfnjEeWNm2T1s)8}AltPTz4E7DrGMam?T62$c0= zOe6BNp0vGgn`ry>x71LzH=&YWvE|Ka=N@VFq#`v5!a} zk6P}@wmHR=hn`xeI5)bS z-A=6zHgAF*D}9>1sJFDu33W5EouVpv)o^c?trm;2cf^1p2;P-s0 zR5u~0Um^mA(N%I~qr!K;i4-?(UQ^O#SSmgrV)KObu7x_TQ2fTFe??DKz0C7su**Ob zx+1MJ^@*Q}^=S;@b!h*kAYDa3+Y8r9SQVp>RZ=*1YP5}}C(x|~PqG%W0rVuuw`)-+ zZ*}rBV>o(EzQ`?duNG4DPel^s8GgU>HWwRozNt1l(?@nX4EEyhpUfK~mH98jqp z8#+9m9l$QR0*Z9iT^tk`NP?nF$Gg?+KPhN{or7%x#tt)lgcY&`6ylrSf~DtP>Pp%G z|4Bh*OI0v*Pfn}CUt!#Aur}ca7%uMOPl!S$gFuW&yb{KT?GRuK_P#WsYbx*aFww5k zgG+8^-*!bR_Gx{pH5c&tzl}|ZQ?zN~n{h2%a$;kcRe8xy+J)Y=iI?Xx&1KeWKCnwz zow0TbXhtESE17L_T}yp@D;#V9PdciARKM=lcT)FXMcn;c{q>O_)s>jgacg}oVl*|9 z5KTZ_9$`)U#4T6)e|7dAPDjOd+a*CHvVr)nqlgKvveZ*2dM`)WIM!j<-rKc>oIj17 z1wWQh7|U%)z}fZTPt(mZjSRo1kvyY|Fp`cI^oIAVpZq#)4G=mt=1|C9)|&!Qz}zc% zKL@baY`UfpVdH%VElfJxPD%gea$TRHaLS-WAFr09vA(@zXGI}JVnwc_Sec=+bU3$3 z%;Cqu6N5Htkg9}@nknf)taKd0k7YA@lZF!0ufd+&OMB6X4YKh)GA?(&;2#r|VR!zV zNDkH)2%Q>xa1VM0m0&%!fyCxetacsN3ZD4{UtHON`z}qVi_q_Hg@o!_SPG+wlbOSe z6I0JP!Rdbe!@k^S$BI@oFOgrS?B1u?Q9^g+^r(~8X@`xv60&#Tp4g~;@N;SIre^2U zvm>(RTSk4*YrjRXkml;MOv+TQ(FGwDkwMiGErAC~Yj=5H*CHOWD({g!8E1`pJbO7LmH+BsS1h(n0IV2vXUPpojrah)=Z_6g9^7~OUh0%>~t6h$y$V?3^7@P_( zw=!2K8kRoiVS(<($0`AAXU>W?fwIGu{CaabWx-*=&*fAY;!e*8mf-^rx#_Vj$w{0@ zR%aZn6$g2riDN>H>5%0(J4Q#GS(#GAnL=j9KF-`~W)0oT!7}5YP`I z-IuqI(}p`pS*oq8llH!GztX<&=Gn=_xulb`8EeI(co9eOCFwi`bhJrv{Y!!{jpsGh 
[GIT binary patch data for the added PNG image omitted]
diff --git a/modules/telco-core-monitoring.adoc b/modules/telco-core-monitoring.adoc
index 92ea4d9426fb..fcfd25a0488a 100644
--- a/modules/telco-core-monitoring.adoc
+++ b/modules/telco-core-monitoring.adoc
@@ -14,7 +14,7 @@ The {cmo-first} is included by default in {product-title} and provides monitorin
 +
 [NOTE]
 ====
-The default handling of pod CPU and memory metrics is based on upstream Kubernetes `cAdvisor` and makes a tradeoff that prefers handling of stale data over metric accuracy. This leads to spiky data that will create false triggers of alerts over user-specified thresholds. OpenShift supports an opt-in dedicated service monitor feature creating an additional set of pod CPU and memory metrics that do not suffer from the spiky behavior.
+The default handling of pod CPU and memory metrics is based on upstream Kubernetes `cAdvisor` and makes a tradeoff that prefers handling of stale data over metric accuracy. This leads to spiky data that will create false triggers of alerts over user-specified thresholds. {product-title} supports an opt-in dedicated service monitor feature creating an additional set of pod CPU and memory metrics that do not suffer from the spiky behavior.
 For additional information, see link:https://access.redhat.com/solutions/7012719[Dedicated Service Monitors - Questions and Answers].
 ====
diff --git a/modules/telco-deviations-from-the-ref-design.adoc b/modules/telco-deviations-from-the-ref-design.adoc
index 7284f7f8cb12..7f657828b254 100644
--- a/modules/telco-deviations-from-the-ref-design.adoc
+++ b/modules/telco-deviations-from-the-ref-design.adoc
@@ -1,6 +1,7 @@
 // Module included in the following assemblies:
 //
-// * telco_ref_design_specs/ran/telco-ran-ref-design-spec.adoc
+// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc
+// * scalability_and_performance/telco_ref_design_specs/telco-ref-design-specs-overview.adoc
 
 :_mod-docs-content-type: CONCEPT
 [id="telco-deviations-from-the-ref-design_{context}"]
diff --git a/modules/telco-ran-agent-based-installer-abi.adoc b/modules/telco-ran-agent-based-installer-abi.adoc
index 753b5ffccb41..84e29d4861e8 100644
--- a/modules/telco-ran-agent-based-installer-abi.adoc
+++ b/modules/telco-ran-agent-based-installer-abi.adoc
@@ -1,35 +1,25 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc
-// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc
+// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="telco-ran-agent-based-installer-abi_{context}"]
-= Agent-based installer
+= Agent-based Installer
 
 New in this release::
 * No reference design updates in this release
 
 Description::
-Agent-based installer (ABI) provides installation capabilities without centralized infrastructure.
+The optional Agent-based Installer component provides installation capabilities without centralized infrastructure.
 The installation program creates an ISO image that you mount to the server.
 When the server boots it installs {product-title} and supplied extra manifests.
-+
-[NOTE]
-====
-You can also use ABI to install {product-title} clusters without a hub cluster.
-An image registry is still required when you use ABI in this manner.
-====
-
-Agent-based installer (ABI) is an optional component.
+The Agent-based Installer allows you to install {product-title} without a hub cluster. +A container image registry is required for cluster installation. Limits and requirements:: * You can supply a limited set of additional manifests at installation time. - * You must include `MachineConfiguration` CRs that are required by the RAN DU use case. Engineering considerations:: - -* ABI provides a baseline {product-title} installation. - +* The Agent-based Installer provides a baseline {product-title} installation. * You install Day 2 Operators and the remainder of the RAN DU use case configurations after installation. diff --git a/modules/telco-ran-bios-tuning.adoc b/modules/telco-ran-bios-tuning.adoc index 83da0d57de11..72753b56822a 100644 --- a/modules/telco-ran-bios-tuning.adoc +++ b/modules/telco-ran-bios-tuning.adoc @@ -1,29 +1,34 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-ran-bios-tuning_{context}"] = Host firmware tuning New in this release:: -//CNF-2026 -* You can now configure host firmware settings for managed clusters that you deploy with {ztp}. +* No reference design updates in this release Description:: Tune host firmware settings for optimal performance during initial cluster deployment. -The managed cluster host firmware settings are available on the hub cluster as `BareMetalHost` custom resources (CRs) that are created when you deploy the managed cluster with the `SiteConfig` CR and {ztp}. +For more information, see "Recommended {sno} cluster configuration for vDU application workloads". +Apply tuning settings in the host firmware during initial deployment. +See "Managing host firmware settings with {ztp}" for more information. +The managed cluster host firmware settings are available on the hub cluster as individual `BareMetalHost` custom resources (CRs) that are created when you deploy the managed cluster with the `ClusterInstance` CR and {ztp}. ++ +[NOTE] +==== +Create the `ClusterInstance` CR based on the provided reference `example-sno.yaml` CR. +==== Limits and requirements:: -* Hyperthreading must be enabled +* You must enable Hyper-Threading in the host firmware settings Engineering considerations:: -* Tune all settings for maximum performance. - +* Tune all firmware settings for maximum performance. * All settings are expected to be for maximum performance unless tuned for power savings. - * You can tune host firmware for power savings at the expense of performance as required. - +// https://issues.redhat.com/browse/CNF-11806 * Enable secure boot. -With secure boot enabled, only signed kernel modules are loaded by the kernel. +When secure boot is enabled, only signed kernel modules are loaded by the kernel. Out-of-tree drivers are not supported. 
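For illustration of the host firmware requirements above, managed cluster firmware settings surface on the hub cluster through `BareMetalHost` CRs. The following is a minimal sketch only, assuming the generic metal3 `spec.firmware` fields; the host name, namespace, addresses, and credentials Secret are hypothetical, and vendor-specific settings beyond these generic fields are tuned directly in the host firmware:

[source,yaml]
----
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: example-du-node        # hypothetical host name
  namespace: example-sno       # hypothetical namespace
spec:
  online: true
  bootMACAddress: "00:00:00:00:00:00"   # placeholder MAC address
  bmc:
    address: "redfish://192.0.2.1/redfish/v1/Systems/1"  # placeholder BMC address
    credentialsName: example-bmc-secret                  # hypothetical credentials Secret
  firmware:
    # Hyper-Threading is required by the RAN DU profile
    simultaneousMultithreadingEnabled: true
    # SR-IOV support in firmware for high performance networking
    sriovEnabled: true
----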
diff --git a/modules/telco-ran-cluster-tuning.adoc b/modules/telco-ran-cluster-tuning.adoc
index e12fef062929..a6d3eff0bff3 100644
--- a/modules/telco-ran-cluster-tuning.adoc
+++ b/modules/telco-ran-cluster-tuning.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc
+// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="telco-ran-cluster-tuning_{context}"]
@@ -10,14 +10,25 @@
 New in this release::
 * No reference design updates in this release
 
 Description::
-See "Cluster capabilities" for a full list of optional components that you can enable or disable before installation.
+See "Cluster capabilities" for a full list of components that can be disabled by using the cluster capabilities feature.
 
 Limits and requirements::
 * Cluster capabilities are not available for installer-provisioned installation methods.
 
-* You must apply all platform tuning configurations.
-The following table lists the required platform tuning configurations:
+Engineering considerations::
+* Clusters running {product-title} 4.16 and later do not automatically revert to cgroup v1 when a `PerformanceProfile` CR is applied.
+If workloads running on the cluster require cgroup v1, the cluster must be configured for cgroup v1.
+For more information, see "Enabling Linux control group version 1 (cgroup v1)".
+You should make this configuration as part of the initial cluster deployment.
+
+[NOTE]
+====
+Support for cgroup v1 is planned for removal in {product-title} 4.19.
+Clusters running cgroup v1 must transition to cgroup v2.
+====
+
+The following table lists the required platform tuning configurations:
+
 .Cluster capabilities configurations
 [cols=2*, width="90%", options="header"]
 |====
@@ -27,7 +38,7 @@
 |Remove optional cluster capabilities
 a|Reduce the {product-title} footprint by disabling optional cluster Operators on {sno} clusters only.
 
-* Remove all optional Operators except the Marketplace and Node Tuning Operators.
+* Remove all optional Operators except the Node Tuning Operator, Operator Lifecycle Manager, and the Ingress Operator.
 
 |Configure cluster monitoring
 a|Configure the monitoring stack for reduced footprint by doing the following:
@@ -55,12 +66,3 @@ Using a single `CatalogSource` fits within the platform CPU budget.
 |If the cluster was deployed with the console disabled, the `Console` CR (`ConsoleOperatorDisable.yaml`) is not needed.
 If the cluster was deployed with the console enabled, you must apply the `Console` CR.
 |====
-
-Engineering considerations::
-* In {product-title} 4.16 and later, clusters do not automatically revert to cgroups v1 when a `PerformanceProfile` CR is applied.
-If workloads running on the cluster require cgroups v1, you need to configure the cluster to use cgroups v1.
-+
-[NOTE]
-====
-If you need to configure cgroups v1, make the configuration as part of the initial cluster deployment.
-====
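To make the monitoring row in the table above concrete, the reduced monitoring footprint is conventionally expressed through the standard `cluster-monitoring-config` `ConfigMap`. A minimal sketch consistent with the `ReduceMonitoringFootprint.yaml` reference CR listed later in this patch; the values shown are illustrative:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Disable Alertmanager and Telemeter to reduce the platform footprint
    alertmanagerMain:
      enabled: false
    telemeterClient:
      enabled: false
    # Reduce Prometheus retention to 24 hours
    prometheusK8s:
      retention: 24h
----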
diff --git a/modules/telco-ran-core-ref-design-spec.adoc b/modules/telco-ran-core-ref-design-spec.adoc
index c8d5046bee46..6ef0d1b256e1 100644
--- a/modules/telco-ran-core-ref-design-spec.adoc
+++ b/modules/telco-ran-core-ref-design-spec.adoc
@@ -1,6 +1,7 @@
 // Module included in the following assemblies:
 //
-// * telco_ref_design_specs/ran/telco-ran-ref-design-spec.adoc
+// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc
+// * scalability_and_performance/telco_ref_design_specs/telco-ref-design-specs-overview.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="telco-ran-core-ref-design-spec_{context}"]
diff --git a/modules/telco-ran-crs-cluster-tuning.adoc b/modules/telco-ran-crs-cluster-tuning.adoc
index a3a65c39417a..6551ba464abe 100644
--- a/modules/telco-ran-crs-cluster-tuning.adoc
+++ b/modules/telco-ran-crs-cluster-tuning.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc
+// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="cluster-tuning-crs_{context}"]
@@ -9,14 +9,14 @@
 .Cluster tuning CRs
 [cols="4*", options="header", format=csv]
 |====
-Component,Reference CR,Optional,New in this release
-Composable OpenShift,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-example-sno-yaml[example-sno.yaml],No,No
-Console disable,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-consoleoperatordisable-yaml[ConsoleOperatorDisable.yaml],Yes,No
-Disconnected registry,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-09-openshift-marketplace-ns-yaml[09-openshift-marketplace-ns.yaml],No,No
-Disconnected registry,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-defaultcatsrc-yaml[DefaultCatsrc.yaml],No,No
-Disconnected registry,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-disableolmpprof-yaml[DisableOLMPprof.yaml],No,No
-Disconnected registry,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-disconnectedicsp-yaml[DisconnectedICSP.yaml],No,No
-Disconnected registry,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-operatorhub-yaml[OperatorHub.yaml],"OperatorHub is required for {sno} and optional for multi-node clusters",No
-Monitoring configuration,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-reducemonitoringfootprint-yaml[ReduceMonitoringFootprint.yaml],No,No
-Network diagnostics disable,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-disablesnonetworkdiag-yaml[DisableSnoNetworkDiag.yaml],No,No
+Component,Reference CR,Description,Optional
+Cluster capabilities,`example-sno.yaml`,Representative SiteConfig CR to install single-node OpenShift with the RAN DU profile,No
+Console disable,`ConsoleOperatorDisable.yaml`,Disables the Console Operator.,No
+Disconnected registry,`09-openshift-marketplace-ns.yaml`,Defines a dedicated namespace for managing the OpenShift Operator Marketplace.,No
+Disconnected registry,`DefaultCatsrc.yaml`,Configures the catalog source for the disconnected registry.,No
+Disconnected registry,`DisableOLMPprof.yaml`,Disables performance profiling for OLM.,No
+Disconnected registry,`DisconnectedICSP.yaml`,Configures disconnected registry image content source policy.,No
+Disconnected registry,`OperatorHub.yaml`,"Optional, for multi-node clusters only.
Configures the OperatorHub in OpenShift, disabling all default Operator sources. Not required for single-node OpenShift installs with marketplace capability disabled.",No +Monitoring configuration,`ReduceMonitoringFootprint.yaml`,"Reduces the monitoring footprint by disabling Alertmanager and Telemeter, and sets Prometheus retention to 24 hours",No +Network diagnostics disable,`DisableSnoNetworkDiag.yaml`,Configures the cluster network settings to disable built-in network troubleshooting and diagnostic features.,No |==== diff --git a/modules/telco-ran-crs-day-2-operators.adoc b/modules/telco-ran-crs-day-2-operators.adoc index 71ef8ee3b407..d09c9c107ad2 100644 --- a/modules/telco-ran-crs-day-2-operators.adoc +++ b/modules/telco-ran-crs-day-2-operators.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc :_mod-docs-content-type: REFERENCE [id="day-2-operators-crs_{context}"] @@ -9,53 +9,54 @@ .Day 2 Operators CRs [cols="4*", options="header", format=csv] |==== -Component,Reference CR,Optional,New in this release -Cluster logging,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-clusterlogforwarder-yaml[ClusterLogForwarder.yaml],No,No -Cluster logging,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-clusterlogns-yaml[ClusterLogNS.yaml],No,No -Cluster logging,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-clusterlogopergroup-yaml[ClusterLogOperGroup.yaml],No,No -Cluster logging,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-clusterlogserviceaccount-yaml[ClusterLogServiceAccount.yaml],No,Yes -Cluster logging,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-clusterlogserviceaccountauditbinding-yaml[ClusterLogServiceAccountAuditBinding.yaml],No,Yes -Cluster logging,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-clusterlogserviceaccountinfrastructurebinding-yaml[ClusterLogServiceAccountInfrastructureBinding.yaml],No,Yes -Cluster logging,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-clusterlogsubscription-yaml[ClusterLogSubscription.yaml],No,No -LifeCycle Agent Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-imagebasedupgrade-yaml[ImageBasedUpgrade.yaml],Yes,No -LifeCycle Agent Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-lcasubscription-yaml[LcaSubscription.yaml],Yes,No -LifeCycle Agent Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-lcasubscriptionns-yaml[LcaSubscriptionNS.yaml],Yes,No -LifeCycle Agent Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-lcasubscriptionopergroup-yaml[LcaSubscriptionOperGroup.yaml],Yes,No -Local Storage Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-storageclass-yaml[StorageClass.yaml],Yes,No -Local Storage Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-storagelv-yaml[StorageLV.yaml],Yes,No -Local Storage Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-storagens-yaml[StorageNS.yaml],Yes,No -Local Storage Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-storageopergroup-yaml[StorageOperGroup.yaml],Yes,No -Local Storage 
Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-storagesubscription-yaml[StorageSubscription.yaml],Yes,No -LVM Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-lvmoperatorstatus-yaml[LVMOperatorStatus.yaml],Yes,No -LVM Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-storagelvmcluster-yaml[StorageLVMCluster.yaml],Yes,No -LVM Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-storagelvmsubscription-yaml[StorageLVMSubscription.yaml],Yes,No -LVM Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-storagelvmsubscriptionns-yaml[StorageLVMSubscriptionNS.yaml],Yes,No -LVM Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-storagelvmsubscriptionopergroup-yaml[StorageLVMSubscriptionOperGroup.yaml],Yes,No -Node Tuning Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-performanceprofile-yaml[PerformanceProfile.yaml],No,No -Node Tuning Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-tunedperformancepatch-yaml[TunedPerformancePatch.yaml],No,No -PTP fast event notifications,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpconfigboundaryforevent-yaml[PtpConfigBoundaryForEvent.yaml],Yes,No -PTP fast event notifications,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpconfigforhaforevent-yaml[PtpConfigForHAForEvent.yaml],Yes,No -PTP fast event notifications,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpconfigmasterforevent-yaml[PtpConfigMasterForEvent.yaml],Yes,No -PTP fast event notifications,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpconfigslaveforevent-yaml[PtpConfigSlaveForEvent.yaml],Yes,No -PTP Operator - high availability,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpconfigboundary-yaml[PtpConfigBoundary.yaml],No,No -PTP Operator - high availability,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpconfigforha-yaml[PtpConfigForHA.yaml],No,No -PTP Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpconfigdualcardgmwpc-yaml[PtpConfigDualCardGmWpc.yaml],No,No -PTP Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpconfiggmwpc-yaml[PtpConfigGmWpc.yaml],No,No -PTP Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpconfigslave-yaml[PtpConfigSlave.yaml],No,No -PTP Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpoperatorconfig-yaml[PtpOperatorConfig.yaml],No,No -PTP Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpoperatorconfigforevent-yaml[PtpOperatorConfigForEvent.yaml],No,No -PTP Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpsubscription-yaml[PtpSubscription.yaml],No,No -PTP Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpsubscriptionns-yaml[PtpSubscriptionNS.yaml],No,No -PTP Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-ptpsubscriptionopergroup-yaml[PtpSubscriptionOperGroup.yaml],No,No -SR-IOV FEC Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-acceleratorsns-yaml[AcceleratorsNS.yaml],Yes,No -SR-IOV FEC Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-acceleratorsopergroup-yaml[AcceleratorsOperGroup.yaml],Yes,No -SR-IOV FEC 
Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-acceleratorssubscription-yaml[AcceleratorsSubscription.yaml],Yes,No -SR-IOV FEC Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-sriovfecclusterconfig-yaml[SriovFecClusterConfig.yaml],Yes,No -SR-IOV Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-sriovnetwork-yaml[SriovNetwork.yaml],No,No -SR-IOV Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-sriovnetworknodepolicy-yaml[SriovNetworkNodePolicy.yaml],No,No -SR-IOV Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-sriovoperatorconfig-yaml[SriovOperatorConfig.yaml],No,No -SR-IOV Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-sriovoperatorconfigforsno-yaml[SriovOperatorConfigForSNO.yaml],No,No -SR-IOV Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-sriovsubscription-yaml[SriovSubscription.yaml],No,No -SR-IOV Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-sriovsubscriptionns-yaml[SriovSubscriptionNS.yaml],No,No -SR-IOV Operator,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-sriovsubscriptionopergroup-yaml[SriovSubscriptionOperGroup.yaml],No,No +Component,Reference CR,Description,Optional +Cluster Logging Operator,`ClusterLogForwarder.yaml`,Configures log forwarding for the cluster.,No +Cluster Logging Operator,`ClusterLogNS.yaml`,Configures the namespace for cluster logging.,No +Cluster Logging Operator,`ClusterLogOperGroup.yaml`,Configures Operator group for cluster logging.,No +Cluster Logging Operator,`ClusterLogServiceAccount.yaml`,New in 4.18. Configures the cluster logging service account.,No +Cluster Logging Operator,`ClusterLogServiceAccountAuditBinding.yaml`,New in 4.18. Configures the cluster logging service account.,No +Cluster Logging Operator,`ClusterLogServiceAccountInfrastructureBinding.yaml`,New in 4.18. 
Configures the cluster logging service account.,No
+Cluster Logging Operator,`ClusterLogSubscription.yaml`,Manages installation and updates for the Cluster Logging Operator.,No
+Lifecycle Agent,`ImageBasedUpgrade.yaml`,Manages the image-based upgrade process in OpenShift.,Yes
+Lifecycle Agent,`LcaSubscription.yaml`,Manages installation and updates for the LCA Operator.,Yes
+Lifecycle Agent,`LcaSubscriptionNS.yaml`,Configures namespace for LCA subscription.,Yes
+Lifecycle Agent,`LcaSubscriptionOperGroup.yaml`,Configures the Operator group for the LCA subscription.,Yes
+Local Storage Operator,`StorageClass.yaml`,Defines a storage class with a Delete reclaim policy and no dynamic provisioning in the cluster.,No
+Local Storage Operator,`StorageLV.yaml`,"Configures local storage devices for the example-storage-class in the openshift-local-storage namespace, specifying device paths and filesystem type.",No
+Local Storage Operator,`StorageNS.yaml`,Creates the namespace with annotations for workload management and the deployment wave for the Local Storage Operator.,No
+Local Storage Operator,`StorageOperGroup.yaml`,Creates the Operator group for the Local Storage Operator.,No
+Local Storage Operator,`StorageSubscription.yaml`,Manages installation and updates of the Local Storage Operator.,No
+LVM Operator,`LVMOperatorStatus.yaml`,Verifies the installation or upgrade of the LVM Storage Operator.,Yes
+LVM Operator,`StorageLVMCluster.yaml`,"Defines an LVM cluster configuration, with placeholders for storage device classes and volume group settings. Optional substitute for the Local Storage Operator.",No
+LVM Operator,`StorageLVMSubscription.yaml`,Manages installation and updates of the LVMS Operator. Optional substitute for the Local Storage Operator.,No
+LVM Operator,`StorageLVMSubscriptionNS.yaml`,Creates the namespace for the LVMS Operator with labels and annotations for cluster monitoring and workload management. Optional substitute for the Local Storage Operator.,No
+LVM Operator,`StorageLVMSubscriptionOperGroup.yaml`,Defines the target namespace for the LVMS Operator. Optional substitute for the Local Storage Operator.,No
+Node Tuning Operator,`PerformanceProfile.yaml`,"Configures node performance settings in an OpenShift cluster, optimizing for low latency and real-time workloads.",No
+Node Tuning Operator,`TunedPerformancePatch.yaml`,"Applies performance tuning settings, including scheduler groups and service configurations for nodes in the specific namespace.",No
+PTP fast event notifications,`PtpConfigBoundaryForEvent.yaml`,Configures PTP settings for PTP boundary clocks with additional options for event synchronization. Dependent on cluster role.,No
+PTP fast event notifications,`PtpConfigForHAForEvent.yaml`,Configures PTP for highly available boundary clocks with additional PTP fast event settings. Dependent on cluster role.,No
+PTP fast event notifications,`PtpConfigMasterForEvent.yaml`,Configures PTP for PTP grandmaster clocks with additional PTP fast event settings. Dependent on cluster role.,No
+PTP fast event notifications,`PtpConfigSlaveForEvent.yaml`,Configures PTP for PTP ordinary clocks with additional PTP fast event settings. Dependent on cluster role.,No
+PTP fast event notifications,`PtpOperatorConfigForEvent.yaml`,Overrides the default OperatorConfig.
Configures the PTP Operator specifying node selection criteria for running PTP daemons in the openshift-ptp namespace.,No +PTP Operator,`PtpConfigBoundary.yaml`,Configures PTP settings for PTP boundary clocks. Dependent on cluster role.,No +PTP Operator,`PtpConfigDualCardGmWpc.yaml`,Configures PTP grandmaster clock settings for hosts that have dual NICs. Dependent on cluster role.,No +PTP Operator,`PtpConfigGmWpc.yaml`,Configures PTP grandmaster clock settings for hosts that have a single NIC. Dependent on cluster role.,No +PTP Operator,`PtpConfigSlave.yaml`,Configures PTP settings for a PTP ordinary clock. Dependent on cluster role.,No +PTP Operator,`PtpOperatorConfig.yaml`,"Configures the PTP Operator settings, specifying node selection criteria for running PTP daemons in the openshift-ptp namespace.",No +PTP Operator,`PtpSubscription.yaml`,Manages installation and updates of the PTP Operator in the openshift-ptp namespace.,No +PTP Operator,`PtpSubscriptionNS.yaml`,Configures the namespace for the PTP Operator.,No +PTP Operator,`PtpSubscriptionOperGroup.yaml`,Configures the Operator group for the PTP Operator.,No +PTP Operator (high availability),`PtpConfigBoundary.yaml`,Configures PTP settings for highly available PTP boundary clocks.,No +PTP Operator (high availability),`PtpConfigForHA.yaml`,Configures PTP settings for highly available PTP boundary clocks.,No +SR-IOV FEC Operator,`AcceleratorsNS.yaml`,Configures namespace for the VRAN Acceleration Operator. Optional part of application workload.,Yes +SR-IOV FEC Operator,`AcceleratorsOperGroup.yaml`,Configures the Operator group for the VRAN Acceleration Operator. Optional part of application workload.,Yes +SR-IOV FEC Operator,`AcceleratorsSubscription.yaml`,Manages installation and updates for the VRAN Acceleration Operator. 
Optional part of application workload.,Yes +SR-IOV FEC Operator,`SriovFecClusterConfig.yaml`,"Configures SR-IOV FPGA Ethernet Controller (FEC) settings for nodes, specifying drivers, VF amount, and node selection.",Yes +SR-IOV Operator,`SriovNetwork.yaml`,"Defines an SR-IOV network configuration, with placeholders for various network settings.",No +SR-IOV Operator,`SriovNetworkNodePolicy.yaml`,"Configures SR-IOV network settings for specific nodes, including device type, RDMA support, physical function names, and the number of virtual functions.",No +SR-IOV Operator,`SriovOperatorConfig.yaml`,"Configures SR-IOV Network Operator settings, including node selection, injector, and webhook options.",No +SR-IOV Operator,`SriovOperatorConfigForSNO.yaml`,"Configures the SR-IOV Network Operator settings for single-node OpenShift, including node selection, injector, webhook options, and disabling node drain, in the openshift-sriov-network-operator namespace.",No +SR-IOV Operator,`SriovSubscription.yaml`,Manages the installation and updates of the SR-IOV Network Operator.,No +SR-IOV Operator,`SriovSubscriptionNS.yaml`,Creates the namespace for the SR-IOV Network Operator with specific annotations for workload management and deployment waves.,No +SR-IOV Operator,`SriovSubscriptionOperGroup.yaml`,"Defines the target namespace for the SR-IOV Network Operators, enabling their management and deployment within this namespace.",No |==== diff --git a/modules/telco-ran-crs-machine-configuration.adoc b/modules/telco-ran-crs-machine-configuration.adoc index fdf7dc1c58e4..6a38c7e5dd4a 100644 --- a/modules/telco-ran-crs-machine-configuration.adoc +++ b/modules/telco-ran-crs-machine-configuration.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc :_mod-docs-content-type: REFERENCE [id="machine-configuration-crs_{context}"] @@ -9,21 +9,20 @@ .Machine configuration CRs [cols="4*", options="header", format=csv] |==== -Component,Reference CR,Optional,New in this release -Container runtime (crun),xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-enable-crun-master-yaml[enable-crun-master.yaml],No,No -Container runtime (crun),xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-enable-crun-worker-yaml[enable-crun-worker.yaml],No,No -Disable CRI-O wipe,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-99-crio-disable-wipe-master-yaml[99-crio-disable-wipe-master.yaml],No,No -Disable CRI-O wipe,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-99-crio-disable-wipe-worker-yaml[99-crio-disable-wipe-worker.yaml],No,No -Kdump enable,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-06-kdump-master-yaml[06-kdump-master.yaml],No,No -Kdump enable,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-06-kdump-worker-yaml[06-kdump-worker.yaml],No,No -Kubelet configuration / Container mount hiding,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-01-container-mount-ns-and-kubelet-conf-master-yaml[01-container-mount-ns-and-kubelet-conf-master.yaml],No,No -Kubelet configuration / Container mount hiding,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-01-container-mount-ns-and-kubelet-conf-worker-yaml[01-container-mount-ns-and-kubelet-conf-worker.yaml],No,No -One-shot time 
sync,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-99-sync-time-once-master-yaml[99-sync-time-once-master.yaml],No,No
-One-shot time sync,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-99-sync-time-once-worker-yaml[99-sync-time-once-worker.yaml],No,No
-SCTP,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-03-sctp-machine-config-master-yaml[03-sctp-machine-config-master.yaml],Yes,No
-SCTP,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-03-sctp-machine-config-worker-yaml[03-sctp-machine-config-worker.yaml],Yes,No
-Set RCU normal,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-08-set-rcu-normal-master-yaml[08-set-rcu-normal-master.yaml],No,No
-Set RCU normal,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-08-set-rcu-normal-worker-yaml[08-set-rcu-normal-worker.yaml],No,No
-SR-IOV-related kernel arguments,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-07-sriov-related-kernel-args-master-yaml[07-sriov-related-kernel-args-master.yaml],No,No
-SR-IOV-related kernel arguments,xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#ztp-07-sriov-related-kernel-args-worker-yaml[07-sriov-related-kernel-args-worker.yaml],No,No
+Component,Reference CR,Description,Optional
+Container runtime (crun),`enable-crun-master.yaml`,Configures the container runtime (crun) for control plane nodes.,No
+Container runtime (crun),`enable-crun-worker.yaml`,Configures the container runtime (crun) for worker nodes.,No
+CRI-O wipe disable,`99-crio-disable-wipe-master.yaml`,Disables automatic CRI-O cache wipe following a reboot on control plane nodes.,No
+CRI-O wipe disable,`99-crio-disable-wipe-worker.yaml`,Disables automatic CRI-O cache wipe following a reboot on worker nodes.,No
+Kdump enable,`06-kdump-master.yaml`,Configures kdump crash reporting on master nodes.,No
+Kdump enable,`06-kdump-worker.yaml`,Configures kdump crash reporting on worker nodes.,No
+Kubelet configuration and container mount hiding,`01-container-mount-ns-and-kubelet-conf-master.yaml`,Configures a mount namespace for sharing container-specific mounts between kubelet and CRI-O on control plane nodes.,No
+Kubelet configuration and container mount hiding,`01-container-mount-ns-and-kubelet-conf-worker.yaml`,Configures a mount namespace for sharing container-specific mounts between kubelet and CRI-O on worker nodes.,No
+One-shot time sync,`99-sync-time-once-master.yaml`,Synchronizes time once on master nodes.,No
+One-shot time sync,`99-sync-time-once-worker.yaml`,Synchronizes time once on worker nodes.,No
+SCTP,`03-sctp-machine-config-master.yaml`,Loads the SCTP kernel module on master nodes.,Yes
+SCTP,`03-sctp-machine-config-worker.yaml`,Loads the SCTP kernel module on worker nodes.,Yes
+Set RCU normal,`08-set-rcu-normal-master.yaml`,Disables rcu_expedited by setting rcu_normal after the control plane node has booted.,No
+Set RCU normal,`08-set-rcu-normal-worker.yaml`,Disables rcu_expedited by setting rcu_normal after the worker node has booted.,No
+SRIOV-related kernel arguments,`07-sriov-related-kernel-args-master.yaml`,Enables SR-IOV support on master nodes.,No
+SRIOV-related kernel arguments,`07-sriov-related-kernel-args-worker.yaml`,Enables SR-IOV support on worker nodes.,No
 |====
diff --git a/modules/telco-ran-du-application-workloads.adoc b/modules/telco-ran-du-application-workloads.adoc
index fb6df1f0fec9..303da63aad04 100644
--- a/modules/telco-ran-du-application-workloads.adoc
+++ b/modules/telco-ran-du-application-workloads.adoc
@@ -1,28 +1,24 @@
 // Module included in the following assemblies:
 //
-// *
scalability_and_performance/telco_ref_design_specs/ran/telco-ran-du-overview.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc :_mod-docs-content-type: REFERENCE -[id="telco-du-workloads_{context}"] -= {rds-caps} application workloads +[id="telco-ran-du-application-workloads_{context}"] += Telco RAN DU application workloads -DU worker nodes must have 3rd Generation Xeon (Ice Lake) 2.20 GHz or better CPUs with firmware tuned for maximum performance. - -5G RAN DU user applications and workloads should conform to the following best practices and application limits: - -* Develop cloud-native network functions (CNFs) that conform to the latest version of the link:https://redhat-best-practices-for-k8s.github.io/guide/[Red{nbsp}Hat Best Practices for Kubernetes]. +Develop RAN DU applications that are subject to the following requirements and limitations. +Description and limits:: ++ +-- +* Develop cloud-native network functions (CNFs) that conform to the latest version of link:https://redhat-best-practices-for-k8s.github.io/guide/[Red Hat best practices for Kubernetes]. * Use SR-IOV for high performance networking. - -* Use exec probes sparingly and only when no other suitable options are available - +* Use exec probes sparingly and only when no other suitable options are available. ** Do not use exec probes if a CNF uses CPU pinning. Use other probe implementations, for example, `httpGet` or `tcpSocket`. - ** When you need to use exec probes, limit the exec probe frequency and quantity. The maximum number of exec probes must be kept below 10, and frequency must not be set to less than 10 seconds. - -* Avoid using exec probes unless there is absolutely no viable alternative. +Exec probes cause much higher CPU usage on management cores compared to other probe types because they require process forking. + [NOTE] ==== @@ -30,4 +26,8 @@ Startup probes require minimal resources during steady-state operation. The limitation on exec probes applies primarily to liveness and readiness probes. ==== +[NOTE] +==== A test workload that conforms to the dimensions of the reference DU application workload described in this specification can be found at link:https://github.com/openshift-kni/du-test-workloads/tree/v1.0[openshift-kni/du-test-workloads]. +==== +-- diff --git a/modules/telco-ran-du-reference-design-components.adoc b/modules/telco-ran-du-reference-design-components.adoc new file mode 100644 index 000000000000..6f6c02015f27 --- /dev/null +++ b/modules/telco-ran-du-reference-design-components.adoc @@ -0,0 +1,23 @@ +// Module included in the following assemblies: +// +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc + +:_mod-docs-content-type: REFERENCE +[id="telco-ran-du-reference-design-components_{context}"] += Telco RAN DU reference design components + +The following sections describe the various {product-title} components and configurations that you use to configure and deploy clusters to run RAN DU workloads. + +.Telco RAN DU reference design components +image::telco-ran-du-reference-design-components.png[Diagram showing telco RAN DU RDS components] + +[NOTE] +==== +Ensure that additional components you include that are not specified in the telco RAN DU profile do not affect the CPU resources allocated to workload applications. +==== + +[IMPORTANT] +==== +Out of tree drivers are not supported. 
+5G RAN application components are not included in the RAN DU profile and must be engineered against resources (CPU) allocated to applications.
+====
diff --git a/modules/telco-ran-engineering-considerations-for-the-ran-du-use-model.adoc b/modules/telco-ran-engineering-considerations-for-the-ran-du-use-model.adoc
new file mode 100644
index 000000000000..7b5e571f1686
--- /dev/null
+++ b/modules/telco-ran-engineering-considerations-for-the-ran-du-use-model.adoc
@@ -0,0 +1,99 @@
+// Module included in the following assemblies:
+//
+// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="telco-ran-engineering-considerations-for-the-ran-du-use-model_{context}"]
+= Engineering considerations for the RAN DU use model
+
+The RAN DU use model configures an {product-title} cluster running on commodity hardware for hosting RAN distributed unit (DU) workloads.
+Model and system level considerations are described below.
+Specific limits, requirements, and engineering considerations for individual components are detailed in later sections.
+
+[NOTE]
+====
+For details of the RAN DU KPI test results, see the link:https://access.redhat.com/articles/7107302[Telco RAN DU reference design specification KPI test results for OpenShift {product-version}].
+This information is only available to customers and partners.
+====
+
+Workloads::
+* DU workloads are described in "Telco RAN DU application workloads".
+* DU worker nodes are Intel 3rd Generation Xeon (IceLake) 2.20 GHz or better with host firmware tuned for maximum performance.
+
+Resources::
+The maximum number of running pods in the system, inclusive of application workload and {product-title} pods, is 120.
+
+Resource utilization::
++
+--
+{product-title} resource utilization varies depending on many factors such as the following application workload characteristics:
+
+* Pod count
+* Type and frequency of probes
+* Messaging rates on the primary or secondary CNI with kernel networking
+* API access rate
+* Logging rates
+* Storage IOPS
+
+Resource utilization is measured for clusters configured as follows:
+
+. The cluster is a single host with {sno} installed.
+. The cluster runs the representative application workload described in "Reference application workload characteristics".
+. The cluster is managed under the constraints detailed in "Hub cluster management characteristics".
+. Components noted as "optional" in the use model configuration are not included.
+
+[NOTE]
+====
+Configurations outside the scope of the RAN DU RDS that do not meet these criteria require additional analysis to determine the impact on resource utilization and the ability to meet KPI targets.
+You might need to allocate additional cluster resources to meet these requirements.
+====
+--
+
+Reference application workload characteristics::
+. Uses 15 pods and 30 containers for the vRAN application including its management and control functions
+. Uses an average of 2 `ConfigMap` and 4 `Secret` CRs per pod
+. Uses a maximum of 10 exec probes with a frequency of not less than 10 seconds
+. Incremental application load on the kube-apiserver is less than or equal to 10% of the cluster platform usage
++
+[NOTE]
+====
+You can extract the CPU load from the platform metrics.
+For example:
+[source,terminal]
+----
+$ query=avg_over_time(pod:container_cpu_usage:sum{namespace="openshift-kube-apiserver"}[30m])
+----
+====
+. Application logs are not collected by the platform log collector
+.
Aggregate traffic on the primary CNI is less than 8 MBps + +Hub cluster management characteristics:: ++ +-- +{rh-rhacm} is the recommended cluster management solution and is configured to these limits: + +. Use a maximum of 5 {rh-rhacm} configuration policies with a compliant evaluation interval of not less than 10 minutes. +. Use a minimal number (up to 10) of managed cluster templates in cluster policies. +Use hub-side templating. +. Disable {rh-rhacm} addons with the exception of the `policyController` and configure observability with the default configuration. + +The following table describes resource utilization under reference application load. + +.Resource utilization under reference application load +[cols="1,2,3", width="90%", options="header"] +|==== +|Metric +|Limits +|Notes + +|OpenShift platform CPU usage +|Less than 4000mc – 2 cores (4HT) +|Platform CPU is pinned to reserved cores, including both hyper-threads of each reserved core. +The system is engineered to 3 CPUs (3000mc) at steady-state to allow for periodic system tasks and spikes. + +|OpenShift Platform memory +|Less than 16G +| + +|==== +-- diff --git a/modules/telco-ran-gitops-operator-and-ztp-plugins.adoc b/modules/telco-ran-gitops-operator-and-ztp-plugins.adoc index 603188d3cf54..bef6647ffc0c 100644 --- a/modules/telco-ran-gitops-operator-and-ztp-plugins.adoc +++ b/modules/telco-ran-gitops-operator-and-ztp-plugins.adoc @@ -1,62 +1,52 @@ // Module included in the following assemblies: // +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc // * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc :_mod-docs-content-type: REFERENCE [id="telco-ran-gitops-operator-and-ztp-plugins_{context}"] -= {gitops-shortname} and {ztp} plugins += GitOps Operator and {ztp} New in this release:: * No reference design updates in this release Description:: -{gitops-shortname} and {ztp} plugins provide a {gitops-shortname}-based infrastructure for managing cluster deployment and configuration. ++ +-- +GitOps Operator and {ztp} provide a GitOps-based infrastructure for managing cluster deployment and configuration. Cluster definitions and configurations are maintained as a declarative state in Git. You can apply `ClusterInstance` CRs to the hub cluster where the `SiteConfig` Operator renders them as installation CRs. -Alternatively, you can use the {ztp} plugin to generate installation CRs directly from `SiteConfig` CRs. -The {ztp} plugin supports automatic wrapping of configuration CRs in policies based on `PolicyGenTemplate` CRs. -+ -[NOTE] -==== -You can deploy and manage multiple versions of {product-title} on managed clusters using the baseline reference configuration CRs. -You can use custom CRs alongside the baseline CRs. - -To maintain multiple per-version policies simultaneously, use Git to manage the versions of the source CRs and policy CRs (`PolicyGenTemplate` or `PolicyGenerator`). - -Keep reference CRs and custom CRs under different directories. -Doing this allows you to patch and update the reference CRs by simple replacement of all directory contents without touching the custom CRs. -==== - -Limits:: -* 300 `SiteConfig` CRs per ArgoCD application. -You can use multiple applications to achieve the maximum number of clusters supported by a single hub cluster. 
+In earlier releases, a {ztp} plugin supported the generation of installation CRs from `SiteConfig` CRs.
+This plugin is now deprecated.
+A separate {ztp} plugin is available to enable automatic wrapping of configuration CRs into policies based on the `PolicyGenerator` or `PolicyGenTemplate` CR.
-* Content in the `/source-crs` folder in Git overrides content provided in the {ztp} plugin container.
-Git takes precedence in the search path.
-
-* Add the `/source-crs` folder in the same directory as the `kustomization.yaml` file, which includes the `PolicyGenTemplate` as a generator.
-+
-[NOTE]
-====
-Alternative locations for the `/source-crs` directory are not supported in this context.
-====
+
+You can deploy and manage multiple versions of {product-title} on managed clusters by using the baseline reference configuration CRs.
+You can use custom CRs alongside the baseline CRs.
+To maintain multiple per-version policies simultaneously, use Git to manage the versions of the source and policy CRs by using `PolicyGenerator` or `PolicyGenTemplate` CRs.
+--
-
-* The `extraManifestPath` field of the `SiteConfig` CR is deprecated from {product-title} 4.15 and later.
-Use the new `extraManifests.searchPaths` field instead.
+
+Limits and requirements::
+* 300 `ClusterInstance` CRs per ArgoCD application.
+Multiple applications can be used to achieve the maximum number of clusters supported by a single hub cluster.
+* Content in the `source-crs/` directory in Git overrides content provided in the {ztp} plugin container, as Git takes precedence in the search path.
+* The `source-crs/` directory must be located in the same directory as the `kustomization.yaml` file, which includes `PolicyGenerator` or `PolicyGenTemplate` CRs as a generator.
+Alternative locations for the `source-crs/` directory are not supported in this context.
 
 Engineering considerations::
 * For multi-node cluster upgrades, you can pause `MachineConfigPool` (`MCP`) CRs during maintenance windows by setting the `paused` field to `true`.
-You can increase the number of nodes per `MCP` updated simultaneously by configuring the `maxUnavailable` setting in the `MCP` CR.
+You can increase the number of simultaneously updated nodes per `MCP` CR by configuring the `maxUnavailable` setting in the `MCP` CR.
 The `MaxUnavailable` field defines the percentage of nodes in the pool that can be simultaneously unavailable during a `MachineConfig` update.
 Set `maxUnavailable` to the maximum tolerable value.
 This reduces the number of reboots in a cluster during upgrades which results in shorter upgrade times.
 When you finally unpause the `MCP` CR, all the changed configurations are applied with a single reboot.
+* During cluster installation, you can pause custom MCP CRs by setting the `paused` field to `true` and setting `maxUnavailable` to 100% to improve installation times.
+* Keep reference CRs and custom CRs under different directories.
+Doing this allows you to patch and update the reference CRs by simple replacement of all directory contents without touching the custom CRs.
+When managing multiple versions, the following best practices are recommended:
+** Keep all source CRs and policy creation CRs in Git repositories to ensure consistent generation of policies for each {product-title} version based solely on the contents in Git.
+** Keep reference source CRs in a separate directory from custom CRs.
+This facilitates easy update of reference CRs as required.
+* To avoid confusion or unintentional overwrites when updating content, use unique and distinguishable names for custom CRs in the `source-crs/` directory and extra manifests in Git.
+* Extra installation manifests are referenced in the `ClusterInstance` CR through a `ConfigMap` CR.
+The `ConfigMap` CR should be stored alongside the `ClusterInstance` CR in Git, serving as the single source of truth for the cluster.
+If needed, you can use a `ConfigMap` generator to create the `ConfigMap` CR.
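As an illustration of the `source-crs/` placement requirement above, a policy directory in Git typically pairs a `kustomization.yaml` file with the `PolicyGenerator` CRs it consumes as generators. The following is a sketch only; the generator file names and the custom CR name are hypothetical:

[source,yaml]
----
# kustomization.yaml, stored in the same directory as source-crs/:
#
#   ├── kustomization.yaml
#   ├── common-ranGen.yaml
#   ├── group-du-sno-ranGen.yaml
#   └── source-crs/
#       └── custom-source-cr.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
  # PolicyGenerator or PolicyGenTemplate CRs consumed as generators
  - common-ranGen.yaml
  - group-du-sno-ranGen.yaml
----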
diff --git a/modules/telco-ran-lca-operator.adoc b/modules/telco-ran-lca-operator.adoc
index 173a570acb44..1935f163c9be 100644
--- a/modules/telco-ran-lca-operator.adoc
+++ b/modules/telco-ran-lca-operator.adoc
@@ -1,19 +1,18 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc
+// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="telco-ran-lca-operator_{context}"]
-= {lcao}
+= Lifecycle Agent
 
 New in this release::
 * No reference design updates in this release
 
 Description::
-The {lcao} provides local lifecycle management services for {sno} clusters.
+The Lifecycle Agent provides local lifecycle management services for {sno} clusters.
 
 Limits and requirements::
-* The {lcao} is not applicable in multi-node clusters or {SNO} clusters with an additional worker.
-
-* Requires a persistent volume that you create when installing the cluster.
-See "Configuring a shared container directory between ostree stateroots when using {ztp}" for partition requirements.
+* The Lifecycle Agent is not applicable in multi-node clusters or {sno} clusters with an additional worker.
+* The Lifecycle Agent requires a persistent volume that you create when installing the cluster.
+For descriptions of partition requirements, see "Configuring a shared container directory between ostree stateroots when using {ztp}".
diff --git a/modules/telco-ran-local-storage-operator.adoc b/modules/telco-ran-local-storage-operator.adoc
index 30ecf78842ca..68127e9989af 100644
--- a/modules/telco-ran-local-storage-operator.adoc
+++ b/modules/telco-ran-local-storage-operator.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc
+// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="telco-ran-local-storage-operator_{context}"]
@@ -16,7 +16,5 @@ The number and type of `PV` resources that you create depends on your requiremen
 
 Engineering considerations::
 * Create backing storage for `PV` CRs before creating the `PV`.
This can be a partition, a local volume, LVM volume, or full disk. -* Refer to the device listing in `LocalVolume` CRs by the hardware path used to access each device to ensure correct allocation of disks and partitions. +* Refer to the device listing in `LocalVolume` CRs by the hardware path used to access each device to ensure correct allocation of disks and partitions, for example, `/dev/disk/by-path/`. Logical names (for example, `/dev/sda`) are not guaranteed to be consistent across node reboots. -+ -For more information, see the link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_file_systems/assembly_overview-of-persistent-naming-attributes_managing-file-systems#device-identifiers_assembly_overview-of-persistent-naming-attributes[{op-system-base} 9 documentation on device identifiers]. diff --git a/modules/telco-ran-logging.adoc b/modules/telco-ran-logging.adoc index 921bde81ecde..7a480b4269ae 100644 --- a/modules/telco-ran-logging.adoc +++ b/modules/telco-ran-logging.adoc @@ -1,23 +1,19 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-ran-logging_{context}"] = Logging New in this release:: -* Cluster Logging Operator 6.0 is new in this release. -Update your existing implementation to adapt to the new version of the API. +* No reference design updates in this release Description:: -Use logging to collect logs from the far edge node for remote analysis. The recommended log collector is Vector. +Use logging to collect logs from the far edge node for remote analysis. +The recommended log collector is Vector. Engineering considerations:: * Handling logs beyond the infrastructure and audit logs, for example, from the application workload requires additional CPU and network bandwidth based on additional logging rate. * As of {product-title} 4.14, Vector is the reference log collector. -+ -[NOTE] -==== -Use of fluentd in the RAN use model is deprecated. -==== +Use of fluentd in the RAN use models is deprecated. diff --git a/modules/telco-ran-lvms-operator.adoc b/modules/telco-ran-lvms-operator.adoc index 7b7a8abe12e1..4e1c1435dc22 100644 --- a/modules/telco-ran-lvms-operator.adoc +++ b/modules/telco-ran-lvms-operator.adoc @@ -1,27 +1,19 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-ran-lvms-operator_{context}"] -= {lvms} += Logical Volume Manager Storage New in this release:: * No reference design updates in this release -[NOTE] -==== -{lvms-first} is an optional component. - -When you use {lvms} as the storage solution, it replaces the Local Storage Operator. -CPU resources are assigned to the management partition as platform overhead. -The reference configuration must include one of these storage solutions, but not both. -==== - Description:: -{lvms} provides dynamic provisioning of block and file storage. -{lvms} creates logical volumes from local devices that can be used as `PVC` resources by applications. +{lvms-first} is an optional component. 
+It provides dynamic provisioning of both block and file storage by creating logical volumes from local devices that can be consumed as persistent volume claim (PVC) resources by applications.
 Volume expansion and snapshots are also possible.
+An example configuration is provided in the RDS with the `StorageLVMCluster.yaml` file.
 
 Limits and requirements::
 * In {sno} clusters, persistent storage must be provided by either {lvms} or local storage, not both.
diff --git a/modules/telco-ran-machine-configuration.adoc b/modules/telco-ran-machine-configuration.adoc
index 26217dea031e..112a5af7ee16 100644
--- a/modules/telco-ran-machine-configuration.adoc
+++ b/modules/telco-ran-machine-configuration.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc
+// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="telco-ran-machine-configuration_{context}"]
@@ -10,43 +10,37 @@
 New in this release::
 * No reference design updates in this release
 
 Limits and requirements::
-* The CRI-O wipe disable `MachineConfig` assumes that images on disk are static other than during scheduled maintenance in defined maintenance windows.
+The CRI-O wipe disable `MachineConfig` CR assumes that images on disk are static other than during scheduled maintenance in defined maintenance windows.
 To ensure the images are static, do not set the pod `imagePullPolicy` field to `Always`.
-+
+
 .Machine configuration options
-[cols=2*, width="90%", options="header"]
+[cols="1,2", width="90%", options="header"]
 |====
 |Feature
 |Description
 
-|Container runtime
+|Container Runtime
 |Sets the container runtime to `crun` for all node roles.
 
-|kubelet config and container mount hiding
-|Reduces the frequency of kubelet housekeeping and eviction monitoring to reduce CPU usage.
-Create a container mount namespace, visible to kubelet and CRI-O, to reduce system mount scanning resource usage.
+|Kubelet config and container mount namespace hiding
+|Reduces the frequency of kubelet housekeeping and eviction monitoring, which reduces CPU usage
 
 |SCTP
 |Optional configuration (enabled by default)
 Enables SCTP. SCTP is required by RAN applications but disabled by default in {op-system}.
 
-|kdump
-a|Optional configuration (enabled by default)
+|Kdump
+|Optional configuration (enabled by default)
 Enables kdump to capture debug information when a kernel panic occurs.
-
-[NOTE]
-====
-The reference CRs which enable kdump have an increased memory reservation based on the set of drivers and kernel modules included in the reference configuration.
-====
+The reference CRs that enable kdump have an increased memory reservation based on the set of drivers and kernel modules included in the reference configuration.
 
 |CRI-O wipe disable
-|Disables automatic wiping of the CRI-O image cache after unclean shutdown.
+|Disables automatic wiping of the CRI-O image cache after unclean shutdown
 
 |SR-IOV-related kernel arguments
-|Includes additional SR-IOV related arguments in the kernel command line.
+|Includes additional SR-IOV-related arguments in the kernel command line
 
-|RCU Normal systemd service
-|Sets `rcu_normal` after the system is fully started.
+|Set RCU Normal
+|Systemd service that sets `rcu_normal` after the system finishes startup
 
 |One-shot time sync
 |Runs a one-time NTP system time synchronization job for control plane or worker nodes.
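To illustrate the SCTP row above, loading the SCTP module follows the standard {product-title} pattern of writing module configuration files through an Ignition-based `MachineConfig` CR. A minimal sketch in the spirit of `03-sctp-machine-config-worker.yaml`; treat the reference CR in the RDS as authoritative:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: load-sctp-module
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        # Override the file that blacklists SCTP by default in RHCOS
        - path: /etc/modprobe.d/sctp-blacklist.conf
          mode: 0644
          overwrite: true
          contents:
            source: data:,
        # Load the sctp module at boot
        - path: /etc/modules-load.d/sctp-load.conf
          mode: 0644
          overwrite: true
          contents:
            source: data:text/plain;charset=utf-8,sctp
----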
diff --git a/modules/telco-ran-node-tuning-operator.adoc b/modules/telco-ran-node-tuning-operator.adoc index b3ff36f83da6..460c24b7a2f7 100644 --- a/modules/telco-ran-node-tuning-operator.adoc +++ b/modules/telco-ran-node-tuning-operator.adoc @@ -1,57 +1,52 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-ran-node-tuning-operator_{context}"] -= Node Tuning Operator += CPU partitioning and performance tuning New in this release:: * No reference design updates in this release Description:: -You tune the cluster performance by creating a performance profile. -+ -[IMPORTANT] -==== +The RAN DU use model includes cluster performance tuning with `PerformanceProfile` CRs. +The `PerformanceProfile` CRs are reconciled by the Node Tuning Operator. The RAN DU use case requires the cluster to be tuned for low-latency performance. -==== +For more details about node tuning with the `PerformanceProfile` CR, see "Tuning nodes for low latency with the performance profile". Limits and requirements:: -The Node Tuning Operator uses the `PerformanceProfile` CR to configure the cluster. You need to configure the following settings in the RAN DU profile `PerformanceProfile` CR: - -* Select reserved and isolated cores and ensure that you allocate at least 4 hyperthreads (equivalent to 2 cores) on Intel 3rd Generation Xeon (Ice Lake) 2.20 GHz CPUs or better with firmware tuned for maximum performance. - -* Set the reserved `cpuset` to include both hyperthread siblings for each included core. -Unreserved cores are available as allocatable CPU for scheduling workloads. -Ensure that hyperthread siblings are not split across reserved and isolated cores. - -* Configure reserved and isolated CPUs to include all threads in all cores based on what you have set as reserved and isolated CPUs. - -* Set core 0 of each NUMA node to be included in the reserved CPU set. - -* Set the huge page size to 1G. - +The Node Tuning Operator uses the `PerformanceProfile` CR to configure the cluster. +You need to configure the following settings in the telco RAN DU profile `PerformanceProfile` CR: ++ +-- +* Set a reserved `cpuset` of 4 or more CPUs, equating to 4 hyper-threads (2 cores), on either of the following CPUs: +** Intel 3rd Generation Xeon (IceLake) 2.20 GHz or better CPUs with host firmware tuned for maximum performance +** AMD EPYC Zen 4 CPUs (Genoa, Bergamo, or newer) ++ [NOTE] ==== -You should not add additional workloads to the management partition. -Only those pods which are part of the OpenShift management platform should be annotated into the management partition. +AMD EPYC Zen 4 CPUs (Genoa, Bergamo, or newer) are fully supported. +Power consumption evaluations are ongoing. +It is recommended to evaluate features, such as per-pod power management, to determine any potential impact on performance. ==== -Engineering considerations:: -* You should use the RT kernel to meet performance requirements. However, you can use the non-RT kernel with a corresponding impact to cluster performance if required. +* Set the reserved `cpuset` to include both hyper-thread siblings for each included core. +Unreserved cores are available as allocatable CPU for scheduling workloads. +* Ensure that hyper-thread siblings are not split across reserved and isolated cores.
+* Ensure that reserved and isolated CPUs include all the threads for all cores in the CPU. +* Include Core 0 for each NUMA node in the reserved CPU set. +* Set the huge page size to 1G. +* Only pin {product-title} pods which are by default configured as part of the management workload partition to reserved cores. +-- -* The number of huge pages that you configure depends on the application workload requirements. +Engineering considerations:: +* Meeting the full performance metrics requires use of the RT kernel. +If required, you can use the non-RT kernel with corresponding impact to performance. +* The number of hugepages you configure depends on application workload requirements. Variation in this parameter is expected and allowed. - * Variation is expected in the configuration of reserved and isolated CPU sets based on selected hardware and additional components in use on the system. -Variation must still meet the specified limits. - -* Hardware without IRQ affinity support impacts isolated CPUs. -To ensure that pods with guaranteed whole CPU QoS have full use of the allocated CPU, all hardware in the server must support IRQ affinity. -For more information, see "Finding the effective IRQ affinity setting for a node". - -When you enable workload partitioning during cluster deployment with the `cpuPartitioningMode: AllNodes` setting, the reserved CPU set in the `PerformanceProfile` CR must include enough CPUs for the operating system, interrupts, and OpenShift platform pods. - -:FeatureName: cgroups v1 -include::snippets/deprecated-feature.adoc[] +The variation must still meet the specified limits. +* Hardware without IRQ affinity support affects isolated CPUs. +To ensure that pods with guaranteed whole CPU QoS have full use of allocated CPUs, all hardware in the server must support IRQ affinity. +* When workload partitioning is enabled by setting `cpuPartitioningMode` to `AllNodes` during deployment, you must allocate enough CPUs to support the operating system, interrupts, and {product-title} pods in the `PerformanceProfile` CR. diff --git a/modules/telco-ran-ptp-operator.adoc b/modules/telco-ran-ptp-operator.adoc index 5cd5aa622776..c46ea79eb1ab 100644 --- a/modules/telco-ran-ptp-operator.adoc +++ b/modules/telco-ran-ptp-operator.adoc @@ -1,40 +1,26 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-ran-ptp-operator_{context}"] = PTP Operator New in this release:: -* A new version two of the Precision Time Protocol (PTP) fast event REST API is available. -Consumer applications can now subscribe directly to the events REST API in the PTP events producer sidecar. -The PTP fast event REST API v2 is compliant with the link:https://orandownloadsweb.azurewebsites.net/download?id=344[O-RAN O-Cloud Notification API Specification for Event Consumers 3.0]. -You can change the API version by setting the `ptpEventConfig.apiVersion` field in the `PtpOperatorConfig` resource. +* No reference design updates in this release Description:: -See "Recommended {sno} cluster configuration for vDU application workloads" for details of support and configuration of PTP in cluster nodes. -The DU node can run in the following modes: -+ -* As an ordinary clock (OC) synced to a grandmaster clock or boundary clock (T-BC). 
- -* As a grandmaster clock (T-GM) synced from GPS with support for single or dual card E810 NICs. - -* As dual boundary clocks (one per NIC) with support for E810 NICs. - -* As a T-BC with a highly available (HA) system clock when there are multiple time sources on different NICs. - -* Optional: as a boundary clock for radio units (RUs). +Configure PTP in cluster nodes with `PtpConfig` CRs for the RAN DU use case, with features including grandmaster clock (T-GM) support via GPS, ordinary clock (OC), boundary clocks (T-BC), dual boundary clocks, high availability (HA), and optional fast event notification over HTTP. +PTP ensures precise timing and reliability in the RAN environment. Limits and requirements:: -* Limited to two boundary clocks for dual NIC and HA. - -* Limited to two card E810 configurations for T-GM. +* Limited to two boundary clocks for nodes with dual NICs and HA +* Limited to two Westport Channel NIC configurations for T-GM Engineering considerations:: -* Configurations are provided for ordinary clock, boundary clock, boundary clock with highly available system clock, and grandmaster clock. - -* PTP fast event notifications uses `ConfigMap` CRs to store PTP event subscriptions. - -* The PTP events REST API v2 does not have a global subscription for all lower hierarchy resources contained in the resource path. -You subscribe consumer applications to the various available event types separately. +* RAN DU RDS configurations are provided for ordinary clocks, boundary clocks, grandmaster clocks, and highly available dual NIC boundary clocks. +* PTP fast event notifications use `ConfigMap` CRs to persist subscriber details. +* Hierarchical event subscription as described in the O-RAN specification is not supported for PTP events. +* Use the PTP fast events REST API v2. +The PTP fast events REST API v1 is deprecated. +The REST API v2 is O-RAN Release 3 compliant. diff --git a/modules/telco-ran-red-hat-advanced-cluster-management-rhacm.adoc b/modules/telco-ran-red-hat-advanced-cluster-management-rhacm.adoc index de1f10690948..0c2f438e0aaf 100644 --- a/modules/telco-ran-red-hat-advanced-cluster-management-rhacm.adoc +++ b/modules/telco-ran-red-hat-advanced-cluster-management-rhacm.adoc @@ -1,34 +1,33 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-ran-red-hat-advanced-cluster-management-rhacm_{context}"] -= {rh-rhacm-title} += Red Hat Advanced Cluster Management New in this release:: * No reference design updates in this release Description:: -{rh-rhacm-first} provides Multi Cluster Engine (MCE) installation and ongoing lifecycle management functionality for deployed clusters. -You manage cluster configuration and upgrades declaratively by applying `Policy` custom resources (CRs) to clusters during maintenance windows. -+ -You apply policies with the {rh-rhacm} policy controller as managed by {cgu-operator-first}. -The policy controller handles configuration, upgrades, and cluster statuses. + -When installing managed clusters, {rh-rhacm} applies labels and initial ignition configuration to individual nodes in support of custom disk partitioning, allocation of roles, and allocation to machine config pools. -You define these configurations with `SiteConfig` or `ClusterInstance` CRs.
+-- +{rh-rhacm} provides Multi Cluster Engine (MCE) installation and ongoing lifecycle management functionality for deployed clusters. +You manage cluster configuration and upgrades declaratively by applying `Policy` custom resources (CRs) to clusters during maintenance windows. + +{rh-rhacm} provides the following functionality: + +* Zero touch provisioning (ZTP) of clusters using the MCE component in {rh-rhacm}. +* Configuration, upgrades, and cluster status through the {rh-rhacm} policy controller. +* During managed cluster installation, {rh-rhacm} can apply labels to individual nodes as configured through the `ClusterInstance` CR. +-- Limits and requirements:: -* 300 `SiteConfig` CRs per ArgoCD application. -You can use multiple applications to achieve the maximum number of clusters supported by a single hub cluster. * A single hub cluster supports up to 3500 deployed {sno} clusters with 5 `Policy` CRs bound to each cluster. Engineering considerations:: * Use {rh-rhacm} policy hub-side templating to better scale cluster configuration. You can significantly reduce the number of policies by using a single group policy or small number of general group policies where the group and per-cluster values are substituted into templates. - * Cluster specific configuration: managed clusters typically have some number of configuration values that are specific to the individual cluster. These configurations should be managed using {rh-rhacm} policy hub-side templating with values pulled from `ConfigMap` CRs based on the cluster name. - * To save CPU resources on managed clusters, policies that apply static configurations should be unbound from managed clusters after {ztp} installation of the cluster. diff --git a/modules/telco-ran-siteconfig-operator.adoc b/modules/telco-ran-siteconfig-operator.adoc new file mode 100644 index 000000000000..411b8f7da7a1 --- /dev/null +++ b/modules/telco-ran-siteconfig-operator.adoc @@ -0,0 +1,38 @@ +// Module included in the following assemblies: +// +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc + +:_mod-docs-content-type: REFERENCE +[id="telco-ran-siteconfig-operator_{context}"] += SiteConfig Operator + +New in this release:: +* No RDS updates in this release + +Description:: ++ +-- +The SiteConfig Operator is a template-driven solution designed to provision clusters through various installation methods. +It introduces the unified `ClusterInstance` API, which replaces the deprecated `SiteConfig` API. +By leveraging the `ClusterInstance` API, the SiteConfig Operator improves cluster provisioning by providing the following: + +* Better isolation of definitions from installation methods +* Unification of Git and non-Git workflows +* Consistent APIs across installation methods +* Enhanced scalability +* Increased flexibility with custom installation templates +* Valuable insights for troubleshooting deployment issues + +The SiteConfig Operator provides validated default installation templates to facilitate cluster deployment through both the Assisted Installer and Image-based Installer provisioning methods: + +* **Assisted Installer** automates the deployment of {product-title} clusters by leveraging predefined configurations and validated host setups. +It ensures that the target infrastructure meets {product-title} requirements. +The Assisted Installer streamlines the installation process while minimizing time and complexity compared to manual setup. 
+ +* **Image-based Installer** expedites the deployment of {sno} clusters by utilizing preconfigured and validated {product-title} seed images. +Seed images are preinstalled on target hosts, enabling rapid reconfiguration and deployment. +The Image-based Installer is particularly well-suited for remote or disconnected environments, because it simplifies the cluster creation process and significantly reduces deployment time. +-- + +Limits and requirements:: +* A single hub cluster supports up to 3500 deployed {sno} clusters. diff --git a/modules/telco-ran-sr-iov-operator.adoc b/modules/telco-ran-sr-iov-operator.adoc index 2dc52e760eb5..379b4884c592 100644 --- a/modules/telco-ran-sr-iov-operator.adoc +++ b/modules/telco-ran-sr-iov-operator.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-ran-sr-iov-operator_{context}"] @@ -11,28 +11,33 @@ New in this release:: Description:: The SR-IOV Operator provisions and configures the SR-IOV CNI and device plugins. -Both `netdevice` (kernel VFs) and `vfio` (DPDK) devices are supported and applicable to the RAN use models. +Both `netdevice` (kernel VFs) and `vfio` (DPDK) devices are supported and applicable to the RAN DU use models. Limits and requirements:: -* Use {product-title} supported devices -* SR-IOV and IOMMU enablement in BIOS: The SR-IOV Network Operator will automatically enable IOMMU on the kernel command line. -* SR-IOV VFs do not receive link state updates from the PF. If link down detection is needed you must configure this at the protocol level. -* NICs which do not support firmware updates using Secure Boot or kernel lockdown must be pre-configured with sufficient virtual functions (VFs) to support the number of VFs required by the application workload. -+ -[NOTE] -==== -You might need to disable the SR-IOV Operator plugin for unsupported NICs using the undocumented `disablePlugins` option. -==== +* Use devices that are supported for {product-title}. +See "Supported devices". +* SR-IOV and IOMMU enablement in host firmware settings: The SR-IOV Network Operator automatically enables IOMMU on the kernel command line. +* SR-IOV VFs do not receive link state updates from the PF. +If link down detection is required you must configure this at the protocol level. Engineering considerations:: * SR-IOV interfaces with the `vfio` driver type are typically used to enable additional secondary networks for applications that require high throughput or low latency. - * Customer variation on the configuration and number of `SriovNetwork` and `SriovNetworkNodePolicy` custom resources (CRs) is expected. - -* IOMMU kernel command line settings are applied with a `MachineConfig` CR at install time. This ensures that the `SriovOperator` CR does not cause a reboot of the node when adding them. - +* IOMMU kernel command line settings are applied with a `MachineConfig` CR at install time. +This ensures that the `SriovOperator` CR does not cause a reboot of the node when adding them. * SR-IOV support for draining nodes in parallel is not applicable in a {sno} cluster. - -* If you exclude the `SriovOperatorConfig` CR from your deployment, the CR will not be created automatically. 
- -* In scenarios where you pin or restrict workloads to specific nodes, the SR-IOV parallel node drain feature will not result in the rescheduling of pods. In these scenarios, the SR-IOV Operator disables the parallel node drain functionality. +* You must include the `SriovOperatorConfig` CR in your deployment; the CR is not created automatically. +This CR is included in the reference configuration policies which are applied during initial deployment. +* In scenarios where you pin or restrict workloads to specific nodes, the SR-IOV parallel node drain feature will not result in the rescheduling of pods. +In these scenarios, the SR-IOV Operator disables the parallel node drain functionality. +* NICs which do not support firmware updates under secure boot or kernel lockdown must be pre-configured with sufficient virtual functions (VFs) to support the number of VFs needed by the application workload. +For Mellanox NICs, the Mellanox vendor plugin must be disabled in the SR-IOV Network Operator. +For more information, see "Configuring an SR-IOV network device". +* To change the MTU value of a virtual function after the pod has started, do not configure the MTU field in the `SriovNetworkNodePolicy` CR. +Instead, configure the Network Manager or use a custom systemd script to set the MTU of the physical function to an appropriate value. +For example: ++ +[source,terminal] +---- +# ip link set dev mtu 9000 +---- diff --git a/modules/telco-ran-sriov-fec-operator.adoc b/modules/telco-ran-sriov-fec-operator.adoc index 66e58e2781b9..a66ee2127e61 100644 --- a/modules/telco-ran-sriov-fec-operator.adoc +++ b/modules/telco-ran-sriov-fec-operator.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-ran-sriov-fec-operator_{context}"] @@ -14,13 +14,10 @@ SRIOV-FEC Operator is an optional 3rd party Certified Operator supporting FEC ac Limits and requirements:: * Starting with FEC Operator v2.7.0: - -** `SecureBoot` is supported - -** The `vfio` driver for the `PF` requires the usage of `vfio-token` that is injected into Pods. -Applications in the pod can pass the `VF` token to DPDK by using the EAL parameter `--vfio-vf-token`. +** Secure boot is supported +** `vfio` drivers for PFs require the usage of a `vfio-token` that is injected into the pods. +Applications in the pod can pass the VF token to DPDK by using EAL parameter `--vfio-vf-token`. Engineering considerations:: -* The SRIOV-FEC Operator uses CPU cores from the `isolated` CPU set. - +* The SRIOV-FEC Operator uses CPU cores from the isolated CPU set. * You can validate FEC readiness as part of the pre-checks for application deployment, for example, by extending the validation policy. 
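+
+For illustration only, FEC acceleration is provisioned with an `SriovFecClusterConfig` CR from the SRIOV-FEC Operator. The following minimal sketch uses hypothetical values; the namespace, PCI address, VF count, and the contents of `bbDevConfig` depend on your accelerator hardware and Operator version:
+
+[source,yaml]
+----
+apiVersion: sriovfec.intel.com/v2
+kind: SriovFecClusterConfig
+metadata:
+  name: config
+  namespace: vran-acceleration-operators
+spec:
+  priority: 1
+  acceleratorSelector:
+    # Hypothetical PCI address of the FEC accelerator
+    pciAddress: 0000:f7:00.0
+  physicalFunction:
+    # vfio PF drivers require the injected vfio-token described above
+    pfDriver: vfio-pci
+    vfDriver: vfio-pci
+    vfAmount: 16
+    # Device-specific bbdev configuration (for example, acc100) goes here
+    bbDevConfig: {}
+----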
diff --git a/modules/telco-ran-topology-aware-lifecycle-manager-talm.adoc b/modules/telco-ran-topology-aware-lifecycle-manager-talm.adoc index 56adeae2410a..63cea403fa6b 100644 --- a/modules/telco-ran-topology-aware-lifecycle-manager-talm.adoc +++ b/modules/telco-ran-topology-aware-lifecycle-manager-talm.adoc @@ -1,24 +1,34 @@ // Module included in the following assemblies: // +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc // * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc :_mod-docs-content-type: REFERENCE [id="telco-ran-topology-aware-lifecycle-manager-talm_{context}"] -= {cgu-operator-full} += Topology Aware Lifecycle Manager New in this release:: * No reference design updates in this release Description:: -{cgu-operator-first} is an Operator that runs only on the hub cluster for managing how changes including cluster and Operator upgrades, configuration, and so on are rolled out to the network. +{cgu-operator-full} is an Operator that runs only on the hub cluster for managing how changes like cluster upgrades, Operator upgrades, and cluster configuration are rolled out to the network. +{cgu-operator} supports the following features: -Limits and requirements:: -* {cgu-operator} supports concurrent cluster deployment in batches of 400. +* Progressive rollout of policy updates to fleets of clusters in user-configurable batches. +* Per-cluster actions that add `ztp-done` labels or other user-configurable labels to managed clusters following configuration changes. +* Pre-caching of {sno} cluster images: {cgu-operator} supports optional pre-caching of OpenShift, OLM Operator, and additional user images to {sno} clusters before initiating an upgrade. +The pre-caching feature is not applicable when using the recommended image-based upgrade method for upgrading {sno} clusters. +** Specifying optional pre-caching configurations with `PreCachingConfig` CRs. +Review the link:https://github.com/openshift-kni/cluster-group-upgrades-operator/blob/main/config/pre-cache/precachingconfig.yaml[sample reference `PreCachingConfig` CR] for more information. +** Excluding unused images with configurable filtering. +** Enabling before and after pre-caching storage space validations with configurable space-required parameters. -* Precaching and backup features are for {sno} clusters only. Limits and requirements:: +* Supports concurrent cluster deployment in batches of 400 +* Pre-caching and backup are limited to {sno} clusters only Engineering considerations:: +* The `PreCachingConfig` CR is optional and does not need to be created if you only need to pre-cache platform-related OpenShift and OLM Operator images. +* The `PreCachingConfig` CR must be applied before referencing it in the `ClusterGroupUpgrade` CR. +* Only policies with the `ran.openshift.io/ztp-deploy-wave` annotation are automatically applied by {cgu-operator} during cluster installation. -* Only policies that have the `ran.openshift.io/ztp-deploy-wave` annotation are automatically applied by {cgu-operator} during initial cluster installation. - -* You can create further `ClusterGroupUpgrade` CRs to control the policies that {cgu-operator} remediates. +* Any policy can be remediated by {cgu-operator} under control of a user-created `ClusterGroupUpgrade` CR.
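+
+For illustration only, a minimal `ClusterGroupUpgrade` CR that remediates bound policies in batches might look like the following sketch; the cluster and policy names are hypothetical placeholders:
+
+[source,yaml]
+----
+apiVersion: ran.openshift.io/v1alpha1
+kind: ClusterGroupUpgrade
+metadata:
+  name: cgu-du-upgrade
+  namespace: default
+spec:
+  # Hypothetical managed clusters to remediate
+  clusters:
+  - sno-du-1
+  - sno-du-2
+  # Hypothetical policies bound to the clusters
+  managedPolicies:
+  - du-upgrade-policy
+  enable: true
+  preCaching: false
+  remediationStrategy:
+    # Number of clusters that are updated concurrently in each batch
+    maxConcurrency: 2
+    # Timeout in minutes for the complete update
+    timeout: 240
+----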
diff --git a/modules/telco-ran-workload-partitioning.adoc b/modules/telco-ran-workload-partitioning.adoc index 0f1edce6663e..411da42690cf 100644 --- a/modules/telco-ran-workload-partitioning.adoc +++ b/modules/telco-ran-workload-partitioning.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc :_mod-docs-content-type: REFERENCE [id="telco-ran-workload-partitioning_{context}"] @@ -10,17 +10,18 @@ New in this release:: * No reference design updates in this release Description:: -Workload partitioning pins OpenShift platform and Day 2 Operator pods that are part of the DU profile to the reserved CPU set and removes the reserved CPU from node accounting. +Workload partitioning pins {product-title} and Day 2 Operator pods that are part of the DU profile to the reserved CPU set and removes the reserved CPU from node accounting. This leaves all unreserved CPU cores available for user workloads. +Workload partitioning is enabled at installation time by setting `cpuPartitioningMode: AllNodes` in the installation parameters. +The set of management partition cores is defined by the reserved CPU set that you configure in the `PerformanceProfile` CR. Limits and requirements:: * `Namespace` and `Pod` CRs must be annotated to allow the pod to be applied to the management partition - * Pods with CPU limits cannot be allocated to the partition. This is because mutation can change the pod QoS. - -* For more information about the minimum number of CPUs that can be allocated to the management partition, see xref:../../telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc#telco-ran-node-tuning-operator_ran-ref-design-components[Node Tuning Operator]. +* For more information about the minimum number of CPUs that can be allocated to the management partition, see "Node Tuning Operator". Engineering considerations:: -* Workload Partitioning pins all management pods to reserved cores. +* Workload partitioning pins all management pods to reserved cores. A sufficient number of cores must be allocated to the reserved set to account for operating system, management pods, and expected spikes in CPU use that occur when the workload starts, the node reboots, or other system events happen. diff --git a/modules/telco-ref-design-overview.adoc b/modules/telco-ref-design-overview.adoc index 40b5f7e715f7..b272d2754b6a 100644 --- a/modules/telco-ref-design-overview.adoc +++ b/modules/telco-ref-design-overview.adoc @@ -1,9 +1,11 @@ // Module included in the following assemblies: // -// * +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc +// * scalability_and_performance/telco_ref_design_specs/telco-ref-design-specs-overview.adoc + :_mod-docs-content-type: CONCEPT [id="telco-ref-design-overview_{context}"] -= Reference design specifications for telco 5G deployments += Reference design specifications for telco RAN DU 5G deployments Red Hat and certified partners offer deep technical expertise and support for networking and operational capabilities required to run telco applications on {product-title} {product-version} clusters. @@ -15,3 +17,10 @@ Following the RDS minimizes high severity escalations and improves application s 5G use cases are evolving and your workloads are continually changing.
Red Hat is committed to iterating over the telco core and RAN DU RDS to support evolving requirements based on customer and partner feedback. + +The reference configuration includes the configuration of the far edge clusters and hub cluster components. + +The reference configurations in this document are deployed using a centrally managed hub cluster infrastructure as shown in the following image. + +.Telco RAN DU deployment architecture +image::474_OpenShift_OpenShift_RAN_RDS_arch_updates_1023.png[A diagram showing two distinctive network far edge deployment processes, one showing how the hub cluster uses {ztp} to install managed clusters, and the other showing how the hub cluster uses TALM to apply policies to managed clusters] diff --git a/modules/telco-reference-application-workload-characteristics.adoc b/modules/telco-reference-application-workload-characteristics.adoc index 02d4e502c310..2d0237a5e78f 100644 --- a/modules/telco-reference-application-workload-characteristics.adoc +++ b/modules/telco-reference-application-workload-characteristics.adoc @@ -29,4 +29,4 @@ query=avg_over_time(pod:container_cpu_usage:sum{namespace="openshift-kube-apiser * Application logs are not collected by the platform log collector -* Aggregate traffic on the primary CNI is less than 1 MBps +* Aggregate traffic on the primary CNI is less than 1 Mbps diff --git a/modules/using-cluster-compare-telco-ref.adoc b/modules/using-cluster-compare-telco-ref.adoc index 7c90bad5138b..25ceb8fe3003 100644 --- a/modules/using-cluster-compare-telco-ref.adoc +++ b/modules/using-cluster-compare-telco-ref.adoc @@ -1,5 +1,8 @@ // Module included in the following assemblies: - +// +// * scalability_and_performance/cluster-compare/using-the-cluster-compare-plugin.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc +// * scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-crs.adoc // *scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-crs.adoc // *scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc // *scalability_and_performance/cluster-compare/using-the-cluster-compare-plugin.adoc @@ -24,20 +27,20 @@ ifdef::cluster-compare-core,cluster-compare-ran[= Comparing a cluster with the { //Intro for procedure in telco core/RAN RDS assembly ifdef::cluster-compare-core,cluster-compare-ran[] -After you deploy a {rds} cluster, you can use the `cluster-compare` plugin to assess the cluster's compliance with the {rds} reference design specification (RDS). The `cluster-compare` plugin is an OpenShift CLI (`oc`) plugin. The plugin uses a {rds} reference configuration to validate the cluster with the {rds} custom resources (CRs). +After you deploy a {rds} cluster, you can use the `cluster-compare` plugin to assess the cluster's compliance with the {rds} reference design specification (RDS). The `cluster-compare` plugin is an OpenShift CLI (`oc`) plugin. The plugin uses a {rds} reference configuration to validate the cluster with the {rds} custom resources (CRs). -The plugin-specific reference configuration for {rds} is packaged in a container image with the {rds} CRs. +The plugin-specific reference configuration for {rds} is packaged in a container image with the {rds} CRs. For further information about the `cluster-compare` plugin, see "Understanding the cluster-compare plugin". 
endif::cluster-compare-core,cluster-compare-ran[] //Intro for procedure in cluster-compare assembly ifdef::cluster-compare[] -You can use the `cluster-compare` plugin to compare a reference configuration with a configuration from a live cluster or `must-gather` data. +You can use the `cluster-compare` plugin to compare a reference configuration with a configuration from a live cluster or `must-gather` data. This example compares a configuration from a live cluster with the telco core reference configuration. The telco core reference configuration is derived from the telco core reference design specifications (RDS). The telco core RDS is designed for clusters to support large scale telco applications including control plane and some centralized data plane functions. -The reference configuration is packaged in a container image with the telco core RDS. +The reference configuration is packaged in a container image with the telco core RDS. For further examples of using the `cluster-compare` plugin with the telco core and telco RAN distributed unit (DU) profiles, see the "Additional resources" section. endif::cluster-compare[] diff --git a/modules/ztp-telco-hub-cluster-software-versions.adoc b/modules/ztp-telco-hub-cluster-software-versions.adoc new file mode 100644 index 000000000000..c1a5cc695f20 --- /dev/null +++ b/modules/ztp-telco-hub-cluster-software-versions.adoc @@ -0,0 +1,32 @@ +// Module included in the following assemblies: +// +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc + +:_mod-docs-content-type: REFERENCE +[id="ztp-telco-hub-cluster-software-versions_{context}"] += Telco RAN DU {product-version} hub cluster validated software components + +The Red Hat telco RAN {product-version} solution has been validated using the following Red Hat software products for {product-title} hub clusters. + +.Telco hub cluster validated software components +[cols=2*, width="80%", options="header"] +|==== +|Component +|Software version + +|Hub cluster version +|4.18 + +|{rh-rhacm-first} +|2.12^1^ + +|{gitops-title} +|1.14 + +|{ztp} site generate plugins +|4.18 + +|{cgu-operator-first} +|4.18 +|==== +[1] This table will be updated when the aligned {rh-rhacm} version 2.13 is released. diff --git a/modules/ztp-telco-ran-software-versions.adoc b/modules/ztp-telco-ran-software-versions.adoc index 44506f2eb025..194a9ab65a08 100644 --- a/modules/ztp-telco-ran-software-versions.adoc +++ b/modules/ztp-telco-ran-software-versions.adoc @@ -1,13 +1,12 @@ // Module included in the following assemblies: // -// * edge_computing/ztp-preparing-the-hub-cluster.adoc -// * scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-software-artifacts.adoc +// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc :_mod-docs-content-type: REFERENCE [id="ztp-telco-ran-software-versions_{context}"] = Telco RAN DU {product-version} validated software components -The Red Hat telco RAN DU {product-version} solution has been validated using the following Red Hat software products for {product-title} managed clusters and hub clusters. +The Red Hat telco RAN DU {product-version} solution has been validated using the following Red Hat software products for {product-title} managed clusters. 
.Telco RAN DU managed cluster validated software components [cols=2*, width="80%", options="header"] @@ -16,48 +15,27 @@ The Red Hat telco RAN DU {product-version} solution has been validated using the |Software version |Managed cluster version -|4.17 +|4.18 |Cluster Logging Operator -|6.0 +|6.1^1^ |Local Storage Operator -|4.17 +|4.18 -|{oadp-first} -|1.4.1 +|OpenShift API for Data Protection (OADP) +|1.4 |PTP Operator -|4.17 +|4.18 -|SRIOV Operator -|4.17 +|SR-IOV Operator +|4.18 |SRIOV-FEC Operator -|2.9 +|2.10 -|{lcao} -|4.17 -|==== - -.Hub cluster validated software components -[cols=2*, width="80%", options="header"] -|==== -|Component -|Software version - -|Hub cluster version -|4.17 - -|{rh-rhacm-first} -|2.11 - -|{ztp} plugin -|4.17 - -|{gitops-title} -|1.13 - -|{cgu-operator-first} -|4.17 +|Lifecycle Agent +|4.18 |==== +[1] This table will be updated when the aligned Cluster Logging Operator version 6.2 is released. diff --git a/scalability_and_performance/cluster-compare/using-the-cluster-compare-plugin.adoc b/scalability_and_performance/cluster-compare/using-the-cluster-compare-plugin.adoc index e5d068a196e1..825ba3184d0a 100644 --- a/scalability_and_performance/cluster-compare/using-the-cluster-compare-plugin.adoc +++ b/scalability_and_performance/cluster-compare/using-the-cluster-compare-plugin.adoc @@ -24,5 +24,5 @@ include::modules/using-cluster-compare-telco-ref.adoc[leveloffset=+1] [role="_additional-resources"] [id="additional-resources_{context}"] == Additional resources -* xref:../../scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc#using-cluster-compare-telco_ref_ran-ref-design-crs[Comparing a cluster with the telco RAN DU reference configuration] +* xref:../../scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc#using-cluster-compare-telco_ref_ran-ref-design-crs[Comparing a cluster with the telco RAN DU reference configuration] * xref:../../scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-crs.adoc#using-cluster-compare-telco_ref_ran-core-ref-design-crs[Comparing a cluster with the telco core reference configuration] diff --git a/scalability_and_performance/index.adoc b/scalability_and_performance/index.adoc index c246ab96598a..6760db1a0454 100644 --- a/scalability_and_performance/index.adoc +++ b/scalability_and_performance/index.adoc @@ -28,7 +28,7 @@ xref:../scalability_and_performance/recommended-performance-scale-practices/reco [discrete] == Telco reference design specifications -xref:../scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-design-spec.adoc#telco-ran-architecture-overview_ran-ref-design-spec[Telco RAN DU specification] +xref:../scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc#telco-ran-du-ref-design-specs[Telco RAN DU reference design specification for {product-title} {product-version}] xref:../scalability_and_performance/telco_ref_design_specs/core/telco-core-rds-overview.adoc#telco-core-cluster-service-based-architecture-and-networking-topology_core-ref-design-overview[Telco core reference design specification] diff --git a/scalability_and_performance/telco_ref_design_specs/ran/_attributes b/scalability_and_performance/telco_ran_du_ref_design_specs/_attributes similarity index 100% rename from scalability_and_performance/telco_ref_design_specs/ran/_attributes rename to scalability_and_performance/telco_ran_du_ref_design_specs/_attributes diff --git a/scalability_and_performance/telco_ref_design_specs/ran/images 
b/scalability_and_performance/telco_ran_du_ref_design_specs/images similarity index 100% rename from scalability_and_performance/telco_ref_design_specs/ran/images rename to scalability_and_performance/telco_ran_du_ref_design_specs/images diff --git a/scalability_and_performance/telco_ref_design_specs/ran/modules b/scalability_and_performance/telco_ran_du_ref_design_specs/modules similarity index 100% rename from scalability_and_performance/telco_ref_design_specs/ran/modules rename to scalability_and_performance/telco_ran_du_ref_design_specs/modules diff --git a/scalability_and_performance/telco_ref_design_specs/ran/snippets b/scalability_and_performance/telco_ran_du_ref_design_specs/snippets similarity index 100% rename from scalability_and_performance/telco_ref_design_specs/ran/snippets rename to scalability_and_performance/telco_ran_du_ref_design_specs/snippets diff --git a/scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc b/scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc new file mode 100644 index 000000000000..8c64cc981b5a --- /dev/null +++ b/scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc @@ -0,0 +1,171 @@ +:_mod-docs-content-type: ASSEMBLY +:telco-ran: +[id="telco-ran-du-ref-design-specs"] += Telco RAN DU reference design specification for {product-title} +include::_attributes/common-attributes.adoc[] +:context: telco-ran-du + +toc::[] + +The telco RAN DU reference design specification (RDS) describes the configuration for clusters running on commodity hardware to host 5G workloads in the Radio Access Network (RAN). +It captures the recommended, tested, and supported configurations to get reliable and repeatable performance for a cluster running the telco RAN DU profile. 
+ +include::modules/telco-ref-design-overview.adoc[leveloffset=+1] + +include::modules/telco-ran-core-ref-design-spec.adoc[leveloffset=+2] + +include::modules/telco-deviations-from-the-ref-design.adoc[leveloffset=+2] + +include::modules/telco-ran-engineering-considerations-for-the-ran-du-use-model.adoc[leveloffset=+2] + +include::modules/telco-ran-du-application-workloads.adoc[leveloffset=+2] + +include::modules/telco-ran-du-reference-design-components.adoc[leveloffset=+1] + +include::modules/telco-ran-bios-tuning.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../edge_computing/ztp-deploying-far-edge-sites.adoc#ztp-configuring-host-firmware-with-gitops-ztp_ztp-deploying-far-edge-sites[Managing host firmware settings with {ztp}] + +* xref:../../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-du-configuring-host-firmware-requirements_sno-configure-for-vdu[Configuring host firmware for low latency and high performance] + +* xref:../../scalability_and_performance/cnf-provisioning-low-latency-workloads.adoc#cnf-provisioning-low-latency-workloads[Provisioning real-time and low latency workloads] + +include::modules/telco-ran-node-tuning-operator.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#about_irq_affinity_setting_cnf-low-latency-perf-profile[Finding the effective IRQ affinity setting for a node] + +include::modules/telco-ran-ptp-operator.adoc[leveloffset=+2] + +include::modules/telco-ran-sr-iov-operator.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../networking/hardware_networks/about-sriov.adoc#supported-devices_about-sriov[Supported devices] + +* xref:../../networking/hardware_networks/configuring-sriov-qinq-support.adoc#configuring-qinq-support[Configuring QinQ support for SR-IOV enabled workloads] + +include::modules/telco-ran-logging.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* link:https://docs.openshift.com/container-platform/4.17/observability/logging/logging-6.0/log6x-about.html[Logging 6.0] + +include::modules/telco-ran-sriov-fec-operator.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* link:https://catalog.redhat.com/software/containers/intel/sriov-fec-operator/6017de1669aea3122e6fa15f[SRIOV-FEC Operator for Intel® vRAN Dedicated Accelerator manager container] + +include::modules/telco-ran-lca-operator.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../edge_computing/image_based_upgrade/cnf-understanding-image-based-upgrade.adoc#cnf-understanding-image-based-upgrade[Understanding the image-based upgrade for {sno} clusters] + +* xref:../../edge_computing/image_based_upgrade/preparing_for_image_based_upgrade/cnf-image-based-upgrade-shared-container-partition.adoc#ztp-image-based-upgrade-shared-container-partition_shared-container-partition[Configuring a shared container directory between ostree stateroots when using {ztp}] + +include::modules/telco-ran-local-storage-operator.adoc[leveloffset=+2] + +include::modules/telco-ran-lvms-operator.adoc[leveloffset=+2] + +include::modules/telco-ran-workload-partitioning.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../scalability_and_performance/enabling-workload-partitioning.adoc#enabling-workload-partitioning[Workload partitioning] + 
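+For illustration only, the following minimal sketch shows the annotation that admits workloads to the management partition described above; the namespace name is a hypothetical placeholder, and such pods must also satisfy the limits described in "Workload partitioning":
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: example-management-ns
+  annotations:
+    # Allows pods in this namespace to be annotated into the management partition
+    workload.openshift.io/allowed: management
+----
+
+Individual pods then opt in with the `target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'` pod annotation.
+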
+include::modules/telco-ran-cluster-tuning.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../installing/overview/cluster-capabilities.adoc#cluster-capabilities[Cluster capabilities] + +include::modules/telco-ran-machine-configuration.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-sno-install-time-cluster-config[Recommended {sno} cluster configuration for vDU application workloads]. + +[id="telco-ran-du-deployment-components"] +== Telco RAN DU deployment components + +The following sections describe the various {product-title} components and configurations that you use to configure the hub cluster with {rh-rhacm}. + +include::modules/telco-ran-red-hat-advanced-cluster-management-rhacm.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../edge_computing/ztp-deploying-far-edge-clusters-at-scale.adoc#about-ztp_ztp-deploying-far-edge-clusters-at-scale[Using {ztp} to provision clusters at the network far edge] + +* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes[Red Hat Advanced Cluster Management for Kubernetes] + +include::modules/telco-ran-siteconfig-operator.adoc[leveloffset=+2] + +include::modules/telco-ran-topology-aware-lifecycle-manager-talm.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../edge_computing/cnf-talm-for-cluster-upgrades.adoc#cnf-talm-for-cluster-updates[Updating managed clusters with the {cgu-operator-full}] + +include::modules/telco-ran-gitops-operator-and-ztp-plugins.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../edge_computing/ztp-preparing-the-hub-cluster.adoc#ztp-preparing-the-ztp-git-repository-ver-ind_ztp-preparing-the-hub-cluster[Preparing the {ztp} site configuration repository for version independence] + +* xref:../../edge_computing/policygentemplate_for_ztp/ztp-advanced-policy-config.adoc#ztp-adding-new-content-to-gitops-ztp_ztp-advanced-policy-config[Adding custom content to the {ztp} pipeline] + +include::modules/telco-ran-agent-based-installer-abi.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#installing-with-agent-based-installer[Installing an {product-title} cluster with the Agent-based Installer] + +[id="telco-ran-du-reference-configuration-crs"] +== Telco RAN DU reference configuration CRs + +Use the following custom resources (CRs) to configure and deploy {product-title} clusters with the telco RAN DU profile. +Use the CRs to form the common baseline used in all the specific use models unless otherwise indicated. + +[NOTE] +==== +You can extract the complete set of RAN DU CRs from the `ztp-site-generate` container image. +See xref:../../edge_computing/ztp-preparing-the-hub-cluster.adoc#ztp-preparing-the-ztp-git-repository_ztp-preparing-the-hub-cluster[Preparing the {ztp} site configuration repository] for more information.
+==== + +[role="_additional-resources"] +.Additional resources +* xref:../../scalability_and_performance/cluster-compare/understanding-the-cluster-compare-plugin.adoc#understanding-the-cluster-compare-plugin[Understanding the cluster-compare plugin] + +include::modules/telco-ran-crs-cluster-tuning.adoc[leveloffset=+2] + +include::modules/telco-ran-crs-day-2-operators.adoc[leveloffset=+2] + +include::modules/telco-ran-crs-machine-configuration.adoc[leveloffset=+2] + +:context: ran-ref-design-crs +include::modules/using-cluster-compare-telco-ref.adoc[leveloffset=+1] +:context: telco-ran-du + +include::modules/ztp-telco-ran-software-versions.adoc[leveloffset=+1] + +include::modules/ztp-telco-hub-cluster-software-versions.adoc[leveloffset=+1] + +:!telco-ran: diff --git a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-du-overview.adoc b/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-du-overview.adoc deleted file mode 100644 index 81b9eb9219c0..000000000000 --- a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-du-overview.adoc +++ /dev/null @@ -1,33 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -:telco-ran: -include::_attributes/common-attributes.adoc[] -[id="telco-ran-du-overview"] -= {rds-caps} use model overview -:context: ran-ref-design-overview - -toc::[] - -Use the following information to plan {rds} workloads, cluster resources, and hardware specifications for the hub cluster and managed {sno} clusters. - -include::modules/telco-ran-du-application-workloads.adoc[leveloffset=+1] - -include::modules/telco-reference-application-workload-characteristics.adoc[leveloffset=+1] - -include::modules/telco-ran-managed-cluster-resources.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../telco_ref_design_specs/ran/telco-ran-ref-software-artifacts.adoc#ztp-telco-ran-software-versions_ran-ref-design-validation[Telco RAN DU {product-version} validated software components] - -include::modules/telco-ran-hub-cluster-management.adoc[leveloffset=+1] - -include::modules/telco-ran-du-reference-components.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* For details of the telco RAN RDS KPI test results, see link:https://access.redhat.com/articles/7089662[Telco RAN DU 4.17 reference design specification KPI test results]. -This information is only available to customers and partners. - -:!telco-ran: diff --git a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-design-spec.adoc b/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-design-spec.adoc deleted file mode 100644 index f7232abd07f4..000000000000 --- a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-design-spec.adoc +++ /dev/null @@ -1,18 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -:telco-ran: -include::_attributes/common-attributes.adoc[] -[id="telco-ran-ref-design-spec"] -= {rds-caps} {product-version} reference design overview -:context: ran-ref-design-spec - -toc::[] - -The {rds-first} {product-version} reference design configures an {product-title} {product-version} cluster running on commodity hardware to host {rds} workloads. -It captures the recommended, tested, and supported configurations to get reliable and repeatable performance for a cluster running the {rds} profile. - -// Removing this because we already highlight what is new in the components section. 
-//include::modules/telco-ran-ref-design-features.adoc[leveloffset=+1] - -include::modules/telco-ran-architecture-overview.adoc[leveloffset=+1] - -:!telco-ran: diff --git a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc b/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc deleted file mode 100644 index d391ef0388fc..000000000000 --- a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-components.adoc +++ /dev/null @@ -1,132 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -:telco-ran: -include::_attributes/common-attributes.adoc[] -[id="telco-ran-ref-du-components"] -= {rds-caps} {product-version} reference design components -:context: ran-ref-design-components - -toc::[] - -The following sections describe the various {product-title} components and configurations that you use to configure and deploy clusters to run RAN DU workloads. - -include::modules/telco-ran-bios-tuning.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../edge_computing/ztp-deploying-far-edge-sites.adoc#ztp-configuring-host-firmware-with-gitops-ztp_ztp-deploying-far-edge-sites[Managing host firmware settings with {ztp}] - -* xref:../../../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-du-configuring-host-firmware-requirements_sno-configure-for-vdu[Configuring host firmware for low latency and high performance] - -* xref:../../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#cnf-create-performance-profiles[Creating a performance profile] - -include::modules/telco-ran-node-tuning-operator.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile.adoc#about_irq_affinity_setting_cnf-low-latency-perf-profile[Finding the effective IRQ affinity setting for a node] - -include::modules/telco-ran-ptp-operator.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-sno-du-configuring-ptp_sno-configure-for-vdu[Recommended PTP {sno} cluster configuration for vDU application workloads] - -include::modules/telco-ran-sr-iov-operator.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../edge_computing/ztp-preparing-the-hub-cluster.adoc#ztp-preparing-the-ztp-git-repository-ver-ind_ztp-preparing-the-hub-cluster[Preparing the {ztp} site configuration repository for version independence] - -* xref:../../../networking/hardware_networks/configuring-sriov-qinq-support.adoc#configuring-qinq-support[Configuring QinQ support for SR-IOV enabled workloads] - -include::modules/telco-ran-logging.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -//* xref:../../../observability/logging/logging-6.0/log6x-about.adoc#log6x-about[About logging] -* link:https://docs.openshift.com/container-platform/4.17/observability/logging/logging-6.0/log6x-about.html[About logging] - -include::modules/telco-ran-sriov-fec-operator.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* link:https://catalog.redhat.com/software/containers/intel/sriov-fec-operator/6017de1669aea3122e6fa15f[SRIOV-FEC Operator for Intel® vRAN Dedicated Accelerator manager container] - -include::modules/telco-ran-lca-operator.adoc[leveloffset=+1] - -[role="_additional-resources"] 
-.Additional resources - -* xref:../../../edge_computing/image_based_upgrade/cnf-understanding-image-based-upgrade.adoc#cnf-understanding-image-based-upgrade[Understanding the image-based upgrade for {sno} clusters] - -* xref:../../../edge_computing/image_based_upgrade/preparing_for_image_based_upgrade/cnf-image-based-upgrade-shared-container-partition.adoc#ztp-image-based-upgrade-shared-container-partition_shared-container-partition[Configuring a shared container directory between ostree stateroots when using {ztp}] - -include::modules/telco-ran-local-storage-operator.adoc[leveloffset=+1] - -include::modules/telco-ran-lvms-operator.adoc[leveloffset=+1] - -include::modules/telco-ran-workload-partitioning.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../scalability_and_performance/enabling-workload-partitioning.adoc#enabling-workload-partitioning[Workload partitioning] - -include::modules/telco-ran-cluster-tuning.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../installing/overview/cluster-capabilities.adoc#cluster-capabilities[Cluster capabilities] - -include::modules/telco-ran-machine-configuration.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-sno-install-time-cluster-config[Recommended {sno} cluster configuration for vDU application workloads]. - -[id="telco-reference-ran-du-deployment-components_{context}"] -== {rds-caps} deployment components - -The following sections describe the various {product-title} components and configurations that you use to configure the hub cluster with {rh-rhacm-first}. - -include::modules/telco-ran-red-hat-advanced-cluster-management-rhacm.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../edge_computing/ztp-deploying-far-edge-clusters-at-scale.adoc#about-ztp_ztp-deploying-far-edge-clusters-at-scale[Using {ztp} to provision clusters at the network far edge] - -* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes[Red Hat Advanced Cluster Management for Kubernetes] - -include::modules/telco-ran-topology-aware-lifecycle-manager-talm.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../edge_computing/cnf-talm-for-cluster-upgrades.adoc#cnf-talm-for-cluster-updates[Updating managed clusters with the {cgu-operator-full}] - -include::modules/telco-ran-gitops-operator-and-ztp-plugins.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../edge_computing/ztp-preparing-the-hub-cluster.adoc#ztp-preparing-the-ztp-git-repository-ver-ind_ztp-preparing-the-hub-cluster[Preparing the {ztp} site configuration repository for version independence] - -* xref:../../../edge_computing/policygentemplate_for_ztp/ztp-advanced-policy-config.adoc#ztp-adding-new-content-to-gitops-ztp_ztp-advanced-policy-config[Adding custom content to the {ztp} pipeline] - -include::modules/telco-ran-agent-based-installer-abi.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#installing-with-agent-based-installer[Installing an {product-title} cluster with the Agent-based Installer] - -:!telco-ran: diff --git 
a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc b/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc deleted file mode 100644 index 19e99ddf7aeb..000000000000 --- a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-du-crs.adoc +++ /dev/null @@ -1,43 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -:telco-ran: -include::_attributes/common-attributes.adoc[] -[id="telco-ran-ref-du-crs"] -= {rds-first} reference configuration CRs -:context: ran-ref-design-crs - -toc::[] - -Use the following custom resources (CRs) to configure and deploy {product-title} clusters with the {rds} profile. -Some of the CRs are optional depending on your requirements. -CR fields you can change are annotated in the CR with YAML comments. - -[NOTE] -==== -You can extract the complete set of RAN DU CRs from the `ztp-site-generate` container image. -See xref:../../../edge_computing/ztp-preparing-the-hub-cluster.adoc#ztp-preparing-the-ztp-git-repository_ztp-preparing-the-hub-cluster[Preparing the {ztp} site configuration repository] for more information. -==== - -include::modules/using-cluster-compare-telco-ref.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources -* xref:../../../scalability_and_performance/cluster-compare/understanding-the-cluster-compare-plugin.adoc#understanding-the-cluster-compare-plugin[Understanding the cluster-compare plugin] - -include::modules/telco-ran-crs-day-2-operators.adoc[leveloffset=+1] - -include::modules/telco-ran-crs-cluster-tuning.adoc[leveloffset=+1] - -include::modules/telco-ran-crs-machine-configuration.adoc[leveloffset=+1] - -[id="telco-reference-ran-du-use-case-yaml_{context}"] -== YAML reference - -The following is a complete reference for all the custom resources (CRs) that make up the {rds} {product-version} reference configuration. - -include::modules/telco-ran-yaml-ref-day-2-operators.adoc[leveloffset=+2] - -include::modules/telco-ran-yaml-ref-cluster-tuning.adoc[leveloffset=+2] - -include::modules/telco-ran-yaml-ref-machine-configuration.adoc[leveloffset=+2] - -:!telco-ran: diff --git a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-software-artifacts.adoc b/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-software-artifacts.adoc deleted file mode 100644 index 4ebf1477d7fb..000000000000 --- a/scalability_and_performance/telco_ref_design_specs/ran/telco-ran-ref-software-artifacts.adoc +++ /dev/null @@ -1,14 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -:telco-ran: -include::_attributes/common-attributes.adoc[] -[id="telco-ran-ref-software-artifacts"] -= {rds-caps} reference configuration software specifications -:context: ran-ref-design-validation - -toc::[] - -The following information describes the telco RAN DU reference design specification (RDS) validated software versions. 
From 0d1e077c741c1d51fe7852de205b920e38ef37b5 Mon Sep 17 00:00:00 2001
From: Michael Burke
Date: Thu, 20 Feb 2025 13:53:50 -0500
Subject: [PATCH 329/669] MCO OCL remove wording around layered MCP

---
 .../coreos-layering-configuring-on-extensions.adoc |  5 ++---
 .../coreos-layering-configuring-on-modifying.adoc  |  7 +++----
 modules/coreos-layering-configuring-on-revert.adoc |  1 -
 modules/coreos-layering-configuring-on.adoc        | 13 +++++++++----
 snippets/coreos-layering-configuring-on-pause.adoc |  7 +++----
 5 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/modules/coreos-layering-configuring-on-extensions.adoc b/modules/coreos-layering-configuring-on-extensions.adoc
index 42d2be638af9..41fab05636e7 100644
--- a/modules/coreos-layering-configuring-on-extensions.adoc
+++ b/modules/coreos-layering-configuring-on-extensions.adoc
@@ -31,7 +31,7 @@ apiVersion: machineconfiguration.openshift.io/v1 <1>
 kind: MachineConfig
 metadata:
   labels:
-    machineconfiguration.openshift.io/role: layered <2>
+    machineconfiguration.openshift.io/role: worker <2>
   name: 80-worker-extensions
 spec:
   config:
@@ -80,9 +80,8 @@ $ oc get machineconfigpools
 [source,terminal]
 ----
 NAME      CONFIG                                              UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
-layered   rendered-layered-221507009cbcdec0eec8ab3ccd789d18   False     True       False      1              0                   0                     0                      167m <1>
 master    rendered-master-a0b404d061a6183cc36d302363422aba    True      False      False      3              3                   3                     0                      3h38m
-worker    rendered-worker-221507009cbcdec0eec8ab3ccd789d18    True      False      False      2              2                   2                     0                      3h38m
+worker    rendered-worker-221507009cbcdec0eec8ab3ccd789d18    False     True       False      2              2                   2                     0                      3h38m <1>
 ----
 <1> The value `FALSE` in the `UPDATED` column indicates that the `MachineOSBuild` object is building. When the `UPDATED` column reports `FALSE`, the new custom layered image has rolled out to the nodes.
diff --git a/modules/coreos-layering-configuring-on-modifying.adoc b/modules/coreos-layering-configuring-on-modifying.adoc
index 2bba8718dfd9..e77afc15768f 100644
--- a/modules/coreos-layering-configuring-on-modifying.adoc
+++ b/modules/coreos-layering-configuring-on-modifying.adoc
@@ -27,10 +27,10 @@ include::snippets//coreos-layering-configuring-on-pause.adoc[]
 apiVersion: machineconfiguration.openshift.io/v1alpha1
 kind: MachineOSConfig
 metadata:
-  name: layered-alpha1
+  name: layered
 spec:
   machineConfigPool:
-    name: layered
+    name: worker
   buildInputs:
     containerFile:
     - containerfileArch: noarch
@@ -92,9 +92,8 @@ $ oc get machineconfigpools
 [source,terminal]
 ----
 NAME      CONFIG                                              UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
-layered   rendered-layered-221507009cbcdec0eec8ab3ccd789d18   False     True       False      1              0                   0                     0                      167m <1>
 master    rendered-master-a0b404d061a6183cc36d302363422aba    True      False      False      3              3                   3                     0                      3h38m
-worker    rendered-worker-221507009cbcdec0eec8ab3ccd789d18    True      False      False      2              2                   2                     0                      3h38m
+worker    rendered-worker-221507009cbcdec0eec8ab3ccd789d18    False     True       False      2              2                   2                     0                      3h38m <1>
 ----
 <1> The value `FALSE` in the `UPDATED` column indicates that the `MachineOSBuild` object is building. When the `UPDATED` column reports `FALSE`, the new custom layered image has rolled out to the nodes.
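Both status tables above are read by polling `oc get machineconfigpools` until the `UPDATED` column flips. A non-authoritative shortcut, assuming the standard `Updated` condition on the `worker` pool used in these examples, is to block on the rollout instead:

[source,terminal]
----
$ oc wait machineconfigpool/worker --for=condition=Updated=True --timeout=30m
----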
diff --git a/modules/coreos-layering-configuring-on-revert.adoc b/modules/coreos-layering-configuring-on-revert.adoc
index 855b9c7fe48f..29b0470b42ab 100644
--- a/modules/coreos-layering-configuring-on-revert.adoc
+++ b/modules/coreos-layering-configuring-on-revert.adoc
@@ -45,7 +45,6 @@ $ oc get mcp
 [source,terminal]
 ----
 NAME      CONFIG                                              UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
-layered   rendered-layered-bde4e4206442c0a48b1a1fb35ba56e85   True      False      False      0              0                   0                     0                      4h46m
 master    rendered-master-8332482204e0b76002f15ecad15b6c2d    True      False      False      3              3                   3                     0                      5h26m
 worker    rendered-worker-bde4e4206442c0a48b1a1fb35ba56e85    False     True       False      3              2                   2                     0                      5h26m <1>
 ----
diff --git a/modules/coreos-layering-configuring-on.adoc b/modules/coreos-layering-configuring-on.adoc
index 77d129bb2585..a702f7f38982 100644
--- a/modules/coreos-layering-configuring-on.adoc
+++ b/modules/coreos-layering-configuring-on.adoc
@@ -29,7 +29,12 @@ NAME                                             READY   STATUS    RESTARTS
 build-layered-c8765e26ebc87e1e17a7d6e0a78e8bae   2/2     Running   0          11m
 ----
 
-When the build is complete, the MCO pushes the new custom layered image to your repository for use when deploying new nodes. You can see the digested image pull spec for the new custom layered image in the `MachineOSBuild` object and `machine-os-builder` pod.
+When the build is complete, the MCO pushes the new custom layered image to your repository and rolls it out to the nodes in the associated machine config pool. You can see the digested image pull spec for the new custom layered image in the `MachineOSBuild` object and `machine-os-builder` pod.
+
+[TIP]
+====
+You can test a `MachineOSBuild` object to make sure it builds correctly without rolling out the custom layered image to active nodes by using a custom machine config pool that contains non-production nodes. Alternatively, you can use a custom machine config pool that has no nodes. The `MachineOSBuild` object builds even if there are no nodes for the MCO to deploy the custom layered image onto.
+====
 
 You should not need to interact with these new objects or the `machine-os-builder` pod. However, you can use all of these resources for troubleshooting, if necessary.
@@ -99,7 +104,7 @@ metadata:
   name: layered <2>
 spec:
   machineConfigPool:
-    name: <3>
+    name: worker <3>
   buildInputs:
     containerFile: <4>
     - containerfileArch: noarch
@@ -120,7 +125,7 @@ spec:
 ----
 <1> Specifies the `machineconfiguration.openshift.io/v1` API that is required for `MachineConfig` CRs.
 <2> Specifies a name for the `MachineOSConfig` object. This name is used with other on-cluster layering resources. The examples in this documentation use the name `layered`.
-<3> Specifies the name of the machine config pool associated with the nodes where you want to deploy the custom layered image.
+<3> Specifies the name of the machine config pool associated with the nodes where you want to deploy the custom layered image. The examples in this documentation use the `worker` machine config pool.
 <4> Specifies the Containerfile to configure the custom layered image.
 <5> Specifies the name of the image builder to use. This must be `PodImageBuilder`.
 <6> Specifies the name of the pull secret that the MCO needs in order to pull the base operating system image from the registry.
@@ -223,7 +228,7 @@ Spec:
   Desired Config:
     Name:                    rendered-layered-ad5a3cad36303c363cf458ab0524e7c0
   Machine OS Config:
-    Name:                    layered-alpha1
+    Name:                    layered
   Rendered Image Pushspec:   image-registry.openshift-image-registry.svc:5000/openshift-machine-config-operator/os-images:layered-ad5a3cad36303c363cf458ab0524e7c0
 # ...
 Last Transition Time:        2025-02-12T19:21:28Z
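The new TIP in the hunk above suggests exercising a `MachineOSBuild` against a machine config pool that has no nodes. A minimal sketch of such a pool follows; the `layering-test` name and node label are illustrative assumptions, not part of this patch:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: layering-test
spec:
  machineConfigSelector:
    matchExpressions:
    - key: machineconfiguration.openshift.io/role
      operator: In
      values: [worker, layering-test]
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/layering-test: "" # while no node carries this label, the pool stays empty
----

Pointing a `MachineOSConfig` at this pool triggers the image build without rolling anything out to active nodes.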
diff --git a/snippets/coreos-layering-configuring-on-pause.adoc b/snippets/coreos-layering-configuring-on-pause.adoc
index 8892638b0d6a..7055c74705de 100644
--- a/snippets/coreos-layering-configuring-on-pause.adoc
+++ b/snippets/coreos-layering-configuring-on-pause.adoc
@@ -5,7 +5,7 @@
 
 :_mod-docs-content-type: SNIPPET
 
-Making certain changes to a `MachineOSConfig` object triggers an automatic rebuild of the associated custom layered image. You can mitigate the effects of the rebuild by pausing the machine config pool where the custom layered image is applied as described in "Pausing the machine config pools." For example, if you want to remove and replace a `MachineOSConfig` object, pausing the machine config pools before making the change prevents the MCO from reverting the associated nodes to the base image, reducing the number of reboots needed. 
+Making certain changes to a `MachineOSConfig` object triggers an automatic rebuild of the associated custom layered image. You can mitigate the effects of the rebuild by pausing the machine config pool where the custom layered image is applied as described in "Pausing the machine config pools." For example, if you want to remove and replace a `MachineOSConfig` object, pausing the machine config pools before making the change prevents the MCO from reverting the associated nodes to the base image, reducing the number of reboots needed.
 
 When a machine config pool is paused, the `oc get machineconfigpools` reports the following status:
@@ -13,10 +13,9 @@ When a machine config pool is paused, the `oc get machineconfigpools` reports th
 [source,terminal]
 ----
 NAME      CONFIG                                              UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
-layered   rendered-layered-221507009cbcdec0eec8ab3ccd789d18   False     False      False      1              0                   0                     0                      3h23m <1>
 master    rendered-master-a0b404d061a6183cc36d302363422aba    True      False      False      3              3                   3                     0                      4h14m
-worker    rendered-worker-221507009cbcdec0eec8ab3ccd789d18    True      False      False      2              2                   2                     0                      4h14m
+worker    rendered-worker-221507009cbcdec0eec8ab3ccd789d18    False     False      False      2              2                   2                     0                      4h14m <1>
 ----
-<1> The `layered` machine config pool is paused, as indicated by the three `False` statuses and the `READYMACHINECOUNT` at `0`.
+<1> The `worker` machine config pool is paused, as indicated by the three `False` statuses and the `READYMACHINECOUNT` at `0`.
 
 After the changes have been rolled out, you can unpause the machine config pool.
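This commit renames the example pool and `MachineOSConfig` object, so it helps to watch build progress through the build resources rather than pool names. A sketch, assuming the v1alpha1 `MachineOSBuild` API shown in the hunks above and the Machine Config Operator namespace:

[source,terminal]
----
$ oc get machineosbuilds
$ oc get pods -n openshift-machine-config-operator | grep build
----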
From 38f21716b35fe2ddc4d453d04396a5965e6c386f Mon Sep 17 00:00:00 2001
From: William Gabor
Date: Wed, 11 Sep 2024 14:23:00 -0400
Subject: [PATCH 330/669] OSDOCS-11372 Updated the apiVersion in the first
 codeblock under the Configuring the TLS security profile for the kubelet
 topic

---
 modules/tls-profiles-kubelet-configuring.adoc | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/modules/tls-profiles-kubelet-configuring.adoc b/modules/tls-profiles-kubelet-configuring.adoc
index 4a7b7f936bb8..0faf8842799d 100644
--- a/modules/tls-profiles-kubelet-configuring.adoc
+++ b/modules/tls-profiles-kubelet-configuring.adoc
@@ -20,9 +20,9 @@ endif::[]
 .Sample `KubeletConfig` CR that configures the `Old` TLS security profile on worker nodes
 [source,yaml]
 ----
-apiVersion: config.openshift.io/v1
+apiVersion: machineconfiguration.openshift.io/v1
 kind: KubeletConfig
- ...
+# ...
 spec:
   tlsSecurityProfile:
     old: {}
@@ -30,10 +30,10 @@ spec:
   machineConfigPoolSelector:
     matchLabels:
       pools.operator.machineconfiguration.openshift.io/worker: ""
-#...
+# ...
 ----
 
-You can see the ciphers and the minimum TLS version of the configured TLS security profile in the `kubelet.conf` file on a configured node. 
+You can see the ciphers and the minimum TLS version of the configured TLS security profile in the `kubelet.conf` file on a configured node.
 
 .Prerequisites

From 96affedbca9d5681990141014514541eb771fe5f Mon Sep 17 00:00:00 2001
From: Pan Ousley
Date: Mon, 24 Feb 2025 10:53:31 -0500
Subject: [PATCH 331/669] CNV#52365: minor fix to merged doc

---
 virt/install/preparing-cluster-for-virt.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/virt/install/preparing-cluster-for-virt.adoc b/virt/install/preparing-cluster-for-virt.adoc
index aa4267ec20a8..b1e4a1663e08 100644
--- a/virt/install/preparing-cluster-for-virt.adoc
+++ b/virt/install/preparing-cluster-for-virt.adoc
@@ -124,7 +124,7 @@ The following features are available for use on s390x architecture but function
 
 * When xref:../../virt/managing_vms/advanced_vm_management/virt-configuring-default-cpu-model.adoc#virt-configuring-default-cpu-model_virt-configuring-default-cpu-model[configuring the default CPU model], the `spec.defaultCPUModel` value is `"gen15b"` for an {ibm-z-title} cluster.
 
-* When xref:../../virt/vm_networking/virt-hot-plugging-network-interfaces.adoc#virt-hot-plugging-network-interfaces[hot plugging secondary network interfaces], the `virtctl migrate ` command does not migrate the VM. As a workaround, restart the VM by running the following command:
+* When xref:../../virt/vm_networking/virt-hot-plugging-network-interfaces.adoc#virt-hot-unplugging-bridge-network-interface_virt-hot-plugging-network-interfaces[hot unplugging a secondary network interface], the `virtctl migrate ` command does not migrate the VM. As a workaround, restart the VM by running the following command:
 +
 [source,terminal]
 ----
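The hunk above is truncated before the body of its code block, so the restart command itself does not appear in this patch. For orientation only, and as an assumption about the elided content rather than a reproduction of it, restarting a VM with the `virtctl` client generally takes the form:

[source,terminal]
----
$ virtctl restart <vm_name>
----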
From 72823e13e0c5233e27457d9be94d85b2c2c4cae5 Mon Sep 17 00:00:00 2001
From: Michael Burke
Date: Thu, 20 Feb 2025 16:27:17 -0500
Subject: [PATCH 332/669] MCO OCL edits per cheesesashimi

---
 modules/coreos-layering-configuring-on-modifying.adoc | 6 +++---
 modules/coreos-layering-configuring-on.adoc           | 6 +++---
 modules/rhcos-add-extensions.adoc                     | 2 +-
 snippets/coreos-layering-configuring-on-pause.adoc    | 2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/modules/coreos-layering-configuring-on-modifying.adoc b/modules/coreos-layering-configuring-on-modifying.adoc
index e77afc15768f..ea83275db317 100644
--- a/modules/coreos-layering-configuring-on-modifying.adoc
+++ b/modules/coreos-layering-configuring-on-modifying.adoc
@@ -57,9 +57,9 @@ spec:
 ----
 <1> Optional: Modify the Containerfile, for example to add or remove packages.
 <2> Optional: Update the secret needed to pull the base operating system image from the registry.
-<3> Optional: Modify the image registry to push the newly-built custom layered image to.
-<4> Optional: Update the secret needed to push the newly-built custom layered image to the registry.
-<5> Optional: Update the secret needed to pull the newly-built custom layered image from the registry.
+<3> Optional: Modify the image registry to push the newly built custom layered image to.
+<4> Optional: Update the secret needed to push the newly built custom layered image to the registry.
+<5> Optional: Update the secret needed to pull the newly built custom layered image from the registry.
 +
 When you save the changes, the MCO drains, cordons, and reboots the nodes. After the reboot, the node uses the cluster base {op-system-first} image. If your changes modify a secret only, no new build is triggered and no reboot is performed.
diff --git a/modules/coreos-layering-configuring-on.adoc b/modules/coreos-layering-configuring-on.adoc
index a702f7f38982..0c2dbee4de8c 100644
--- a/modules/coreos-layering-configuring-on.adoc
+++ b/modules/coreos-layering-configuring-on.adoc
@@ -129,9 +129,9 @@ spec:
 <4> Specifies the Containerfile to configure the custom layered image.
 <5> Specifies the name of the image builder to use. This must be `PodImageBuilder`.
 <6> Specifies the name of the pull secret that the MCO needs in order to pull the base operating system image from the registry.
-<7> Specifies the image registry to push the newly-built custom layered image to. This can be any registry that your cluster has access to. This example uses the internal {product-title} registry.
-<8> Specifies the name of the push secret that the MCO needs in order to push the newly-built custom layered image to that registry.
-<9> Specifies the secret required by the image registry that the nodes need in order to pull the newly-built custom layered image. This should be a different secret than the one used to push the image to your repository.
+<7> Specifies the image registry to push the newly built custom layered image to. This can be any registry that your cluster has access to. This example uses the internal {product-title} registry.
+<8> Specifies the name of the push secret that the MCO needs in order to push the newly built custom layered image to that registry.
+<9> Specifies the secret required by the image registry that the nodes need in order to pull the newly built custom layered image. This should be a different secret than the one used to push the image to your repository.
 //
 + // https://github.com/openshift/openshift-docs/pull/87486/files has the v1 api versions
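Callouts `<3>` through `<5>` and `<7>` through `<9>` above all revolve around registry secrets for the rebuilt image. A hedged sketch of creating one such secret; the secret name, credentials, and target namespace are placeholder assumptions:

[source,terminal]
----
$ oc create secret docker-registry layering-push-secret \
  --docker-server=image-registry.openshift-image-registry.svc:5000 \
  --docker-username=<user> \
  --docker-password=<password_or_token> \
  -n openshift-machine-config-operator
----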
diff --git a/modules/rhcos-add-extensions.adoc b/modules/rhcos-add-extensions.adoc
index f9f537e10bfd..e787e9978110 100644
--- a/modules/rhcos-add-extensions.adoc
+++ b/modules/rhcos-add-extensions.adoc
@@ -11,7 +11,7 @@
 
 Currently, the following extensions are available:
 
-* **usbguard**: The `usbguard` extension protects {op-system} systems from attacks by intrusive USB devices. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/security_hardening/index#usbguard_protecting-systems-against-intrusive-usb-devices[USBGuard] for details.
+* **usbguard**: The `usbguard` extension protects {op-system} systems from attacks by intrusive USB devices. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/security_hardening/index#usbguard_protecting-systems-against-intrusive-usb-devices[USBGuard] for details.
 
 * **kerberos**: The `kerberos` extension provides a mechanism that allows both users and machines to identify themselves to the network to receive defined, limited access to the areas and services that an administrator has configured. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system-level_authentication_guide/using_kerberos[Using Kerberos] for details, including how to set up a Kerberos client and mount a Kerberized NFS share.
diff --git a/snippets/coreos-layering-configuring-on-pause.adoc b/snippets/coreos-layering-configuring-on-pause.adoc
index 7055c74705de..2a3b3683930e 100644
--- a/snippets/coreos-layering-configuring-on-pause.adoc
+++ b/snippets/coreos-layering-configuring-on-pause.adoc
@@ -5,7 +5,7 @@
 
 :_mod-docs-content-type: SNIPPET
 
-Making certain changes to a `MachineOSConfig` object triggers an automatic rebuild of the associated custom layered image. You can mitigate the effects of the rebuild by pausing the machine config pool where the custom layered image is applied as described in "Pausing the machine config pools." For example, if you want to remove and replace a `MachineOSConfig` object, pausing the machine config pools before making the change prevents the MCO from reverting the associated nodes to the base image, reducing the number of reboots needed.
+Making certain changes to a `MachineOSConfig` object triggers an automatic rebuild of the associated custom layered image. You can mitigate the effects of the rebuild by pausing the machine config pool where the custom layered image is applied as described in "Pausing the machine config pools." While the pools are paused, the MCO does not roll out the newly built image to the nodes after the build is complete. However, the build still runs regardless of whether the pool is paused. For example, if you want to remove and replace a `MachineOSConfig` object, pausing the machine config pools before making the change prevents the MCO from reverting the associated nodes to the base image, reducing the number of reboots needed.
 
 When a machine config pool is paused, the `oc get machineconfigpools` reports the following status:
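The snippet rewritten above leans on pausing and unpausing the machine config pool around `MachineOSConfig` changes. A minimal sketch of both operations with `oc patch`, assuming the `worker` pool used throughout these hunks:

[source,terminal]
----
$ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"paused":true}}'
$ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"paused":false}}'
----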
From 11b4b7770a29385707478067a1c9f8110aad4c2c Mon Sep 17 00:00:00 2001
From: Aidan Reilly <74046732+aireilly@users.noreply.github.com>
Date: Fri, 14 Feb 2025 10:36:50 +0000
Subject: [PATCH 333/669] Telco core RDS 4.18 docs

Michael's feedback

Add Martin's NIC queues note

Adding Tuned CR to configure NIC queues note for AMD per-pod CPUs

final review comments for Core RDS 418

Last minute comments

---
 _topic_maps/_topic_map.yml                    |  26 +-
 ...co-core-rds-metallb-service-separation.png | Bin 0 -> 130145 bytes
 .../openshift-telco-core-rds-networking.png   | Bin 0 -> 83846 bytes
 ...bout-the-telco-core-cluster-use-model.adoc |  23 ++
 ...lco-core-additional-storage-solutions.adoc |  11 +
 modules/telco-core-agent-based-installer.adoc |  33 +++
 modules/telco-core-application-workloads.adoc |  37 +++
 ...-use-model-engineering-considerations.adoc |  48 ++++
 .../telco-core-cluster-network-operator.adoc  |  40 +--
 modules/telco-core-common-baseline-model.adoc |  47 ++++
 ...u-partitioning-and-performance-tuning.adoc |  57 +++++
 ...telco-core-crs-cluster-infrastructure.adoc |  25 ++
 modules/telco-core-crs-networking.adoc        |  48 ++--
 .../telco-core-crs-node-configuration.adoc    |  18 +-
 modules/telco-core-crs-resource-tuning.adoc   |   6 +-
 modules/telco-core-crs-scheduling.adoc        |  16 +-
 modules/telco-core-crs-storage.adoc           |  14 +-
 .../telco-core-disconnected-environment.adoc  |  26 ++
 ...-core-gitops-operator-and-ztp-plugins.adoc |  57 +++++
 ...irmware-and-boot-loader-configuration.adoc |  20 ++
 modules/telco-core-load-balancer.adoc         |  46 ++--
 modules/telco-core-logging.adoc               |  11 +-
 modules/telco-core-monitoring.adoc            |  26 +-
 modules/telco-core-networking.adoc            |  61 +++++
 modules/telco-core-nmstate-operator.adoc      |  23 ++
 modules/telco-core-node-configuration.adoc    |  60 ++---
 .../telco-core-openshift-data-foundation.adoc |  22 ++
 modules/telco-core-power-management.adoc      |   9 +-
 ...ds-product-version-use-model-overview.adoc |  11 +
 ...e-red-hat-advanced-cluster-management.adoc |  33 +++
 modules/telco-core-scalability.adoc           |   7 +-
 modules/telco-core-scheduling.adoc            |  30 ++-
 modules/telco-core-security.adoc              |  62 ++++-
 modules/telco-core-service-mesh.adoc          |  10 +-
 modules/telco-core-signaling-workloads.adoc   |  11 +
 modules/telco-core-software-stack.adoc        |  22 +-
 modules/telco-core-sr-iov.adoc                |  39 +++
 modules/telco-core-storage.adoc               |  28 +--
 ...core-topology-aware-lifecycle-manager.adoc |  26 ++
 ...erstanding-the-cluster-compare-plugin.adoc |   2 +-
 .../using-the-cluster-compare-plugin.adoc     |   2 +-
 scalability_and_performance/index.adoc        |   2 +-
 .../_attributes                               |   0
 .../images                                    |   0
 .../modules                                   |   0
 .../snippets                                  |   0
 .../telco-core-rds.adoc                       | 235 ++++++++++++++++++
 .../core/telco-core-rds-overview.adoc         |  14 --
 .../core/telco-core-rds-use-cases.adoc        |  33 ---
 .../core/telco-core-ref-crs.adoc              |  48 ----
 .../telco-core-ref-design-components.adoc     | 182 --------------
 .../telco-core-ref-software-artifacts.adoc    |  11 -
 52 files changed, 1119 insertions(+), 499 deletions(-)
 create mode 100644 images/openshift-telco-core-rds-metallb-service-separation.png
 create mode 100644 images/openshift-telco-core-rds-networking.png
 create mode 100644 modules/telco-core-about-the-telco-core-cluster-use-model.adoc
 create mode 100644 modules/telco-core-additional-storage-solutions.adoc
 create mode 100644 modules/telco-core-agent-based-installer.adoc
 create mode 100644 modules/telco-core-application-workloads.adoc
 create mode 100644 modules/telco-core-cluster-common-use-model-engineering-considerations.adoc
 create mode 100644 modules/telco-core-common-baseline-model.adoc
 create mode 100644 modules/telco-core-cpu-partitioning-and-performance-tuning.adoc
 create mode 100644 modules/telco-core-crs-cluster-infrastructure.adoc
 create mode 100644 modules/telco-core-disconnected-environment.adoc
 create mode 100644 modules/telco-core-gitops-operator-and-ztp-plugins.adoc
 create mode 100644 modules/telco-core-host-firmware-and-boot-loader-configuration.adoc
 create mode 100644 modules/telco-core-networking.adoc
 create mode 100644 modules/telco-core-nmstate-operator.adoc
 create mode 100644 modules/telco-core-openshift-data-foundation.adoc
 create mode 100644 modules/telco-core-rds-product-version-use-model-overview.adoc
 create mode 100644 modules/telco-core-red-hat-advanced-cluster-management.adoc
 create mode 100644 modules/telco-core-signaling-workloads.adoc
 create mode 100644 modules/telco-core-sr-iov.adoc
 create mode 100644 modules/telco-core-topology-aware-lifecycle-manager.adoc
 rename scalability_and_performance/{telco_ref_design_specs/core => telco_core_ref_design_specs}/_attributes (100%)
 rename scalability_and_performance/{telco_ref_design_specs/core => telco_core_ref_design_specs}/images (100%)
 rename scalability_and_performance/{telco_ref_design_specs/core => telco_core_ref_design_specs}/modules (100%)
 rename scalability_and_performance/{telco_ref_design_specs/core => telco_core_ref_design_specs}/snippets (100%)
 create mode 100644 scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc
 delete mode 100644 scalability_and_performance/telco_ref_design_specs/core/telco-core-rds-overview.adoc
 delete mode 100644 scalability_and_performance/telco_ref_design_specs/core/telco-core-rds-use-cases.adoc
 delete mode 100644 scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-crs.adoc
 delete mode 100644 scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-design-components.adoc
 delete mode 100644 scalability_and_performance/telco_ref_design_specs/core/telco-core-ref-software-artifacts.adoc

diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml
index d329292425e2..07d10927cbc2 100644
--- a/_topic_maps/_topic_map.yml
+++ b/_topic_maps/_topic_map.yml
@@ -3279,30 +3279,16 @@ Topics:
     File: recommended-infrastructure-practices
   - Name: Recommended etcd practices
     File: recommended-etcd-practices
+- Name: Telco core reference design
+  Dir: telco_core_ref_design_specs
+  Topics:
+  - Name: Telco core reference design specification
+    File: telco-core-rds
 - Name: Telco RAN DU reference design
   Dir: telco_ran_du_ref_design_specs
   Topics:
-  - Name: Telco RAN DU RDS
+  - Name: Telco RAN DU reference design specification
     File: telco-ran-du-rds
-- Name: Reference design specifications
-  Dir: telco_ref_design_specs
-  Distros: openshift-origin,openshift-enterprise
-  Topics:
-  - Name: Telco reference design specifications
-    File: telco-ref-design-specs-overview
-  - Name: Telco core reference design specification
-    Dir: core
-    Topics:
-    - Name: Telco core reference design overview
-      File: telco-core-rds-overview
-    - Name: Telco core use model overview
-      File: telco-core-rds-use-cases
-    - Name: Core reference design components
-      File: telco-core-ref-design-components
-    - Name: Core reference design configuration CRs
-      File: telco-core-ref-crs
-    - Name: Telco core software specifications
-      File: telco-core-ref-software-artifacts
 - Name: Comparing cluster configurations
   Dir: cluster-compare
   Distros: openshift-origin,openshift-enterprise
   Topics:
   - Name: Understanding the cluster-compare plugin
     File: understanding-the-cluster-compare-plugin
   - Name: Using the cluster-compare plugin
     File: using-the-cluster-compare-plugin
diff --git a/images/openshift-telco-core-rds-metallb-service-separation.png b/images/openshift-telco-core-rds-metallb-service-separation.png
new file mode 100644
index 0000000000000000000000000000000000000000..cc7aa2bc38a8264089862fd217cfb9d0ed588ae9
GIT binary patch
literal 130145
[base85-encoded binary data for images/openshift-telco-core-rds-metallb-service-separation.png omitted; the patch stream is truncated partway through this blob]
zuwnsX(h><%VPQ!I10xy}hiQ23E!AZYn$={(pCcL6XCU{iqx|-N9gyP}I2ijX;oIB( zS+5>>Pm0Z_IN;o@7CI_PeVg=15aV2+1?R)arq{3J{8MwDv?b}MXqN{E5&bsRJhfDx zIrcHKXmWE~E9d$-N3xoW%1b9 zl4=#vGZMwZ5+)K-TQ9w?xvk?nFu%Cwuru8&ty>B?A$@-{e6515$^@QSci}WyNXRj9 zw(t4*w;^Poxx4f5#@Fa3S_~0&x_f78d|_#6eyTy^N0V;JH*enf26_etzC59v+J=U> zodq_O>LdCsS#%Cd3l>g^{jIso%4vp|uUvUr$U8GLGdB!(8;Pw8h>hzENsoMml}eYkNLla%HrM-o6!I(qV6We4KF!1z^=wfNqIrn^=L39v^xWLs`wko^X5cqJE*Zp! zlaUD*!z!vj%QUyNFrm73=FXiv;|A)ce2l1IJ%s79{PykJ5ap|+tp7}*Rc;ywXf8fBoD@S;F6+@&}KSYU- z+%3bDPbBu3X4n+GlVba9#klpgldq_+7|S-eEmzg!)EJn4Q^eMej*w1ZLm3$ zqQJkvdxVrOUfr>{@%jPtW+6t&AYd*YPS3edc_?{zhMHtXwaCv7J6Kk3z# z?GsV=K3iT|dKpt$t;_XEDbD_eY?SOeKk=u}o~>C};=fe7sxf!VzX;aU&~QxAtNYtg z=s7$&dF9rvw-td{^*Hn_0TLyCbD_)hqqod>P^$@|sbCqR5{JS4K zcG^>k}#9*vil|`oc^4eKet%g z*c?1~@LSv8m0P#gFiY<^W-xI2))$(dTSyNlve=mo_qttiar(~jbfRk)Papx&c6a7WA&diy2kQLH|3RR_!>YGoU&hOMy*Qnh1rlwC&^s=?HlW56XBL@(azM-Y3w;oQknDd3|o_lBf`9z|( z*Q}P_0mIr=sd-bYih{CoYx3jZXV3gYP*-}3x{~)M$H=_f|FsTWPQuHu6k$c_XqwsV zyqP&wLB+KcfIk9=qdyl>)%#W^%F^NhqP0S`e)`!n-3~Q@X0n77f3_D2iFZ~7J1kmd z;mF)IQPIR1 z@z1bP_cAh;rtBr7kak5Gydd(g_tD9|gxtRnJ}IVqxc>I^thumdywHS|HDX;iyJFJH zO`A4N>``sM-&cDg3x4q7AN>OZ6Rm)?`i6#D4NZbg8+VHpKgWbbilm$Md52%TD=Ty3 z#tBAYS1T*4+vskt&$QsMwY442U0Nuw+|P9c#C5N+iOFj}KeV=3K4z#P&hU># zj|shOW2F%8%YORLYBA^AyO0(`HPmgzPQ0ss8@^CWLd4hN17{Ch5d4$|=+A^USVON%zi1=q=yC;l^x1G=Be zLWJB=ufL6zz?R}t>jtnP9bQ#ZqQ9e|AyM`_$BrL2pQu;I4vMkIm07@gM`2;%HA%?= zR7qV6xac=RD)D!8b={ep9N=@_2#Ecsm z9v?r$%l_g3i34mn1w}=_z%VJcw{kHRA0!w&+m+&W9XWCYF!Bhtv)+8EqMY2W0|yU= z%EdgY6pF!duZ$~tUUAZkHaAR=o34aCN=Y$Gsr_~L%$CcEl^SPQKy+&4tCx@s@B)i()t5>QUJmC7(Fz;f!i4wUC9h0tsOoVic*jpitx9=L(f`` zzAxm0_r{n7H(lZ5tw+CGFPebfYQGB1u^Rhux&5AQyn1f@QqISZTY7tYcex_nJc|sy z^>4@e#~qW8fBW_=4bA1a1I>G!_!e}nSGj>r3GX!y+g*9Zv-i8+&ntlsx=VLb^PE@iE_ zvqcW-6?=mubz3bci!*cbdIzeqera&Wq?z>aJvQ9#&c{f>fM zbu#!5C!-`0i;}GLlPoP3_c&wUb88C}bX?dvoM>oN%k|v<;ALrAxyAVxi;3wT;akJF z^Ac`eO^&_1vAA>GbGoNH-z;fzd-mZwzI`TUN#-|Rdwa|EYMAg@8?bIttyk0Ot7E;e zW%!cMk)Dp0AC;lV-_i%aknL2HIeOFJw^LfF{RXep@BMn)xUL@f&KJXXcx>zCtrZm& zXDx^KHKgb{!$JR=pPyfO`}HbF=heZ%!Pg`toRX6{ne3-;i#b<6d``1?&-LJo1NM^B z{?vUPz3p(#^JkGQ&u2stD%sXM4Gj&m zUeSEX%UcG{cvVJbTUK`VZ5WmM)yml^M zu|hAq3{RT)Xxb|S>>eL%W`MBZoK{UvHhvhR8lc4dLx$z?%HK{)sXtzO&n*zBie5t~ zo$QeH>?Yx>X-wBjMV@#XxbX2_VYpfq5WQ7;;&Xj!L98BEXMT~m&_;9FSv9&!%b3R<=c59BEcyvGo=sI=- z4GpVg;1>||w(jl-X67v?x2Y-%-N{>{cUAh$qbc{&tUfxmez4+q(bSB1{A)$SPR3^I z>ff_+GJSl?0g=3~*LeI}%azN}Bw7GCJHaTL4G43TJ?bPkS58}xh4kJOCGx<;q{_ia zOKX+9yu3?+hT-FVr!0@Z;B(0r#yq>O{~ts)MYm=hb_3Lo5VFEan0@pKojpB~OD%bM zpqI zH#fJ0XhBM1Jes?E8M(e08ay@%3NC%3>{z*M+w+d#gi!yjR*9b_Pmz6zuhL-IqxtCY zk?#$B^=iwLCr@P08iM`;7;k1=0iFL9aXp0ohe+2Adq?%d%wQPaD(ud=xw+xr_MKj@ zwuwZ~JMH4)VzCf~vB^np-iUrH^uc=ygb&#V`S{3(>wQj7zrgMjaLD{XG-Ierc*Yu% zC!)m$2M4nrib&8N!)5}M{s!Cl29x47k{lca-#|`{f(;PGp^&eK=k8kc)o#R(*x1>x zAe1EH6Yu{KCZt+5)0}J0X5Tyi+u?j$r56wZ8kxIwPnUSh*7cg6Tf1LZn`PVc3g@jD zWWnLHXZPggR`Y zw+##!B!0%~KQJ*iek>W$emK1PJvCumx+GCc-POx$9Jvh|d~NMf_xWr)bJ~n*z|9IRlonKc`iTB?L(7} z3cC-{Paxa$k)vywUJD@hC{H{+w|i2mJBGKf#_(^ladGYR^YioG6m~`F)7@pv%*

- OpenShift docs are moving and will soon only be available at docs.redhat.com, the home of all Red Hat product documentation. Explore the new docs experience today.
+ Starting on March 12, 2025, OpenShift docs will only be available at docs.redhat.com. From that time on, docs.openshift.com links will automatically redirect to their locations on docs.redhat.com.
+