Commit 20f5c85

docs(mm2): updates to mm2 config content and structure to improve usability and clarity (#11695)
Signed-off-by: prmellor <pmellor@redhat.com>
1 parent b3adde8 commit 20f5c85

20 files changed: +700 additions, -541 deletions

documentation/assemblies/configuring/assembly-config.adoc

Lines changed: 40 additions & 68 deletions
@@ -8,7 +8,6 @@
 [role="_abstract"]
 Configure and manage a Strimzi deployment to your precise needs using Strimzi custom resources.
 Strimzi provides example custom resources with each release, allowing you to configure and create instances of supported Kafka components.
-Fine-tune your deployment by configuring custom resources to include additional features according to your specific requirements.
 
 Use custom resources to configure and create instances of the following components:
 
@@ -23,50 +22,22 @@ New features are sometimes introduced through feature gates, which are controlle
 
 The link:{BookURLConfiguring}[Strimzi Custom Resource API Reference^] describes the properties you can use in your configuration.
 
-.Important Kafka configuration options
-Through configuration of the `Kafka` resource, you can introduce the following:
+.Centralizing configuration
 
-* Data storage
-* Rack awareness
-* Listeners for authenticated client access to the Kafka cluster
-* Topic Operator for managing Kafka topics
-* User Operator for managing Kafka users (clients)
-* Cruise Control for cluster rebalancing
-* Kafka Exporter for collecting lag metrics
+For key configuration areas, such as metrics, logging, and external Kafka Connect connector settings, you can centralize management as follows:
 
-Use `KafkaNodePool` resources to configure distinct groups of nodes within a Kafka cluster.
+* xref:configuration-points-config-maps-str[Using `ConfigMap` resources to incorporate configuration].
+* xref:assembly-loading-config-with-providers-str[Using configuration providers to load configuration from external sources].
 
-.Common configuration
-Common configuration is configured independently for each component, such as the following:
-
-* Bootstrap servers for host/port connection to a Kafka cluster
-* Metrics configuration
-* Healthchecks and liveness probes
-* Resource limits and requests (CPU/Memory)
-* Logging frequency
-* JVM options for maximum and minimum memory allocation
-* Adding additional volumes and volume mounts
-
-.Config maps to centralize configuration
-For specific areas of configuration, namely metrics, logging, and external configuration for Kafka Connect connectors, you can also use `ConfigMap` resources.
-By using a `ConfigMap` resource to incorporate configuration, you centralize maintenance.
-You can also use configuration providers to load configuration from external sources, which we recommend for supplying the credentials for Kafka Connect connector configuration.
+We recommend configuration providers for securely supplying Kafka Connect connector credentials.
 
 .TLS certificate management
+
 When deploying Kafka, the Cluster Operator automatically sets up and renews TLS certificates to enable encryption and authentication within your cluster.
 If required, you can manually renew the cluster and clients CA certificates before their renewal period starts.
 You can also replace the keys used by the cluster and clients CA certificates.
-For more information, see xref:proc-renewing-ca-certs-manually-{context}[Renewing CA certificates manually] and xref:proc-replacing-private-keys-{context}[Replacing private keys].
 
-.Applying changes to a custom resource configuration file
-You add configuration to a custom resource using `spec` properties.
-After adding the configuration, you can use `kubectl` to apply the changes to a custom resource configuration file:
-
-.Applying changes to a resource configuration file
-[source,shell,subs=+quotes]
-----
-kubectl apply -f <kafka_configuration_file>
-----
+For more information, see xref:proc-renewing-ca-certs-manually-{context}[Renewing CA certificates manually] and xref:proc-replacing-private-keys-{context}[Replacing private keys].
 
 NOTE: Labels applied to a custom resource are also applied to the Kubernetes resources making up its cluster.
 This provides a convenient mechanism for resources to be labeled as required.
@@ -135,54 +106,55 @@ include::../../modules/configuring/proc-altering-connector-offsets.adoc[leveloff
 //procedure to reset offsets
 include::../../modules/configuring/proc-resetting-connector-offsets.adoc[leveloffset=+2]
 
-//`KafkaMirrorMaker2` resource config
+//`KafkaMirrorMaker2` core resource config
 include::../../modules/configuring/con-config-mirrormaker2.adoc[leveloffset=+1]
-//configuring replication modes
-include::../../modules/overview/con-overview-mirrormaker2.adoc[leveloffset=+2]
-//running multiple MM2 instances
-include::../../modules/configuring/con-config-mm2-multiple-instances.adoc[leveloffset=+1]
+//MirrorMaker securing connections
+include::../../modules/configuring/proc-config-mirrormaker2-securing-connection.adoc[leveloffset=+2]
+//MirrorMaker naming
+include::../../modules/configuring/con-config-mirrormaker2-topic-names.adoc[leveloffset=+2]
+//MirrorMaker sync offset config
+include::../../modules/configuring/con-config-mirrormaker2-sync.adoc[leveloffset=+2]
+//MirrorMaker sync acls config
+include::../../modules/configuring/con-config-mirrormaker2-sync-acls.adoc[leveloffset=+2]
+//MirrorMaker filters
+include::../../modules/configuring/con-config-mirrormaker2-connect-workers.adoc[leveloffset=+2]
+//Running multiple instances
+include::../../modules/configuring/con-config-mm2-multiple-instances.adoc[leveloffset=+2]
+//Disaster recovery
+include::../../modules/configuring/con-mm2-recovery.adoc[leveloffset=+2]
+
 //configuring MM2 connectors
-include::../../modules/configuring/con-config-mirrormaker2-connectors.adoc[leveloffset=+2]
-//configuring MM2 connector producers and consumers
-include::../../modules/configuring/con-config-mirrormaker2-producers-consumers.adoc[leveloffset=+2]
+include::../../modules/configuring/con-config-mirrormaker2-connectors.adoc[leveloffset=+1]
+//Using the heartbeat connector to verify replication
+include::../../modules/configuring/con-config-mirrormaker2-heartbeat.adoc[leveloffset=+2]
 //increasing the number of tasks
 include::../../modules/configuring/con-config-mirrormaker2-tasks-max.adoc[leveloffset=+2]
-//handling of ACLs in replication
-include::../../modules/configuring/con-config-mirrormaker2-acls.adoc[leveloffset=+2]
-//securing connections to and from mirrormaker
-include::../../modules/configuring/proc-config-mirrormaker2-securing-connection.adoc[leveloffset=+2]
 //Procedure to manually pause or stop an MM2 connector
 include::../../modules/configuring/proc-manual-stop-pause-mirrormaker2-connector.adoc[leveloffset=+2]
 //Procedure to restart an MM2 connector
 include::../../modules/configuring/proc-manual-restart-mirrormaker2-connector.adoc[leveloffset=+2]
 //Procedure to restart an MM2 connector task
 include::../../modules/configuring/proc-manual-restart-mirrormaker2-connector-task.adoc[leveloffset=+2]
-//Disaster recovery
-include::../../modules/configuring/con-mm2-recovery.adoc[leveloffset=+2]
+//configuring MM2 connector producers and consumers
+include::../../modules/configuring/con-config-mirrormaker2-producers-consumers.adoc[leveloffset=+2]
 
 //`KafkaBridge` resource config
 include::../../modules/configuring/con-config-kafka-bridge.adoc[leveloffset=+1]
 
-//configuring CPU and memory resources and limits
-include::../../modules/configuring/con-config-resources.adoc[leveloffset=+1]
-
-//Kubernetes labels
-include::../../modules/configuring/ref-kubernetes-labels.adoc[leveloffset=+1]
-
+//common config examples
+include::../../modules/configuring/con-common-configuration.adoc[leveloffset=+1]
+//configuring log levels
+include::assembly-logging-configuration.adoc[leveloffset=+2]
 //scheduling separate Kafka pods
-include::assembly-scheduling.adoc[leveloffset=+1]
-
+include::assembly-scheduling.adoc[leveloffset=+2]
 //disabling pod disruption budgets
-include::../../modules/configuring/proc-disable-pod-disruption-budget-generation.adoc[leveloffset=+1]
-
-//configuring log levels
-include::assembly-logging-configuration.adoc[leveloffset=+1]
-
+include::../../modules/configuring/proc-disable-pod-disruption-budget-generation.adoc[leveloffset=+2]
 //loading configuration from configmaps for certain types of data
-include::../../modules/configuring/con-configuration-points-configmaps.adoc[leveloffset=+1]
-
+include::../../modules/configuring/con-configuration-points-configmaps.adoc[leveloffset=+2]
 //loading configuration from external sources for all Kafka components
-include::assembly-external-config.adoc[leveloffset=+1]
-
+include::assembly-external-config.adoc[leveloffset=+2]
 //customizing Kubernetes resources like Deployment etc
-include::assembly-customizing-kubernetes-resources.adoc[leveloffset=+1]
+include::assembly-customizing-kubernetes-resources.adoc[leveloffset=+2]
+
+//Kubernetes labels
+include::../../modules/configuring/ref-kubernetes-labels.adoc[leveloffset=+1]
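The replacement text in the first hunk recommends configuration providers for securely supplying Kafka Connect connector credentials. As a minimal sketch of what that looks like in a `KafkaConnect` resource, assuming Strimzi's Kubernetes Secret config provider (the `io.strimzi.kafka.KubernetesSecretConfigProvider` class name comes from the Strimzi config-provider project and should be checked against your version; it is not part of this commit):

```yaml
# Hypothetical fragment of a KafkaConnect resource (not part of this commit)
spec:
  config:
    # Register a provider under the alias "secrets" so connector configs
    # can reference values stored in Kubernetes Secrets instead of plain text
    config.providers: secrets
    config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider
```

A connector configuration can then reference a credential indirectly, for example `${secrets:my-namespace/my-secret:password}`, keeping the value out of the custom resource itself.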

documentation/assemblies/configuring/assembly-logging-configuration.adoc

Lines changed: 2 additions & 3 deletions
@@ -11,6 +11,7 @@ WARNING: Strimzi operators and Kafka components use log4j2 for logging.
 However, Kafka 3.9 and earlier versions rely on log4j1.
 For log4j1-based configuration examples, refer to the link:{DocArchive}[Strimzi 0.45 documentation^].
 
+You can set log levels to `INFO`, `ERROR`, `WARN`, `TRACE`, `DEBUG`, `FATAL` or `OFF`.
 Configure the logging levels of Kafka components and Strimzi operators through their custom resources.
 You can use either of these options:
 
@@ -85,6 +86,4 @@ include::../../modules/configuring/proc-creating-configmap.adoc[leveloffset=+1]
 //cluster operator logging config
 include::../../modules/operators/ref-operator-cluster-logging-configmap.adoc[leveloffset=+1]
 //adding logging filters to operators
-include::../../modules/configuring/proc-creating-logging-filters.adoc[leveloffset=+1]
-//warnings on locks for cluster operations
-include::../../modules/configuring/con-failed-lock-warnings.adoc[leveloffset=+1]
+include::../../modules/configuring/proc-creating-logging-filters.adoc[leveloffset=+1]
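The line added in the first hunk above lists the selectable log levels. As a sketch of how one of those levels is applied, assuming Strimzi's inline logging type and the Kafka root logger key (logger names vary by component and version; this fragment is illustrative, not part of the commit):

```yaml
# Hypothetical fragment of a Kafka custom resource (names assumed)
spec:
  kafka:
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: DEBUG
```

The alternative option mentioned in the assembly is `type: external`, which points the component at logging configuration held in a `ConfigMap`.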

documentation/assemblies/deploying/assembly-deploy-intro-operators.adoc

Lines changed: 3 additions & 1 deletion
@@ -58,4 +58,6 @@ include::../../modules/operators/con-operators-namespaces.adoc[leveloffset=+1]
 //cluster operator's management of rbac resources
 include::../../modules/operators/ref-operator-cluster-rbac-resources.adoc[leveloffset=+1]
 //cluster operator's management of pod resources
-include::../../modules/configuring/con-pod-management.adoc[leveloffset=+1]
+include::../../modules/configuring/con-pod-management.adoc[leveloffset=+1]
+//warnings on locks for cluster operations
+include::../../modules/configuring/con-failed-lock-warnings.adoc[leveloffset=+1]
Lines changed: 177 additions & 0 deletions
@@ -0,0 +1,177 @@
+// This assembly is included in the following assemblies:
+//
+// assembly-config.adoc
+
+[id='con-common-config-{context}']
+= Applying optional common configuration
+You can further configure Strimzi components by applying any of the following optional common configuration settings.
+Common configuration is configured independently for each component, such as the following:
+
+* Resource limits and requests (Recommended)
+* Metrics configuration
+* Liveness and readiness probes
+* JVM options for maximum and minimum memory allocation
+* Adding additional volumes and volume mounts
+* Template configuration for pods and containers
+* Logging frequency
+
+Advanced or specialized options include:
+
+* Custom container images
+* Rack awareness
+* Distributed tracing
+
+Configure common options for Strimzi custom resources in the `.spec` section of the custom resource.
+For more information on these configuration options, refer to link:{BookURLConfiguring}[Common configuration properties^].
+
+== Resource limits and requests (recommended)
+
+To ensure stability and optimal performance for your Kafka clusters, we recommend defining CPU and memory resource limits and requests for all Strimzi containers.
+By default, the Strimzi Cluster Operator does not set these values, but fine-tuning them based on your workload requirements helps performance and improves reliability.
+
+.Example resource configuration
+[source,yaml]
+----
+# ...
+spec:
+  resources:
+    requests:
+      cpu: "1"
+      memory: 2Gi
+    limits:
+      cpu: "2"
+      memory: 2Gi
+# ...
+----
+
+== Metrics configuration
+
+Enable metrics collection for monitoring.
+
+.Example metrics configuration
+[source,yaml]
+----
+# ...
+spec:
+  metricsConfig:
+    type: jmxPrometheusExporter
+    valueFrom:
+      configMapKeyRef:
+        name: my-metrics-config
+        key: kafka-metrics-config.yml
+# ...
+----
+
+Configuration varies depending on the component and exporter used: Prometheus JMX Exporter or Strimzi Metrics Reporter.
+For more information, see xref:assembly-metrics-str[Introducing metrics].
+
+== Liveness and readiness probes
+
+Configure health checks for the container.
+
+.Example liveness and readiness probes
+[source,yaml]
+----
+# ...
+spec:
+  livenessProbe:
+    initialDelaySeconds: 15
+    timeoutSeconds: 5
+  readinessProbe:
+    initialDelaySeconds: 10
+    timeoutSeconds: 5
+# ...
+----
+
+== JVM options
+
+Configure the Java Virtual Machine (JVM) for the component.
+To enable garbage collector (GC) logging, set `gcLoggingEnabled` to `true`.
+
+.Example JVM options
+[source,yaml]
+----
+# ...
+spec:
+  jvmOptions:
+    -Xms: "512m"
+    -Xmx: "1g"
+    gcLoggingEnabled: true
+# ...
+----
+
+== Additional volumes and mounts
+
+Add extra volumes to the container and mount them in specific locations.
+
+.Example additional volumes
+[source,yaml]
+----
+# ...
+spec:
+  kafka:
+    template:
+      pod:
+        volumes:
+          - name: example-secret
+            secret:
+              secretName: secret-name
+          - name: example-configmap
+            configMap:
+              name: config-map-name
+      kafkaContainer:
+        volumeMounts:
+          - name: example-secret
+            mountPath: /mnt/secret-volume
+          - name: example-configmap
+            mountPath: /mnt/cm-volume
+# ...
+----
+
+NOTE: You can use `template` configuration to add other customizations to pods and containers, such as affinity and security context.
+For more information, see xref:assembly-scheduling-str[Configuring pod scheduling] and xref:assembly-security-providers-str[Applying security context to Strimzi pods and containers].
+
+== Custom container image
+
+Override the default container image.
+*Use only in special situations.*
+
+.Example custom image
+[source,yaml]
+----
+# ...
+spec:
+  image: my-org/custom-kafka-image:latest
+# ...
+----
+
+== Rack awareness
+
+Enable rack-aware broker assignment to improve fault tolerance.
+*This is a specialized option intended for a deployment within the same location, not across regions.*
+
+.Example rack awareness configuration
+[source,yaml]
+----
+# ...
+spec:
+  rack:
+    topologyKey: topology.kubernetes.io/zone
+# ...
+----
+
+== Distributed tracing configuration
+
+Enable distributed tracing using OpenTelemetry to monitor Kafka component operations.
+
+.Example tracing configuration
+[source,yaml]
+----
+# ...
+spec:
+  tracing:
+    type: opentelemetry
+# ...
+----
+
+For more information see xref:assembly-distributed-tracing-str[Introducing distributed tracing].
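The new module presents each common option in isolation. As a sketch of how several of those fragments might combine in a single `Kafka` resource, assuming standard Strimzi boilerplate for the `apiVersion`, `kind`, and metadata (the wrapper is illustrative and not part of this commit; the `spec` fragments are taken from the examples above):

```yaml
# Assumed wrapper around fragments from the new module (illustrative only)
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # Recommended: CPU/memory requests and limits
    resources:
      requests:
        cpu: "1"
        memory: 2Gi
      limits:
        cpu: "2"
        memory: 2Gi
    # JVM heap sizing with GC logging enabled
    jvmOptions:
      -Xms: "512m"
      -Xmx: "1g"
      gcLoggingEnabled: true
    # Specialized: rack-aware broker assignment within one location
    rack:
      topologyKey: topology.kubernetes.io/zone
```

As the module notes, each option is configured independently per component, so the same `spec` properties apply analogously to other resources such as `KafkaConnect` or `KafkaBridge`.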
