Commit 693cdce

im-konge, Frawless, and see-quick authored
[ST] Change DynamicConfSharedST - remove randomness and pick few dynamic configurations to be tested (#12103)
Signed-off-by: Lukas Kral <lukywill16@gmail.com>
Signed-off-by: Lukáš Král <53821852+im-konge@users.noreply.github.com>
Co-authored-by: Jakub Stejskal <xstejs24@gmail.com>
Co-authored-by: Maros Orsak <maros.orsak159@gmail.com>
1 parent 5e4ea3e commit 693cdce

4 files changed, 109 additions and 247 deletions

File tree

development-docs/systemtests/io.strimzi.systemtest.kafka.dynamicconfiguration.DynamicConfSharedST.md

Lines changed: 7 additions & 4 deletions
@@ -17,16 +17,19 @@
 
 <hr style="border:1px solid">
 
-## testDynConfiguration
+## testDynamicConfiguration
 
-**Description:** This test dynamically selects and applies three Kafka dynamic configuration properties to verify that the changes do not trigger a rolling update in the Kafka cluster. It applies the configurations, waits for stability, and then verifies that the new configuration is applied both to the CustomResource (CR) and the running Kafka pods.
+**Description:** Parametrized test taking 3 pre-defined per-broker and 3 pre-defined cluster-wide configurations that are tested to verify that dynamic configuration works. For each configuration (and its value), it goes through the following steps:
+1. Apply the configuration.
+2. Wait for stability of the cluster - no Pods will be rolled.
+3. Verify that the configuration is correctly set in the CR and on all Pods or cluster-wide (based on scope).
 
 **Steps:**
 
 | Step | Action | Result |
 | - | - | - |
-| 1. | Randomly choose three configuration properties for dynamic update. | Three configurations are selected without duplication. |
-| 2. | Apply the chosen configuration properties to the Kafka CustomResource. | The configurations are applied successfully without triggering a rolling update. |
+| 1. | Update configuration (with value) in Kafka. | Configuration is successfully updated. |
+| 2. | For one minute, periodically check that there is no rolling update of Kafka Pods. | No Kafka Pods will be rolled. |
 | 3. | Verify the applied configuration on both the Kafka CustomResource and the Kafka pods. | The applied configurations are correctly reflected in the Kafka CustomResource and the Kafka pods. |
 
 **Labels:**
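The verification in step 3 ultimately boils down to checking that the broker's effective configuration (as printed by the describe CLI) contains the expected `name=value` pair. A minimal, self-contained sketch of that check, with a hypothetical class name and an illustrative output line (not the actual Strimzi test code):

```java
// Hypothetical sketch (not Strimzi code): the substring check performed against
// the output of a broker-config describe command. The sample output line below
// is illustrative, not captured from a real cluster.
public class DynamicConfigCheck {

    // Returns true when the describe output contains the expected name=value pair.
    public static boolean containsConfig(String describeOutput, String name, String value) {
        return describeOutput.contains(name + "=" + value);
    }

    public static void main(String[] args) {
        String output = "log.cleaner.threads=2 sensitive=false"
            + " synonyms={DYNAMIC_BROKER_CONFIG:log.cleaner.threads=2}";
        System.out.println(containsConfig(output, "log.cleaner.threads", "2"));   // prints "true"
        System.out.println(containsConfig(output, "log.retention.ms", "60000")); // prints "false"
    }
}
```

For the cluster-wide scope this check runs once against the broker defaults; for the per-broker scope it runs once per broker Pod.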

development-docs/systemtests/labels/kafka.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ These tests are crucial to ensure that Kafka clusters can handle production work
 - [testCustomSoloCertificatesForNodePort](../io.strimzi.systemtest.kafka.listeners.ListenersST.md)
 - [testCustomSoloCertificatesForRoute](../io.strimzi.systemtest.kafka.listeners.ListenersST.md)
 - [testDeployUnsupportedKafka](../io.strimzi.systemtest.kafka.KafkaST.md)
-- [testDynConfiguration](../io.strimzi.systemtest.kafka.dynamicconfiguration.DynamicConfSharedST.md)
+- [testDynamicConfiguration](../io.strimzi.systemtest.kafka.dynamicconfiguration.DynamicConfSharedST.md)
 - [testDynamicallyAndNonDynamicSetConnectLoggingLevels](../io.strimzi.systemtest.log.LoggingChangeST.md)
 - [testDynamicallySetBridgeLoggingLevels](../io.strimzi.systemtest.log.LoggingChangeST.md)
 - [testDynamicallySetClusterOperatorLoggingLevels](../io.strimzi.systemtest.log.LoggingChangeST.md)

systemtest/src/main/java/io/strimzi/systemtest/utils/kafkaUtils/KafkaUtils.java

Lines changed: 40 additions & 118 deletions
@@ -23,8 +23,6 @@
 import io.strimzi.api.kafka.model.kafka.cruisecontrol.KafkaAutoRebalanceStatusBrokers;
 import io.strimzi.api.kafka.model.kafka.listener.GenericKafkaListener;
 import io.strimzi.api.kafka.model.kafka.listener.ListenerStatus;
-import io.strimzi.kafka.config.model.ConfigModel;
-import io.strimzi.kafka.config.model.ConfigModels;
 import io.strimzi.kafka.config.model.Scope;
 import io.strimzi.operator.common.Util;
 import io.strimzi.systemtest.TestConstants;
@@ -44,21 +42,15 @@
 import org.hamcrest.CoreMatchers;
 
 import java.io.File;
-import java.io.FileInputStream;
 import java.io.IOException;
-import java.io.InputStream;
 import java.nio.charset.Charset;
 import java.time.Duration;
-import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
 import java.util.function.Consumer;
 import java.util.function.Supplier;
-import java.util.stream.Collectors;
 
-import static io.strimzi.api.kafka.model.kafka.KafkaClusterSpec.FORBIDDEN_PREFIXES;
-import static io.strimzi.api.kafka.model.kafka.KafkaClusterSpec.FORBIDDEN_PREFIX_EXCEPTIONS;
 import static io.strimzi.systemtest.enums.CustomResourceStatus.NotReady;
 import static io.strimzi.systemtest.enums.CustomResourceStatus.Ready;
 import static io.strimzi.systemtest.resources.types.KafkaType.kafkaClient;
@@ -263,134 +255,64 @@ public synchronized static boolean verifyCrDynamicConfiguration(final String nam
 
     /**
      * Verifies that updated configuration was successfully changed inside Kafka pods
-     * @param namespaceName name of the namespace
-     * @param kafkaPodNamePrefix prefix of Kafka pods
-     * @param brokerConfigName key of specific property
-     * @param value value of specific property
-     * @param kafkaVersion Kafka version to get the config model
+     *
+     * @param namespaceName Name of the namespace.
+     * @param clusterName Name of the Kafka cluster.
+     * @param scraperPodName Name of Scraper Pod.
+     * @param configName Name of the configuration.
+     * @param value Value of specific property.
+     *
      * @return
      *     true = if specific property match the excepted property
      *     false = if specific property doesn't match the excepted property
      */
-    public synchronized static boolean verifyPodDynamicConfiguration(final String namespaceName, String scraperPodName, String bootstrapServer, String kafkaPodNamePrefix, String brokerConfigName, Object value, String kafkaVersion) {
-        List<Pod> brokerPods = KubeResourceManager.get().kubeClient().listPodsByPrefixInName(namespaceName, kafkaPodNamePrefix);
+    public synchronized static boolean verifyPodDynamicConfiguration(
+        final String namespaceName,
+        final String clusterName,
+        String scraperPodName,
+        String scope,
+        String configName,
+        String value
+    ) {
+        String bootstrapServer = KafkaResources.plainBootstrapAddress(clusterName);
+        String brokerPodSetName = KafkaComponents.getBrokerPodSetName(clusterName);
+
+        List<Pod> brokerPods = KubeResourceManager.get().kubeClient().listPodsByPrefixInName(namespaceName, brokerPodSetName);
         int[] brokerId = {0};
 
-        Map<String, ConfigModel> configModelMap = readConfigModel(kafkaVersion);
-
         // the check/describe for a dynamic change is different depending on the property being cluster-wide or per-broker
-        if (configModelMap.get(brokerConfigName).getScope().equals(Scope.CLUSTER_WIDE)) {
+        if (Scope.valueOf(scope).equals(Scope.CLUSTER_WIDE)) {
+            TestUtils.waitFor("cluster-wide dyn.configuration to change", TestConstants.GLOBAL_POLL_INTERVAL, TestConstants.RECONCILIATION_INTERVAL + Duration.ofSeconds(10).toMillis(), () -> {
+                String result = KafkaCmdClient.describeKafkaBrokerDefaultsUsingPodCli(namespaceName, scraperPodName, bootstrapServer);
 
-            TestUtils.waitFor("cluster-wide dyn.configuration to change", TestConstants.GLOBAL_POLL_INTERVAL, TestConstants.RECONCILIATION_INTERVAL + Duration.ofSeconds(10).toMillis(),
-                () -> {
-                    String result = KafkaCmdClient.describeKafkaBrokerDefaultsUsingPodCli(namespaceName, scraperPodName, bootstrapServer);
-
-                    LOGGER.debug("This cluster-wide dyn.configuration {}", result);
-
-                    if (!result.contains(brokerConfigName + "=" + value)) {
-                        LOGGER.error("Cluster-wide configuration doesn't contain {} with value {}", brokerConfigName, value);
-                        LOGGER.error("Kafka configuration {}", result);
-                        return false;
-                    }
-                    return true;
-                });
+                LOGGER.debug("This is cluster-wide dyn.configuration {}", result);
 
+                if (!result.contains(configName + "=" + value)) {
+                    LOGGER.error("Cluster-wide configuration doesn't contain {} with value {}", configName, value);
+                    LOGGER.error("Kafka configuration {}", result);
+                    return false;
+                }
+                return true;
+            });
         } else {
-
             for (Pod pod : brokerPods) {
+                TestUtils.waitFor("dyn.configuration to change", TestConstants.GLOBAL_POLL_INTERVAL, TestConstants.RECONCILIATION_INTERVAL + Duration.ofSeconds(10).toMillis(), () -> {
+                    String result = KafkaCmdClient.describeKafkaBrokerUsingPodCli(namespaceName, scraperPodName, bootstrapServer, brokerId[0]++);
 
-                TestUtils.waitFor("dyn.configuration to change", TestConstants.GLOBAL_POLL_INTERVAL, TestConstants.RECONCILIATION_INTERVAL + Duration.ofSeconds(10).toMillis(),
-                    () -> {
-                        String result = KafkaCmdClient.describeKafkaBrokerUsingPodCli(namespaceName, scraperPodName, bootstrapServer, brokerId[0]++);
-
-                        LOGGER.debug("This dyn.configuration {} inside the Kafka Pod: {}/{}", result, namespaceName, pod.getMetadata().getName());
+                    LOGGER.debug("This is dyn.configuration {} inside the Kafka Pod: {}/{}", result, namespaceName, pod.getMetadata().getName());
 
-                        if (!result.contains(brokerConfigName + "=" + value)) {
-                            LOGGER.error("Kafka Pod: {}/{} doesn't contain {} with value {}", namespaceName, pod.getMetadata().getName(), brokerConfigName, value);
-                            LOGGER.error("Kafka configuration {}", result);
-                            return false;
-                        }
-                        return true;
-                    });
+                    if (!result.contains(configName + "=" + value)) {
+                        LOGGER.error("Kafka Pod: {}/{} doesn't contain {} with value {}", namespaceName, pod.getMetadata().getName(), configName, value);
+                        LOGGER.error("Kafka configuration {}", result);
+                        return false;
+                    }
+                    return true;
+                });
             }
         }
         return true;
     }
 
-    /**
-     * Loads all kafka config parameters supported by the given {@code kafkaVersion}, as generated by #KafkaConfigModelGenerator in config-model-generator.
-     * @param kafkaVersion specific kafka version
-     * @return all supported kafka properties
-     */
-    public static Map<String, ConfigModel> readConfigModel(String kafkaVersion) {
-        String name = TestUtils.USER_PATH + "/../cluster-operator/src/main/resources/kafka-" + kafkaVersion + "-config-model.json";
-        try {
-            try (InputStream in = new FileInputStream(name)) {
-                ConfigModels configModels = new ObjectMapper().readValue(in, ConfigModels.class);
-                if (!kafkaVersion.equals(configModels.getVersion())) {
-                    throw new RuntimeException("Incorrect version");
-                }
-                return configModels.getConfigs();
-            }
-        } catch (IOException e) {
-            throw new RuntimeException("Error reading from classpath resource " + name, e);
-        }
-    }
-
-    /**
-     * Return dynamic Kafka configs supported by the given version of Kafka.
-     * @param kafkaVersion specific kafka version
-     * @return all dynamic properties for specific kafka version
-     */
-    @SuppressWarnings({"checkstyle:CyclomaticComplexity", "checkstyle:BooleanExpressionComplexity"})
-    public static Map<String, ConfigModel> getDynamicConfigurationProperties(String kafkaVersion) {
-
-        Map<String, ConfigModel> configs = KafkaUtils.readConfigModel(kafkaVersion);
-
-        LOGGER.info("Kafka config {}", configs.toString());
-
-        LOGGER.info("Number of all Kafka configs {}", configs.size());
-
-        Map<String, ConfigModel> dynamicConfigs = configs
-            .entrySet()
-            .stream()
-            .filter(a -> {
-                String[] prefixKey = a.getKey().split("\\.");
-
-                // filter all which is Scope = ClusterWide or PerBroker
-                boolean isClusterWideOrPerBroker = a.getValue().getScope() == Scope.CLUSTER_WIDE || a.getValue().getScope() == Scope.PER_BROKER;
-
-                if (prefixKey[0].equals("ssl") || prefixKey[0].equals("sasl") || prefixKey[0].equals("advertised") ||
-                    prefixKey[0].equals("listeners") || prefixKey[0].equals("listener")) {
-                    return isClusterWideOrPerBroker && !FORBIDDEN_PREFIXES.contains(prefixKey[0]);
-                }
-
-                return isClusterWideOrPerBroker;
-            })
-            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
-
-        LOGGER.info("Number of dynamic-configs {}", dynamicConfigs.size());
-
-        Map<String, ConfigModel> forbiddenExceptionsConfigs = configs
-            .entrySet()
-            .stream()
-            .filter(a -> FORBIDDEN_PREFIX_EXCEPTIONS.contains(a.getKey()))
-            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
-
-        LOGGER.info("Number of forbidden-exception-configs {}", forbiddenExceptionsConfigs.size());
-
-        Map<String, ConfigModel> dynamicConfigsWithExceptions = new HashMap<>();
-
-        dynamicConfigsWithExceptions.putAll(dynamicConfigs);
-        dynamicConfigsWithExceptions.putAll(forbiddenExceptionsConfigs);
-
-        LOGGER.info("Size of dynamic-configs with forbidden-exception-configs {}", dynamicConfigsWithExceptions.size());
-
-        dynamicConfigsWithExceptions.forEach((key, value) -> LOGGER.info("{} -> {}:{}", key, value.getScope(), value.getType()));
-
-        return dynamicConfigsWithExceptions;
-    }
-
     /**
      * Generated random name for the Kafka resource based on prefix
      * @param clusterName name prefix
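Both branches of verifyPodDynamicConfiguration lean on TestUtils.waitFor to poll the broker until the new value shows up. A simplified, self-contained sketch of that poll-until-ready pattern (hypothetical class, not the actual io.strimzi TestUtils implementation):

```java
// Simplified sketch (assumption: not the actual Strimzi TestUtils.waitFor):
// evaluate a condition every pollIntervalMs until it succeeds, or fail once
// timeoutMs has elapsed.
import java.util.function.BooleanSupplier;

public class PollingWait {

    public static void waitFor(String description, long pollIntervalMs, long timeoutMs, BooleanSupplier ready)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!ready.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                throw new RuntimeException("Timed out waiting for " + description);
            }
            Thread.sleep(pollIntervalMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Demo condition that becomes true after roughly 50 ms.
        waitFor("demo condition", 10, 1_000, () -> System.currentTimeMillis() - start > 50);
        System.out.println("condition met"); // prints "condition met"
    }
}
```

In the test above, the poll interval is GLOBAL_POLL_INTERVAL and the timeout is RECONCILIATION_INTERVAL plus ten seconds, so a slow dynamic-config propagation fails the test rather than hanging it.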
