
Update default testsuite kafka image to 0.45.0-kafka-3.9.0 #258


Merged: 1 commit, Feb 11, 2025
2 changes: 1 addition & 1 deletion examples/docker/kafka-oauth-strimzi/kafka/Dockerfile
@@ -1,4 +1,4 @@
FROM quay.io/strimzi/kafka:0.39.0-kafka-3.6.1
FROM quay.io/strimzi/kafka:0.45.0-kafka-3.9.0

COPY libs/* /opt/kafka/libs/strimzi/
COPY config/* /opt/kafka/config/
2 changes: 1 addition & 1 deletion examples/docker/kafka-oauth-strimzi/zookeeper/Dockerfile
@@ -1,4 +1,4 @@
FROM quay.io/strimzi/kafka:0.39.0-kafka-3.6.1
FROM quay.io/strimzi/kafka:0.45.0-kafka-3.9.0

COPY start.sh /opt/kafka/
COPY simple_zk_config.sh /opt/kafka/
2 changes: 1 addition & 1 deletion examples/docker/strimzi-kafka-image/Dockerfile
@@ -1,4 +1,4 @@
FROM quay.io/strimzi/kafka:0.44.0-kafka-3.8.0
FROM quay.io/strimzi/kafka:0.45.0-kafka-3.9.0

COPY target/libs/* /opt/kafka/libs/oauth/
ENV CLASSPATH /opt/kafka/libs/oauth/*
24 changes: 12 additions & 12 deletions examples/docker/strimzi-kafka-image/README.md
@@ -1,20 +1,20 @@
Strimzi Kafka Image with SNAPSHOT Strimzi Kafka OAuth
=====================================================

This is a build of a Docker image based on `quay.io/strimzi/kafka:0.44.0-kafka-3.8.0` with added most recently locally built SNAPSHOT version of Strimzi Kafka OAuth libraries.
This is a build of a Docker image based on `quay.io/strimzi/kafka:0.45.0-kafka-3.9.0` with the most recently locally built SNAPSHOT version of the Strimzi Kafka OAuth libraries added.

This image adds a `/opt/kafka/libs/oauth` directory, and copies the latest jars for OAuth support in it.
Then it puts this directory as the first directory on the classpath.

The result is that the specific version of Strimzi Kafka OAuth jars and their dependencies is used, because they appear on the classpath before the ones that are part of `quay.io/strimzi/kafka:0.44.0-kafka-3.8.0` which are located in the `/opt/kafka/libs` directory.
The result is that the specific version of Strimzi Kafka OAuth jars and their dependencies is used, because they appear on the classpath before the ones that are part of `quay.io/strimzi/kafka:0.45.0-kafka-3.9.0` which are located in the `/opt/kafka/libs` directory.


Building
--------

Run `mvn install`, then use `docker build` to build the image:

docker build --output type=docker -t strimzi/kafka:latest-kafka-3.8.0-oauth .
docker build --output type=docker -t strimzi/kafka:latest-kafka-3.9.0-oauth .

You can choose a different tag if you want.

@@ -34,15 +34,15 @@ Validating

You can start an interactive shell container and confirm that the jars are there.

docker run --rm -ti strimzi/kafka:latest-kafka-3.8.0-oauth /bin/sh
docker run --rm -ti strimzi/kafka:latest-kafka-3.9.0-oauth /bin/sh
ls -la libs/oauth/
echo "$CLASSPATH"

If you want to play around more within the container, you may need to make yourself `root`.

You can achieve that by running the docker session as the `root` user:

docker run --rm -ti --user root strimzi/kafka:latest-kafka-3.8.0-oauth /bin/sh
docker run --rm -ti --user root strimzi/kafka:latest-kafka-3.9.0-oauth /bin/sh



@@ -63,12 +63,12 @@ For example if you are using Kubernetes Kind as described in [HACKING.md](../../

You need to retag the built image, so you can push it to Docker Registry:

docker tag strimzi/kafka:latest-kafka-3.8.0-oauth $DOCKER_REG/strimzi/kafka:latest-kafka-3.8.0-oauth
docker push $DOCKER_REG/strimzi/kafka:latest-kafka-3.8.0-oauth
docker tag strimzi/kafka:latest-kafka-3.9.0-oauth $DOCKER_REG/strimzi/kafka:latest-kafka-3.9.0-oauth
docker push $DOCKER_REG/strimzi/kafka:latest-kafka-3.9.0-oauth

Kubernetes Kind also supports an even simpler way to make an image available to Kubernetes:

kind load docker-image strimzi/kafka:latest-kafka-3.8.0-oauth
kind load docker-image strimzi/kafka:latest-kafka-3.9.0-oauth

If you're using minikube, you'll need to run `minikube docker-env` before building the image.

@@ -79,9 +79,9 @@ Deploying

In order for the operator to use your Kafka image, you have to replace the Kafka image coordinates in `packaging/install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml` in your `strimzi-kafka-operator` project.

This image builds the kafka-3.8.0 replacement image, so we need to replace all occurrences where kafka-3.8.0 is referred to into the proper coordinates to our image:
This image builds the kafka-3.9.0 replacement image, so we need to replace all references to kafka-3.9.0 with the coordinates of our image:

sed -Ei "s#quay.io/strimzi/kafka:latest-kafka-3.8.0#${DOCKER_REG}/strimzi/kafka:latest-kafka-3.8.0-oauth#" \
sed -Ei "s#quay.io/strimzi/kafka:latest-kafka-3.9.0#${DOCKER_REG}/strimzi/kafka:latest-kafka-3.9.0-oauth#" \
packaging/install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml


@@ -94,11 +94,11 @@ You can now deploy Strimzi Kafka Operator following instructions in [HACKING.md]

## Via Helm

You can also run the operator via its Helm chart and set the `kafka.image.registry` property to your local registry. As an example, if you've built and tagged the image as `local.dev/strimzi/kafka:0.44.0-kafka-3.8.0`. You can run it using the operator as:
You can also run the operator via its Helm chart and set the `kafka.image.registry` property to your local registry. For example, if you've built and tagged the image as `local.dev/strimzi/kafka:0.45.0-kafka-3.9.0`, you can run the operator as:

helm repo add strimzi https://strimzi.io/charts/ --force-update
helm upgrade -i -n strimzi strimzi strimzi/strimzi-kafka-operator \
--version 0.44.0 \
--version 0.45.0 \
--set watchNamespaces="{default}" \
--set generateNetworkPolicy=false \
--set kafka.image.registry="local.dev" \
2 changes: 1 addition & 1 deletion examples/kubernetes/kafka-oauth-authz-metrics-client.yaml
@@ -40,7 +40,7 @@ metadata:
spec:
containers:
- name: kafka-client-shell
image: quay.io/strimzi/kafka:latest-kafka-3.6.1
image: quay.io/strimzi/kafka:latest-kafka-3.9.0
command:
- /bin/sh
env:
@@ -4,7 +4,6 @@
*/
package io.strimzi.testsuite.oauth.common;

import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
import io.strimzi.kafka.oauth.common.HttpUtil;

import java.io.BufferedReader;
@@ -82,33 +81,30 @@ void addMetric(String key, Map<String, String> attrs, String value) {
}

/**
* Returns a value of a single metric matching the key and the attributes.
* Get the sum of the values of all the metrics matching the key prefix and the attributes
* <p>
* Attributes are specified as: name1, value1, name2, value2, ...
* Not all attributes have to be specified, but those specified have to match (equality).
*
* @param key Metric name to retrieve
* Different Strimzi Kafka images seem to expose the internal metrics structures of type CumulativeSum and CumulativeCount differently.
* Newer versions seem to add a '_total' suffix to the metric name, whereas older versions don't.
*
* @param keyPrefix The key prefix for the key identifying the metric
* @param attrs The attributes filter passed as attrName1, attrValue1, attrName2, attrValue2 ...
* @return Metric value as String
* @return The sum of the values of all the matching metrics as String
*/
@SuppressFBWarnings("THROWS_METHOD_THROWS_RUNTIMEEXCEPTION")
public String getValue(String key, String... attrs) {
boolean match = false;
String result = null;
public BigDecimal getStartsWithValueSum(String keyPrefix, String... attrs) {

BigDecimal result = new BigDecimal(0);
next:
for (MetricEntry entry: entries) {
if (entry.key.equals(key)) {
if (entry.key.startsWith(keyPrefix)) {
for (int i = 0; i < attrs.length; i += 2) {
if (!attrs[i + 1].equals(entry.attrs.get(attrs[i]))) {
continue next;
}
}
if (!match) {
match = true;
result = entry.value;
} else {
throw new RuntimeException("More than one matching metric entry");
}
result = result.add(new BigDecimal(entry.value));
}
}
return result;
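The new prefix-matching sum can be illustrated as a small standalone class. This is a hedged sketch: `PrefixSumSketch`, the `MetricEntry` record, and the sample metric rows are hypothetical stand-ins for the testsuite types, not the actual `TestMetrics` code.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Simplified stand-in for the testsuite's TestMetrics / MetricEntry pair.
public class PrefixSumSketch {

    record MetricEntry(String key, Map<String, String> attrs, String value) {}

    static final List<MetricEntry> entries = new ArrayList<>();

    // Sum the values of every metric whose key starts with keyPrefix and whose
    // attributes contain all the name/value pairs given in attrs.
    static BigDecimal getStartsWithValueSum(String keyPrefix, String... attrs) {
        BigDecimal result = BigDecimal.ZERO;
        next:
        for (MetricEntry entry : entries) {
            if (entry.key().startsWith(keyPrefix)) {
                for (int i = 0; i < attrs.length; i += 2) {
                    if (!attrs[i + 1].equals(entry.attrs().get(attrs[i]))) {
                        continue next; // an attribute filter did not match
                    }
                }
                result = result.add(new BigDecimal(entry.value()));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Newer Kafka images expose the counter with a '_total' suffix, older
        // ones without it; the prefix match sums both rows transparently.
        entries.add(new MetricEntry("strimzi_oauth_http_requests_count",
                Map.of("kind", "jwks", "outcome", "success"), "2"));
        entries.add(new MetricEntry("strimzi_oauth_http_requests_count_total",
                Map.of("kind", "jwks", "outcome", "success"), "3"));
        entries.add(new MetricEntry("strimzi_oauth_http_requests_count_total",
                Map.of("kind", "introspect", "outcome", "success"), "7"));

        System.out.println(getStartsWithValueSum(
                "strimzi_oauth_http_requests_count", "kind", "jwks")); // prints 5
    }
}
```

Unlike the removed `getValue()`, which threw a `RuntimeException` when more than one entry matched, summing never fails when both the suffixed and unsuffixed variants of a counter appear, which is why the test assertions can stay version-agnostic.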
2 changes: 1 addition & 1 deletion testsuite/docker/kafka/Dockerfile
@@ -1,4 +1,4 @@
FROM quay.io/strimzi/kafka:0.39.0-kafka-3.6.1
FROM quay.io/strimzi/kafka:0.45.0-kafka-3.9.0

USER root
RUN rm -rf /opt/kafka/libs/bcpkix* /opt/kafka/libs/bcprov* /opt/kafka/libs/keycloak*
@@ -98,10 +98,10 @@ void oauthMetricsClientAuth() throws Exception {
TestMetrics metrics = getPrometheusMetrics(URI.create("http://kafka:9404/metrics"));

// Request for token from login callback handler
BigDecimal value = metrics.getValueSum("strimzi_oauth_authentication_requests_count", "context", "INTROSPECT", "kind", "client-auth", "outcome", "success");
BigDecimal value = metrics.getStartsWithValueSum("strimzi_oauth_authentication_requests_count", "context", "INTROSPECT", "kind", "client-auth", "outcome", "success");
Assert.assertEquals("strimzi_oauth_authentication_requests_count for client-auth == 2", 2, value.intValue());

value = metrics.getValueSum("strimzi_oauth_authentication_requests_totaltimems", "context", "INTROSPECT", "kind", "client-auth", "outcome", "success");
value = metrics.getStartsWithValueSum("strimzi_oauth_authentication_requests_totaltimems", "context", "INTROSPECT", "kind", "client-auth", "outcome", "success");
Assert.assertTrue("strimzi_oauth_authentication_requests_totaltimems for client-auth > 0.0", value.doubleValue() > 0.0);

value = metrics.getValueSum("strimzi_oauth_authentication_requests_avgtimems", "context", "INTROSPECT", "kind", "client-auth", "outcome", "success");
@@ -114,10 +114,10 @@ void oauthMetricsClientAuth() throws Exception {
Assert.assertTrue("strimzi_oauth_authentication_requests_maxtimems for client-auth > 0.0", value.doubleValue() > 0.0);

// Authentication to keycloak to exchange clientId + secret for an access token during the login callback handler call
value = metrics.getValueSum("strimzi_oauth_http_requests_count", "context", "INTROSPECT", "kind", "client-auth", "host", authHostPort, "path", tokenPath, "outcome", "success");
value = metrics.getStartsWithValueSum("strimzi_oauth_http_requests_count", "context", "INTROSPECT", "kind", "client-auth", "host", authHostPort, "path", tokenPath, "outcome", "success");
Assert.assertEquals("strimzi_oauth_http_requests_count for client-auth == 2", 2, value.intValue());

value = metrics.getValueSum("strimzi_oauth_http_requests_totaltimems", "context", "INTROSPECT", "kind", "client-auth", "host", authHostPort, "path", tokenPath, "outcome", "success");
value = metrics.getStartsWithValueSum("strimzi_oauth_http_requests_totaltimems", "context", "INTROSPECT", "kind", "client-auth", "host", authHostPort, "path", tokenPath, "outcome", "success");
Assert.assertTrue("strimzi_oauth_http_requests_totaltimems for client-auth > 0.0", value.doubleValue() > 0.0);

value = metrics.getValueSum("strimzi_oauth_http_requests_avgtimems", "context", "INTROSPECT", "kind", "client-auth", "host", authHostPort, "path", tokenPath, "outcome", "success");
@@ -177,17 +177,17 @@ void clientCredentialsWithJwtECDSAValidation() throws Exception {
// Check metrics

TestMetrics metrics = getPrometheusMetrics(URI.create("http://kafka:9404/metrics"));
BigDecimal value = metrics.getValueSum("strimzi_oauth_http_requests_count", "kind", "jwks", "host", authHostPort, "path", jwksPath, "outcome", "success");
BigDecimal value = metrics.getStartsWithValueSum("strimzi_oauth_http_requests_count", "kind", "jwks", "host", authHostPort, "path", jwksPath, "outcome", "success");
Assert.assertTrue("strimzi_oauth_http_requests_count for jwks > 0", value.doubleValue() > 0.0);

value = metrics.getValueSum("strimzi_oauth_http_requests_totaltimems", "kind", "jwks", "host", authHostPort, "path", jwksPath, "outcome", "success");
value = metrics.getStartsWithValueSum("strimzi_oauth_http_requests_totaltimems", "kind", "jwks", "host", authHostPort, "path", jwksPath, "outcome", "success");
Assert.assertTrue("strimzi_oauth_http_requests_totaltimems for jwks > 0.0", value.doubleValue() > 0.0);

value = metrics.getValueSum("strimzi_oauth_validation_requests_count", "context", "JWT", "kind", "jwks", "mechanism", "OAUTHBEARER", "outcome", "success");
value = metrics.getStartsWithValueSum("strimzi_oauth_validation_requests_count", "context", "JWT", "kind", "jwks", "mechanism", "OAUTHBEARER", "outcome", "success");
// There is no inter-broker connection on this listener, producer did 2 validations, and consumer also did 2 validations
Assert.assertTrue("strimzi_oauth_validation_requests_count for jwks >= 4", value != null && value.intValue() >= 4);

value = metrics.getValueSum("strimzi_oauth_validation_requests_totaltimems", "context", "JWT", "kind", "jwks", "mechanism", "OAUTHBEARER", "outcome", "success");
value = metrics.getStartsWithValueSum("strimzi_oauth_validation_requests_totaltimems", "context", "JWT", "kind", "jwks", "mechanism", "OAUTHBEARER", "outcome", "success");
Assert.assertTrue("strimzi_oauth_http_requests_totaltimems for jwks > 0.0", value.doubleValue() > 0.0);
}

@@ -246,12 +246,12 @@ void clientCredentialsWithJwtRSAValidation() throws Exception {
// Check metrics

TestMetrics metrics = getPrometheusMetrics(URI.create("http://kafka:9404/metrics"));
BigDecimal value = metrics.getValueSum("strimzi_oauth_validation_requests_count", "context", "JWTPLAIN", "kind", "jwks", "host", authHostPort, "path", jwksPath, "mechanism", "OAUTHBEARER", "outcome", "success");
BigDecimal value = metrics.getStartsWithValueSum("strimzi_oauth_validation_requests_count", "context", "JWTPLAIN", "kind", "jwks", "host", authHostPort, "path", jwksPath, "mechanism", "OAUTHBEARER", "outcome", "success");

// There is no inter-broker connection on this listener, producer did 2 validations, and consumer also did 2
Assert.assertTrue("strimzi_oauth_validation_requests_count for jwks >= 4", value != null && value.intValue() >= 4);

value = metrics.getValueSum("strimzi_oauth_validation_requests_totaltimems", "context", "JWTPLAIN", "kind", "jwks", "host", authHostPort, "path", jwksPath, "mechanism", "OAUTHBEARER", "outcome", "success");
value = metrics.getStartsWithValueSum("strimzi_oauth_validation_requests_totaltimems", "context", "JWTPLAIN", "kind", "jwks", "host", authHostPort, "path", jwksPath, "mechanism", "OAUTHBEARER", "outcome", "success");
Assert.assertTrue("strimzi_oauth_validation_requests_totaltimems for jwks > 0.0", value.doubleValue() > 0.0);
}

@@ -304,11 +304,11 @@ void accessTokenWithIntrospection() throws Exception {
// Check metrics
TestMetrics metrics = getPrometheusMetrics(URI.create("http://kafka:9404/metrics"));

BigDecimal value = metrics.getValueSum("strimzi_oauth_http_requests_count", "kind", "introspect", "host", authHostPort, "path", introspectPath, "outcome", "success");
BigDecimal value = metrics.getStartsWithValueSum("strimzi_oauth_http_requests_count", "kind", "introspect", "host", authHostPort, "path", introspectPath, "outcome", "success");
// Inter-broker connection did some validation, producer and consumer did some
Assert.assertTrue("strimzi_oauth_http_requests_count for introspect >= 5", value != null && value.intValue() >= 5);

value = metrics.getValueSum("strimzi_oauth_http_requests_totaltimems", "kind", "introspect", "host", authHostPort, "path", introspectPath, "outcome", "success");
value = metrics.getStartsWithValueSum("strimzi_oauth_http_requests_totaltimems", "kind", "introspect", "host", authHostPort, "path", introspectPath, "outcome", "success");
Assert.assertTrue("strimzi_oauth_http_requests_totaltimems for introspect > 0.0", value.doubleValue() > 0.0);
}

@@ -366,11 +366,11 @@ void refreshTokenWithIntrospection() throws Exception {

// Check metrics
TestMetrics metrics = getPrometheusMetrics(URI.create("http://kafka:9404/metrics"));
BigDecimal value = metrics.getValueSum("strimzi_oauth_http_requests_count", "kind", "introspect", "host", authHostPort, "path", introspectPath, "outcome", "success");
BigDecimal value = metrics.getStartsWithValueSum("strimzi_oauth_http_requests_count", "kind", "introspect", "host", authHostPort, "path", introspectPath, "outcome", "success");
// On top of the access token test, producer and consumer together did 4 requests
Assert.assertTrue("strimzi_oauth_http_requests_count for introspect >= 9", value != null && value.intValue() >= 9);

value = metrics.getValueSum("strimzi_oauth_http_requests_totaltimems", "kind", "introspect", "host", authHostPort, "path", introspectPath, "outcome", "success");
value = metrics.getStartsWithValueSum("strimzi_oauth_http_requests_totaltimems", "kind", "introspect", "host", authHostPort, "path", introspectPath, "outcome", "success");
Assert.assertTrue("strimzi_oauth_http_requests_totaltimems for introspect > 0.0", value.doubleValue() > 0.0);
}
