
After KRaft migration: "java.lang.IllegalArgumentException: Invalid connection string." for tiered storage #695

@P2W2

Description

What can we help you with?

Hello,
we migrated our Kafka cluster from a ZooKeeper deployment to KRaft.
We are using the tieredStorage option with Azure.
Since the migration we are getting a "java.lang.IllegalArgumentException: Invalid connection string." error.
If I put the connection string directly into the configuration instead of referencing a Secret, it works.
I think my environment variable reference is in the wrong format, but I can't figure out which is the right one for a configuration with node pools.
We are using Strimzi to deploy our cluster, so I also created an issue on Strimzi's side.
This Kafka configuration was working with ZooKeeper:

---
# Source: strimzi-kafka-operator/templates/kafka-cluster.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name:-kafka-cluster
  annotations:
    strimzi.io/kraft: "enabled"
    strimzi.io/node-pools: "enabled"
spec:
  kafka:
    version: 3.9.0
    listeners: 
      - name: plain
        port: 9092
        tls: false
        type: internal
      - authentication:
          type: scram-sha-512
        name: tls
        port: 9093
        tls: true
        type: internal
    authorization:
      type: simple
    tieredStorage:
      type: custom
      remoteStorageManager:
        className: io.aiven.kafka.tieredstorage.RemoteStorageManager
        classPath: /opt/kafka/plugins/tiered-storage/*
        config: 
          chunk.size: "5242880"
          storage.azure.connection.string: ${env:AZURE_CONNECTION_STRING}
          storage.azure.container.name:-devddc-kafka-strorage
          storage.backend.class: io.aiven.kafka.tieredstorage.storage.azure.AzureBlobStorage
    config: 
      auto.create.topic.enable: "false"
      config.providers: env
      config.providers.env.class: io.strimzi.kafka.EnvVarConfigProvider
      controller.quorum.election.timeout.ms: 3000
      controller.quorum.request.timeout.ms: 3000
      controller.quorum.retry.backoff.ms: 200
      default.replication.factor: 3
      fetch.message.max.bytes: 5000000
      log.retention.ms: 43200000
      log.roll.jitter.ms: 60000
      log.roll.ms: 300000
      log.segment.bytes: 134217728
      max.request.size: 5000000
      message.max.bytes: 5000000
      min.insync.replicas: 2
      offsets.topic.replication.factor: 3
      producer.max.request.size: 5000000
      remote.log.storage.manager.class.name: io.aiven.kafka.tieredstorage.RemoteStorageManager
      remote.log.storage.manager.class.path: /opt/kafka/plugins/tiered-storage/*
      remote.log.storage.system.enable: "true"
      remote.storage.enable: "true"
      replica.fetch.max.bytes: 5000000
      rlmm.config.remote.log.metadata.topic.replication.factor: 1
      transaction.state.log.min.isr: 2
      transaction.state.log.replication.factor: 3
    template:
      pod:
        imagePullSecrets: 
          - name:-acr-images-creds-secret
      kafkaContainer:
        env: 
          - name: AZURE_CONNECTION_STRING
            valueFrom:
              secretKeyRef:
                key: DEV_KAFKA_STORAGE_CONNECTION_STRING
                name: kafka-strimzi-kafka-operator-vault-secrets
    image: crmegahubwesteurope.azurecr.io/-devddc/images/-kafka:3.9.0
    metricsConfig: ...
  entityOperator: ...
    userOperator: ...
---
# Source: strimzi-kafka-operator/templates/kafka-node-pool.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: controller
  labels:
    strimzi.io/cluster: kafka-cluster
spec:
  replicas: 3
  resources:
    requests:
      memory: "3Gi"
      cpu: "700m"
    limits:
      memory: "4Gi"
      cpu: "1000m"
  roles:
    - controller
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: "20Gi"
        kraftMetadata: shared
        deleteClaim: false
  template:
    pod:
      metadata:
        annotations:
          prometheus.io/path: /metrics
          prometheus.io/port: "9404"
          prometheus.io/scrape: "true"
    kafkaContainer:
      env:
        - name: AZURE_CONNECTION_STRING
          valueFrom:
            secretKeyRef:
              name: "kafka-strimzi-kafka-operator-vault-secrets"
              key: "DEV_KAFKA_STORAGE_CONNECTION_STRING"
---
# Source: strimzi-kafka-operator/templates/kafka-node-pool.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: broker
  labels:
    strimzi.io/cluster: kafka-cluster
spec:
  replicas: 4
  resources:
    requests:
      memory: "5Gi"
      cpu: "700m"
    limits:
      memory: "6Gi"
      cpu: "1000m"
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: "50Gi"
        kraftMetadata: shared
        deleteClaim: false
  template:
    pod:
      metadata:
        annotations:
          prometheus.io/path: /metrics
          prometheus.io/port: "9404"
          prometheus.io/scrape: "true"
    kafkaContainer:
      env:
        - name: AZURE_CONNECTION_STRING
          valueFrom:
            secretKeyRef:
              name: "kafka-strimzi-kafka-operator-vault-secrets"
              key: "DEV_KAFKA_STORAGE_CONNECTION_STRING"

Info: these are not the full cluster names; I removed internal name prefixes that I don't want to post here, which is why some values start with "-".
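For reference, the placeholder syntax depends on which config provider name the value is routed through. Below is a minimal sketch (untested) of the two variants I am aware of: the user-defined `env` provider from my config above, and the built-in `strimzienv` provider that the operator registers automatically, as visible in the generated broker config in the log below:

```yaml
# Sketch only - assumes AZURE_CONNECTION_STRING is injected into the Kafka
# container via secretKeyRef, as in the manifests above.
tieredStorage:
  type: custom
  remoteStorageManager:
    className: io.aiven.kafka.tieredstorage.RemoteStorageManager
    classPath: /opt/kafka/plugins/tiered-storage/*
    config:
      # Variant 1: user-defined "env" provider (requires config.providers=env
      # and config.providers.env.class in spec.kafka.config):
      storage.azure.connection.string: ${env:AZURE_CONNECTION_STRING}
      # Variant 2: Strimzi's built-in "strimzienv" provider, which the operator
      # configures itself with allowlist pattern ".*":
      # storage.azure.connection.string: ${strimzienv:AZURE_CONNECTION_STRING}
```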

Full Error Log:

removed directory '/tmp/hsperfdata_kafka'
removed '/tmp/kafka/strimzi.kafka.metadata.config.state'
removed '/tmp/kafka/clients.truststore.p12'
removed '/tmp/kafka/cluster.keystore.p12'
removed '/tmp/kafka/cluster.truststore.p12'
removed directory '/tmp/kafka'
removed '/tmp/kafka-agent.properties'
removed '/tmp/strimzi.properties'
STRIMZI_BROKER_ID=0
Preparing truststore for replication listener
Adding /opt/kafka/cluster-ca-certs/ca.crt to truststore /tmp/kafka/cluster.truststore.p12 with alias ca
Certificate was added to keystore
Preparing truststore for replication listener is complete
Looking for the CA matching the server certificate
CA matching the server certificate found: /opt/kafka/cluster-ca-certs/ca.crt
Preparing keystore for replication and clienttls listener
Preparing keystore for replication and clienttls listener is complete
Preparing truststore for client authentication
Adding /opt/kafka/client-ca-certs/ca.crt to truststore /tmp/kafka/clients.truststore.p12 with alias ca
Certificate was added to keystore
Preparing truststore for client authentication is complete
Starting Kafka with configuration:
##############################
##############################
# This file is automatically generated by the Strimzi Cluster Operator
# Any changes to this file will be ignored and overwritten!
##############################
##############################

##########
# Node / Broker ID
##########
node.id=0

##########
# Kafka message logs configuration
##########
log.dirs=/var/lib/kafka/data-0/kafka-log0

##########
# Control Plane listener
##########
...

##########
# Replication listener
##########
...

##########
# Listener configuration: PLAIN-9092
##########

##########
# Listener configuration: TLS-9093
##########
...


##########
# Common listener configuration
##########
...

##########
# Authorization
##########
...

##########
# Kafka tiered storage configuration
##########
# RLMM configuration generated by Strimzi
remote.log.storage.system.enable=true
remote.log.metadata.manager.impl.prefix=rlmm.config.
remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.listener.name=REPLICATION-9091
rlmm.config.remote.log.metadata.common.client.bootstrap.servers=-kafka-cluster-kafka-brokers:9091
rlmm.config.remote.log.metadata.common.client.security.protocol=SSL
rlmm.config.remote.log.metadata.common.client.ssl.keystore.location=/tmp/kafka/cluster.keystore.p12
rlmm.config.remote.log.metadata.common.client.ssl.keystore.password=[hidden]
rlmm.config.remote.log.metadata.common.client.ssl.keystore.type=PKCS12
rlmm.config.remote.log.metadata.common.client.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
rlmm.config.remote.log.metadata.common.client.ssl.truststore.password=[hidden]
rlmm.config.remote.log.metadata.common.client.ssl.truststore.type=PKCS12
# RSM configs set by the operator and by the user
remote.log.storage.manager.class.name=io.aiven.kafka.tieredstorage.RemoteStorageManager
remote.log.storage.manager.class.path=/opt/kafka/plugins/tiered-storage/*
remote.log.storage.manager.impl.prefix=rsm.config.
rsm.config.chunk.size=5242880
rsm.config.storage.azure.connection.string=${env:AZURE_CONNECTION_STRING}
rsm.config.storage.azure.container.name=-devddc-kafka-strorage
rsm.config.storage.backend.class=io.aiven.kafka.tieredstorage.storage.azure.AzureBlobStorage

##########
# Config providers
##########
# Configuration providers configured by the user and by Strimzi
config.providers=env,strimzienv,strimzifile,strimzidir
config.providers.strimzienv.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider
config.providers.strimzienv.param.allowlist.pattern=.*
config.providers.strimzifile.class=org.apache.kafka.common.config.provider.FileConfigProvider
config.providers.strimzifile.param.allowed.paths=/opt/kafka
config.providers.strimzidir.class=org.apache.kafka.common.config.provider.DirectoryConfigProvider
config.providers.strimzidir.param.allowed.paths=/opt/kafka

##########
# User provided configuration
##########
auto.create.topic.enable=false
config.providers.env.class=io.strimzi.kafka.EnvVarConfigProvider
controller.quorum.election.timeout.ms=3000
default.replication.factor=3
fetch.message.max.bytes=5000000
log.retention.ms=43200000
log.roll.jitter.ms=60000
log.roll.ms=300000
log.segment.bytes=134217728
max.request.size=5000000
message.max.bytes=5000000
min.insync.replicas=2
offsets.topic.replication.factor=3
producer.max.request.size=5000000
remote.log.storage.manager.class.name=io.aiven.kafka.tieredstorage.RemoteStorageManager
remote.log.storage.manager.class.path=/opt/kafka/plugins/tiered-storage/*
remote.log.storage.system.enable=true
remote.storage.enable=true
replica.fetch.max.bytes=5000000
rlmm.config.remote.log.metadata.topic.replication.factor=1
transaction.state.log.min.isr=2
transaction.state.log.replication.factor=3


##########
# KRaft configuration
##########
process.roles=broker
controller.listener.names=CONTROLPLANE-9090
controller.quorum.voters=4@-kafka-cluster-controller-4.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local:9090,5@-kafka-cluster-controller-5.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local:9090,6@-kafka-cluster-controller-6.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local:9090

##########
# KRaft metadata log dir configuration
##########
metadata.log.dir=/var/lib/kafka/data-0/kafka-log0
Configuring Java heap: -Xms3113851289 -Xmx3113851289
Kafka metadata config state [4]
Using KRaft [true]
Making sure the Kraft storage is formatted with cluster ID 7wDfh2TwQPayDWMrH1i0UQ and metadata version 3.9
2025-06-17 07:43:27,792 INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [main]
2025-06-17 07:43:28,066 INFO Configuring EnvVar config provider: {} (io.strimzi.kafka.EnvVarConfigProvider) [main]
2025-06-17 07:43:28,072 INFO Closing EnvVar config provider (io.strimzi.kafka.EnvVarConfigProvider) [main]
2025-06-17 07:43:28,079 INFO KafkaConfig values:
	advertised.listeners = REPLICATION-9091://-kafka-cluster-broker-0.-kafka-cluster-kafka-brokers.-kafka.svc:9091,PLAIN-9092://-kafka-cluster-broker-0.-kafka-cluster-kafka-brokers.-kafka.svc:9092,TLS-9093://-kafka-cluster-broker-0.-kafka-cluster-kafka-brokers.-kafka.svc:9093
	alter.config.policy.class.name = null
	alter.log.dirs.replication.quota.window.num = 11
	alter.log.dirs.replication.quota.window.size.seconds = 1
	authorizer.class.name = org.apache.kafka.metadata.authorizer.StandardAuthorizer
	auto.create.topics.enable = true
	auto.include.jmx.reporter = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.heartbeat.interval.ms = 2000
	broker.id = 0
	broker.id.generation.enable = true
	broker.rack = null
	broker.session.timeout.ms = 9000
	client.quota.callback.class = null
	compression.gzip.level = -1
	compression.lz4.level = 9
	compression.type = producer
	compression.zstd.level = 3
	connection.failed.authentication.delay.ms = 100
	connections.max.idle.ms = 600000
	connections.max.reauth.ms = 0
	control.plane.listener.name = null
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.listener.names = CONTROLPLANE-9090
	controller.quorum.append.linger.ms = 25
	controller.quorum.bootstrap.servers = []
	controller.quorum.election.backoff.max.ms = 1000
	controller.quorum.election.timeout.ms = 3000
	controller.quorum.fetch.timeout.ms = 2000
	controller.quorum.request.timeout.ms = 2000
	controller.quorum.retry.backoff.ms = 20
	controller.quorum.voters = [4@-kafka-cluster-controller-4.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local:9090, 5@-kafka-cluster-controller-5.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local:9090, 6@-kafka-cluster-controller-6.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local:9090]
	controller.quota.window.num = 11
	controller.quota.window.size.seconds = 1
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 3
	delegation.token.expiry.check.interval.ms = 3600000
	delegation.token.expiry.time.ms = 86400000
	delegation.token.master.key = null
	delegation.token.max.lifetime.ms = 604800000
	delegation.token.secret.key = null
	delete.records.purgatory.purge.interval.requests = 1
	delete.topic.enable = true
	early.start.listeners = null
	eligible.leader.replicas.enable = false
	fetch.max.bytes = 57671680
	fetch.purgatory.purge.interval.requests = 1000
	group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor]
	group.consumer.heartbeat.interval.ms = 5000
	group.consumer.max.heartbeat.interval.ms = 15000
	group.consumer.max.session.timeout.ms = 60000
	group.consumer.max.size = 2147483647
	group.consumer.migration.policy = disabled
	group.consumer.min.heartbeat.interval.ms = 5000
	group.consumer.min.session.timeout.ms = 45000
	group.consumer.session.timeout.ms = 45000
	group.coordinator.append.linger.ms = 10
	group.coordinator.new.enable = false
	group.coordinator.rebalance.protocols = [classic]
	group.coordinator.threads = 1
	group.initial.rebalance.delay.ms = 3000
	group.max.session.timeout.ms = 1800000
	group.max.size = 2147483647
	group.min.session.timeout.ms = 6000
	group.share.delivery.count.limit = 5
	group.share.enable = false
	group.share.heartbeat.interval.ms = 5000
	group.share.max.groups = 10
	group.share.max.heartbeat.interval.ms = 15000
	group.share.max.record.lock.duration.ms = 60000
	group.share.max.session.timeout.ms = 60000
	group.share.max.size = 200
	group.share.min.heartbeat.interval.ms = 5000
	group.share.min.record.lock.duration.ms = 15000
	group.share.min.session.timeout.ms = 45000
	group.share.partition.max.record.locks = 200
	group.share.record.lock.duration.ms = 30000
	group.share.session.timeout.ms = 45000
	initial.broker.registration.timeout.ms = 60000
	inter.broker.listener.name = REPLICATION-9091
	inter.broker.protocol.version = 3.9-IV0
	kafka.metrics.polling.interval.secs = 10
	kafka.metrics.reporters = []
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = CONTROLPLANE-9090:SSL,REPLICATION-9091:SSL,PLAIN-9092:PLAINTEXT,TLS-9093:SASL_SSL
	listeners = REPLICATION-9091://0.0.0.0:9091,PLAIN-9092://0.0.0.0:9092,TLS-9093://0.0.0.0:9093
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.max.compaction.lag.ms = 9223372036854775807
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dir.failure.timeout.ms = 30000
	log.dirs = /var/lib/kafka/data-0/kafka-log0
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.flush.start.offset.checkpoint.interval.ms = 60000
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.initial.task.delay.ms = 30000
	log.local.retention.bytes = -2
	log.local.retention.ms = -2
	log.message.downconversion.enable = true
	log.message.format.version = 3.0-IV1
	log.message.timestamp.after.max.ms = 9223372036854775807
	log.message.timestamp.before.max.ms = 9223372036854775807
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = 43200000
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = 60000
	log.roll.ms = 300000
	log.segment.bytes = 134217728
	log.segment.delete.delay.ms = 60000
	max.connection.creation.rate = 2147483647
	max.connections = 2147483647
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides =
	max.incremental.fetch.session.cache.slots = 1000
	max.request.partition.size.limit = 2000
	message.max.bytes = 5000000
	metadata.log.dir = /var/lib/kafka/data-0/kafka-log0
	metadata.log.max.record.bytes.between.snapshots = 20971520
	metadata.log.max.snapshot.interval.ms = 3600000
	metadata.log.segment.bytes = 1073741824
	metadata.log.segment.min.bytes = 8388608
	metadata.log.segment.ms = 604800000
	metadata.max.idle.interval.ms = 500
	metadata.max.retention.bytes = 104857600
	metadata.max.retention.ms = 604800000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 2
	node.id = 0
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.alter.log.dirs.threads = null
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 10080
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 3
	offsets.topic.segment.bytes = 104857600
	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
	password.encoder.iterations = 4096
	password.encoder.key.length = 128
	password.encoder.keyfactory.algorithm = null
	password.encoder.old.secret = null
	password.encoder.secret = null
	principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
	process.roles = [broker]
	producer.id.expiration.check.interval.ms = 600000
	producer.id.expiration.ms = 86400000
	producer.purgatory.purge.interval.requests = 1000
	queued.max.request.bytes = -1
	queued.max.requests = 500
	quota.window.num = 11
	quota.window.size.seconds = 1
	remote.fetch.max.wait.ms = 500
	remote.log.index.file.cache.total.size.bytes = 1073741824
	remote.log.manager.copier.thread.pool.size = -1
	remote.log.manager.copy.max.bytes.per.second = 9223372036854775807
	remote.log.manager.copy.quota.window.num = 11
	remote.log.manager.copy.quota.window.size.seconds = 1
	remote.log.manager.expiration.thread.pool.size = -1
	remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807
	remote.log.manager.fetch.quota.window.num = 11
	remote.log.manager.fetch.quota.window.size.seconds = 1
	remote.log.manager.task.interval.ms = 30000
	remote.log.manager.task.retry.backoff.max.ms = 30000
	remote.log.manager.task.retry.backoff.ms = 500
	remote.log.manager.task.retry.jitter = 0.2
	remote.log.manager.thread.pool.size = 10
	remote.log.metadata.custom.metadata.max.bytes = 128
	remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
	remote.log.metadata.manager.class.path = null
	remote.log.metadata.manager.impl.prefix = rlmm.config.
	remote.log.metadata.manager.listener.name = REPLICATION-9091
	remote.log.reader.max.pending.tasks = 100
	remote.log.reader.threads = 10
	remote.log.storage.manager.class.name = io.aiven.kafka.tieredstorage.RemoteStorageManager
	remote.log.storage.manager.class.path = /opt/kafka/plugins/tiered-storage/*
	remote.log.storage.manager.impl.prefix = rsm.config.
	remote.log.storage.system.enable = true
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 5000000
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 30000
	replica.selector.class = null
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.client.callback.handler.class = null
	sasl.enabled.mechanisms = []
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.connect.timeout.ms = null
	sasl.login.read.timeout.ms = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.login.retry.backoff.max.ms = 10000
	sasl.login.retry.backoff.ms = 100
	sasl.mechanism.controller.protocol = GSSAPI
	sasl.mechanism.inter.broker.protocol = GSSAPI
	sasl.oauthbearer.clock.skew.seconds = 30
	sasl.oauthbearer.expected.audience = null
	sasl.oauthbearer.expected.issuer = null
	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
	sasl.oauthbearer.jwks.endpoint.url = null
	sasl.oauthbearer.scope.claim.name = scope
	sasl.oauthbearer.sub.claim.name = sub
	sasl.oauthbearer.token.endpoint.url = null
	sasl.server.callback.handler.class = null
	sasl.server.max.receive.size = 524288
	security.inter.broker.protocol = PLAINTEXT
	security.providers = null
	server.max.startup.time.ms = 9223372036854775807
	socket.connection.setup.timeout.max.ms = 30000
	socket.connection.setup.timeout.ms = 10000
	socket.listen.backlog.size = 50
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.allow.dn.changes = false
	ssl.allow.san.changes = false
	ssl.cipher.suites = []
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
	ssl.endpoint.identification.algorithm = HTTPS
	ssl.engine.factory.class = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.principal.mapping.rules = DEFAULT
	ssl.protocol = TLSv1.3
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	telemetry.max.bytes = 1048576
	transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
	transaction.max.timeout.ms = 900000
	transaction.partition.verification.enable = true
	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
	transaction.state.log.load.buffer.size = 5242880
	transaction.state.log.min.isr = 2
	transaction.state.log.num.partitions = 50
	transaction.state.log.replication.factor = 3
	transaction.state.log.segment.bytes = 104857600
	transactional.id.expiration.ms = 604800000
	unclean.leader.election.enable = false
	unclean.leader.election.interval.ms = 300000
	unstable.api.versions.enable = false
	unstable.feature.versions.enable = false
	zookeeper.clientCnxnSocket = null
	zookeeper.connect = null
	zookeeper.connection.timeout.ms = null
	zookeeper.max.in.flight.requests = 10
	zookeeper.metadata.migration.enable = false
	zookeeper.metadata.migration.min.batch.size = 200
	zookeeper.session.timeout.ms = 18000
	zookeeper.set.acl = false
	zookeeper.ssl.cipher.suites = null
	zookeeper.ssl.client.enable = false
	zookeeper.ssl.crl.enable = false
	zookeeper.ssl.enabled.protocols = null
	zookeeper.ssl.endpoint.identification.algorithm = HTTPS
	zookeeper.ssl.keystore.location = null
	zookeeper.ssl.keystore.password = null
	zookeeper.ssl.keystore.type = null
	zookeeper.ssl.ocsp.enable = false
	zookeeper.ssl.protocol = TLSv1.2
	zookeeper.ssl.truststore.location = null
	zookeeper.ssl.truststore.password = null
	zookeeper.ssl.truststore.type = null
 (kafka.server.KafkaConfig) [main]
2025-06-17 07:43:28,116 INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) [main]
All of the log directories are already formatted.
KRaft storage formatting is done
Removing quorum-state file

Preparing Kafka Agent configuration

+ exec /usr/bin/tini -w -e 143 -- /opt/kafka/bin/kafka-server-start.sh /tmp/strimzi.properties
2025-06-17 07:43:29,773 INFO Starting KafkaAgent with brokerReadyFile=null, sessionConnectedFile=null, sslKeyStorePath=/tmp/kafka/cluster.keystore.p12, sslTrustStore=/tmp/kafka/cluster.truststore.p12 (io.strimzi.kafka.agent.KafkaAgent) [main]
2025-06-17 07:43:29,780 INFO Logging initialized @845ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) [main]
2025-06-17 07:43:29,886 INFO jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.14+7-LTS (org.eclipse.jetty.server.Server) [main]
2025-06-17 07:43:29,904 INFO Started o.e.j.s.h.ContextHandler@45385f75{/v1/broker-state,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) [main]
2025-06-17 07:43:29,904 INFO Started o.e.j.s.h.ContextHandler@1c9b0314{/v1/ready,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) [main]
2025-06-17 07:43:29,904 INFO Started o.e.j.s.h.ContextHandler@49c90a9c{/v1/kraft-migration,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) [main]
2025-06-17 07:43:30,201 INFO x509=X509@1838ccb8(-kafka-cluster-broker-0,h=[-kafka-cluster-kafka-bootstrap.-kafka.svc.cluster.local, -kafka-cluster-kafka-brokers, -kafka-cluster-kafka-brokers.-kafka, -kafka-cluster-kafka-bootstrap.-kafka.svc, -kafka-cluster-broker-0.-kafka-cluster-kafka-brokers.-kafka.svc, -kafka-cluster-kafka-brokers.-kafka.svc, -kafka-cluster-kafka-brokers.-kafka.svc.cluster.local, -kafka-cluster-kafka-bootstrap.-kafka, -kafka-cluster-broker-0.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local, -kafka-cluster-kafka-bootstrap, -kafka-cluster-kafka],a=[],w=[]) for Server@6c2ed0cd[provider=null,keyStore=file:///tmp/kafka/cluster.keystore.p12,trustStore=file:///tmp/kafka/cluster.truststore.p12] (org.eclipse.jetty.util.ssl.SslContextFactory) [main]
2025-06-17 07:43:30,380 INFO Started ServerConnector@26abb146{SSL, (ssl, http/1.1)}{0.0.0.0:8443} (org.eclipse.jetty.server.AbstractConnector) [main]
2025-06-17 07:43:30,382 INFO Started ServerConnector@14dd7b39{HTTP/1.1, (http/1.1)}{localhost:8080} (org.eclipse.jetty.server.AbstractConnector) [main]
2025-06-17 07:43:30,382 INFO Started @1446ms (org.eclipse.jetty.server.Server) [main]
2025-06-17 07:43:30,382 INFO Starting metrics registry (io.strimzi.kafka.agent.KafkaAgent) [main]
2025-06-17 07:43:30,387 INFO Found class org.apache.kafka.server.metrics.KafkaYammerMetrics for Kafka 3.3 and newer. (io.strimzi.kafka.agent.KafkaAgent) [main]
2025-06-17 07:43:30,487 INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [main]
2025-06-17 07:43:30,789 INFO Configuring EnvVar config provider: {} (io.strimzi.kafka.EnvVarConfigProvider) [main]
2025-06-17 07:43:30,791 INFO Closing EnvVar config provider (io.strimzi.kafka.EnvVarConfigProvider) [main]
2025-06-17 07:43:30,889 INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) [main]
2025-06-17 07:43:31,112 INFO Configuring EnvVar config provider: {} (io.strimzi.kafka.EnvVarConfigProvider) [main]
2025-06-17 07:43:31,113 INFO Closing EnvVar config provider (io.strimzi.kafka.EnvVarConfigProvider) [main]
2025-06-17 07:43:31,176 INFO Configuring EnvVar config provider: {} (io.strimzi.kafka.EnvVarConfigProvider) [main]
2025-06-17 07:43:31,177 INFO Closing EnvVar config provider (io.strimzi.kafka.EnvVarConfigProvider) [main]
2025-06-17 07:43:31,276 INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) [main]
2025-06-17 07:43:31,280 INFO [BrokerServer id=0] Transition from SHUTDOWN to STARTING (kafka.server.BrokerServer) [main]
2025-06-17 07:43:31,281 INFO [SharedServer id=0] Starting SharedServer (kafka.server.SharedServer) [main]
2025-06-17 07:43:31,281 INFO Configuring EnvVar config provider: {} (io.strimzi.kafka.EnvVarConfigProvider) [main]
2025-06-17 07:43:31,282 INFO Closing EnvVar config provider (io.strimzi.kafka.EnvVarConfigProvider) [main]
2025-06-17 07:43:31,392 INFO [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Recovering unflushed segment 7856. 0/1 recovered for __cluster_metadata-0. (kafka.log.LogLoader) [main]
2025-06-17 07:43:31,393 INFO [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Loading producer state till offset 7856 with message format version 2 (kafka.log.UnifiedLog$) [main]
2025-06-17 07:43:31,394 INFO [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Reloading from producer snapshot and rebuilding producer state from offset 7856 (kafka.log.UnifiedLog$) [main]
2025-06-17 07:43:31,394 INFO Deleted producer state snapshot /var/lib/kafka/data-0/kafka-log0/__cluster_metadata-0/00000000000000013660.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [main]
2025-06-17 07:43:31,394 INFO Deleted producer state snapshot /var/lib/kafka/data-0/kafka-log0/__cluster_metadata-0/00000000000000014303.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [main]
2025-06-17 07:43:31,412 INFO [ProducerStateManager partition=__cluster_metadata-0] Wrote producer snapshot at offset 7856 with 0 producer ids in 16 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [main]
2025-06-17 07:43:31,412 INFO [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Producer state recovery took 0ms for snapshot load and 18ms for segment recovery from offset 7856 (kafka.log.UnifiedLog$) [main]
2025-06-17 07:43:31,584 INFO [ProducerStateManager partition=__cluster_metadata-0] Wrote producer snapshot at offset 14303 with 0 producer ids in 15 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) [main]
2025-06-17 07:43:31,590 INFO [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Loading producer state till offset 14303 with message format version 2 (kafka.log.UnifiedLog$) [main]
2025-06-17 07:43:31,590 INFO [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Reloading from producer snapshot and rebuilding producer state from offset 14303 (kafka.log.UnifiedLog$) [main]
2025-06-17 07:43:31,590 INFO Deleted producer state snapshot /var/lib/kafka/data-0/kafka-log0/__cluster_metadata-0/00000000000000007856.snapshot (org.apache.kafka.storage.internals.log.SnapshotFile) [main]
2025-06-17 07:43:31,590 INFO [ProducerStateManager partition=__cluster_metadata-0] Loading producer state from snapshot file 'SnapshotFile(offset=14303, file=/var/lib/kafka/data-0/kafka-log0/__cluster_metadata-0/00000000000000014303.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) [main]
2025-06-17 07:43:31,591 INFO [LogLoader partition=__cluster_metadata-0, dir=/var/lib/kafka/data-0/kafka-log0] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 14303 (kafka.log.UnifiedLog$) [main]
2025-06-17 07:43:31,621 INFO Initialized snapshots with IDs SortedSet(OffsetAndEpoch(offset=7856, epoch=25)) from /var/lib/kafka/data-0/kafka-log0/__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$) [main]
2025-06-17 07:43:31,696 INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper) [raft-expiration-reaper]
2025-06-17 07:43:31,706 INFO [RaftManager id=0] Reading KRaft snapshot and log as part of the initialization (org.apache.kafka.raft.KafkaRaftClient) [main]
2025-06-17 07:43:31,707 INFO [RaftManager id=0] Loading snapshot (OffsetAndEpoch(offset=7856, epoch=25)) since log start offset (7856) is greater than the internal listener's next offset (-1) (org.apache.kafka.raft.internals.KRaftControlRecordStateMachine) [main]
2025-06-17 07:43:31,876 INFO [RaftManager id=0] Starting voters are VoterSet(voters={4=VoterNode(voterKey=ReplicaKey(id=4, directoryId=Optional.empty), listeners=Endpoints(endpoints={ListenerName(CONTROLPLANE-9090)=-kafka-cluster-controller-4.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local/10.67.192.8:9090}), supportedKRaftVersion=SupportedVersionRange[min_version:0, max_version:0]), 5=VoterNode(voterKey=ReplicaKey(id=5, directoryId=Optional.empty), listeners=Endpoints(endpoints={ListenerName(CONTROLPLANE-9090)=-kafka-cluster-controller-5.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local/10.67.194.14:9090}), supportedKRaftVersion=SupportedVersionRange[min_version:0, max_version:0]), 6=VoterNode(voterKey=ReplicaKey(id=6, directoryId=Optional.empty), listeners=Endpoints(endpoints={ListenerName(CONTROLPLANE-9090)=-kafka-cluster-controller-6.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local/10.67.193.197:9090}), supportedKRaftVersion=SupportedVersionRange[min_version:0, max_version:0])}) (org.apache.kafka.raft.KafkaRaftClient) [main]
2025-06-17 07:43:31,877 INFO [RaftManager id=0] Starting request manager with static voters: [-kafka-cluster-controller-4.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local:9090 (id: 4 rack: null), -kafka-cluster-controller-5.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local:9090 (id: 5 rack: null), -kafka-cluster-controller-6.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local:9090 (id: 6 rack: null)] (org.apache.kafka.raft.KafkaRaftClient) [main]
2025-06-17 07:43:31,881 WARN [RaftManager id=0] Epoch from quorum store file (/var/lib/kafka/data-0/kafka-log0/__cluster_metadata-0/quorum-state) is 0, which is smaller than last written epoch 25 in the log (org.apache.kafka.raft.QuorumState) [main]
2025-06-17 07:43:31,882 INFO [RaftManager id=0] Attempting durable transition to Unattached(epoch=25, votedKey=null, voters=[4, 5, 6], electionTimeoutMs=3439, highWatermark=Optional.empty) from null (org.apache.kafka.raft.QuorumState) [main]
2025-06-17 07:43:32,091 INFO [RaftManager id=0] Completed transition to Unattached(epoch=25, votedKey=null, voters=[4, 5, 6], electionTimeoutMs=3439, highWatermark=Optional.empty) from null (org.apache.kafka.raft.QuorumState) [main]
2025-06-17 07:43:32,094 INFO [kafka-0-raft-outbound-request-thread]: Starting (org.apache.kafka.raft.KafkaNetworkChannel$SendThread) [kafka-0-raft-outbound-request-thread]
2025-06-17 07:43:32,094 INFO [kafka-0-raft-io-thread]: Starting (org.apache.kafka.raft.KafkaRaftClientDriver) [kafka-0-raft-io-thread]
2025-06-17 07:43:32,113 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:32,114 INFO [BrokerServer id=0] Starting broker (kafka.server.BrokerServer) [main]
2025-06-17 07:43:32,116 INFO Configuring EnvVar config provider: {} (io.strimzi.kafka.EnvVarConfigProvider) [main]
2025-06-17 07:43:32,116 INFO Closing EnvVar config provider (io.strimzi.kafka.EnvVarConfigProvider) [main]
2025-06-17 07:43:32,175 INFO [broker-0-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [broker-0-ThrottledChannelReaper-Fetch]
2025-06-17 07:43:32,175 INFO [broker-0-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [broker-0-ThrottledChannelReaper-Produce]
2025-06-17 07:43:32,176 INFO [broker-0-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [broker-0-ThrottledChannelReaper-Request]
2025-06-17 07:43:32,178 INFO [broker-0-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [broker-0-ThrottledChannelReaper-ControllerMutation]
2025-06-17 07:43:32,269 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:32,370 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:32,394 INFO RemoteIndexCache directory /var/lib/kafka/data-0/kafka-log0/remote-log-index-cache already exists. Re-using the same directory. (org.apache.kafka.storage.internals.log.RemoteIndexCache) [main]
2025-06-17 07:43:32,394 INFO RemoteIndexCache starts up in 1 ms. (org.apache.kafka.storage.internals.log.RemoteIndexCache) [main]
2025-06-17 07:43:32,395 INFO [remote-log-index-cleaner]: Starting (org.apache.kafka.storage.internals.log.RemoteIndexCache$1) [remote-log-index-cleaner]
2025-06-17 07:43:32,468 INFO [BrokerServer id=0] Waiting for controller quorum voters future (kafka.server.BrokerServer) [main]
2025-06-17 07:43:32,468 INFO [BrokerServer id=0] Finished waiting for controller quorum voters future (kafka.server.BrokerServer) [main]
2025-06-17 07:43:32,470 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:32,570 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:32,586 INFO [broker-0-to-controller-forwarding-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) [broker-0-to-controller-forwarding-channel-manager]
2025-06-17 07:43:32,670 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:32,670 INFO [client-metrics-reaper]: Starting (org.apache.kafka.server.util.timer.SystemTimerReaper$Reaper) [client-metrics-reaper]
2025-06-17 07:43:32,676 INFO [RaftManager id=0] Registered the listener org.apache.kafka.image.loader.MetadataLoader@710231875 (org.apache.kafka.raft.KafkaRaftClient) [kafka-0-raft-io-thread]
2025-06-17 07:43:32,771 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:32,871 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:32,971 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:33,072 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:33,078 INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) [main]
2025-06-17 07:43:33,172 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:33,272 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:33,372 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:33,473 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:33,573 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:33,673 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:33,773 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:33,871 INFO [RaftManager id=0] Attempting durable transition to FollowerState(fetchTimeoutMs=2000, epoch=25, leader=4, leaderEndpoints=Endpoints(endpoints={ListenerName(CONTROLPLANE-9090)=-kafka-cluster-controller-4.-kafka-cluster-kafka-brokers.-kafka.svc/<unresolved>:9090}), voters=[4, 5, 6], highWatermark=Optional.empty, fetchingSnapshot=Optional.empty) from Unattached(epoch=25, votedKey=null, voters=[4, 5, 6], electionTimeoutMs=3439, highWatermark=Optional.empty) (org.apache.kafka.raft.QuorumState) [kafka-0-raft-io-thread]
2025-06-17 07:43:33,874 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:33,896 INFO [RaftManager id=0] Completed transition to FollowerState(fetchTimeoutMs=2000, epoch=25, leader=4, leaderEndpoints=Endpoints(endpoints={ListenerName(CONTROLPLANE-9090)=-kafka-cluster-controller-4.-kafka-cluster-kafka-brokers.-kafka.svc/<unresolved>:9090}), voters=[4, 5, 6], highWatermark=Optional.empty, fetchingSnapshot=Optional.empty) from Unattached(epoch=25, votedKey=null, voters=[4, 5, 6], electionTimeoutMs=3439, highWatermark=Optional.empty) (org.apache.kafka.raft.QuorumState) [kafka-0-raft-io-thread]
2025-06-17 07:43:33,974 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:33,992 INFO [broker-0-to-controller-forwarding-channel-manager]: Recorded new KRaft controller, from now on will use node -kafka-cluster-controller-4.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local:9090 (id: 4 rack: null) (kafka.server.NodeToControllerRequestThread) [broker-0-to-controller-forwarding-channel-manager]
2025-06-17 07:43:34,074 INFO [MetadataLoader id=0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:34,082 INFO [RaftManager id=0] High watermark set to Optional[LogOffsetMetadata(offset=14942, metadata=Optional.empty)] for the first time for epoch 25 (org.apache.kafka.raft.FollowerState) [kafka-0-raft-io-thread]
2025-06-17 07:43:34,084 INFO [MetadataLoader id=0] handleLoadSnapshot(00000000000000007856-0000000025): incrementing HandleLoadSnapshotCount to 1. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:34,172 INFO [MetadataLoader id=0] handleLoadSnapshot(00000000000000007856-0000000025): generated a metadata delta between offset -1 and this snapshot in 88483 us. (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:34,173 INFO [MetadataLoader id=0] maybePublishMetadata(SNAPSHOT): The loader is still catching up because we have loaded up to offset 7855, but the high water mark is 14942 (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:34,175 INFO [SocketServer listenerType=BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(REPLICATION-9091) (kafka.network.SocketServer) [main]
2025-06-17 07:43:34,175 INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) [main]
2025-06-17 07:43:34,178 INFO [SocketServer listenerType=BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAIN-9092) (kafka.network.SocketServer) [main]
2025-06-17 07:43:34,178 INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) [main]
2025-06-17 07:43:34,189 INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin) [main]
2025-06-17 07:43:34,276 INFO [SocketServer listenerType=BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(TLS-9093) (kafka.network.SocketServer) [main]
2025-06-17 07:43:34,289 INFO [broker-0-to-controller-alter-partition-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) [broker-0-to-controller-alter-partition-channel-manager]
2025-06-17 07:43:34,289 INFO [broker-0-to-controller-alter-partition-channel-manager]: Recorded new KRaft controller, from now on will use node -kafka-cluster-controller-4.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local:9090 (id: 4 rack: null) (kafka.server.NodeToControllerRequestThread) [broker-0-to-controller-alter-partition-channel-manager]
2025-06-17 07:43:34,371 INFO [MetadataLoader id=0] maybePublishMetadata(LOG_DELTA): The loader finished catching up to the current high water mark of 14942 (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:34,375 INFO [MetadataLoader id=0] InitializeNewPublishers: initializing SnapshotGenerator with a snapshot at offset 14941 (org.apache.kafka.image.loader.MetadataLoader) [kafka-0-metadata-loader-event-handler]
2025-06-17 07:43:34,388 INFO [broker-0-to-controller-directory-assignments-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) [broker-0-to-controller-directory-assignments-channel-manager]
2025-06-17 07:43:34,388 INFO [broker-0-to-controller-directory-assignments-channel-manager]: Recorded new KRaft controller, from now on will use node -kafka-cluster-controller-4.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local:9090 (id: 4 rack: null) (kafka.server.NodeToControllerRequestThread) [broker-0-to-controller-directory-assignments-channel-manager]
2025-06-17 07:43:34,466 INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Produce]
2025-06-17 07:43:34,468 INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Fetch]
2025-06-17 07:43:34,468 INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-DeleteRecords]
2025-06-17 07:43:34,469 INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-ElectLeader]
2025-06-17 07:43:34,470 INFO [ExpirationReaper-0-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-RemoteFetch]
2025-06-17 07:43:34,485 INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Heartbeat]
2025-06-17 07:43:34,485 INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-Rebalance]
2025-06-17 07:43:34,586 INFO Unable to read the broker epoch in /var/lib/kafka/data-0/kafka-log0. (kafka.log.LogManager) [main]
2025-06-17 07:43:34,586 INFO [broker-0-to-controller-heartbeat-channel-manager]: Starting (kafka.server.NodeToControllerRequestThread) [broker-0-to-controller-heartbeat-channel-manager]
2025-06-17 07:43:34,587 INFO [broker-0-to-controller-heartbeat-channel-manager]: Recorded new KRaft controller, from now on will use node -kafka-cluster-controller-4.-kafka-cluster-kafka-brokers.-kafka.svc.cluster.local:9090 (id: 4 rack: null) (kafka.server.NodeToControllerRequestThread) [broker-0-to-controller-heartbeat-channel-manager]
2025-06-17 07:43:34,590 INFO [BrokerLifecycleManager id=0] Incarnation CN6v319-ReSimRYzngFE8g of broker 0 in cluster 7wDfh2TwQPayDWMrH1i0UQ is now STARTING. (kafka.server.BrokerLifecycleManager) [broker-0-lifecycle-manager-event-handler]
2025-06-17 07:43:34,593 INFO [StandardAuthorizer 0] set super.users=User:CN=-kafka-cluster-kafka-exporter,O=io.strimzi,User:CN=-kafka-cluster-entity-topic-operator,O=io.strimzi,User:CN=cluster-operator,O=io.strimzi,User:CN=-kafka-cluster-entity-user-operator,O=io.strimzi,User:CN=-kafka-cluster-cruise-control,O=io.strimzi,User:CN=-kafka-cluster-kafka,O=io.strimzi, default result=DENIED (org.apache.kafka.metadata.authorizer.StandardAuthorizerData) [main]
2025-06-17 07:43:34,679 INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [ExpirationReaper-0-AlterAcls]
2025-06-17 07:43:34,768 INFO [BrokerLifecycleManager id=0] Successfully registered broker 0 with broker epoch 14943 (kafka.server.BrokerLifecycleManager) [broker-0-lifecycle-manager-event-handler]
2025-06-17 07:43:34,770 INFO RemoteStorageManagerConfig values:
	chunk.size = 5242880
	compression.enabled = false
	compression.heuristic.enabled = false
	custom.metadata.fields.include = []
	encryption.enabled = false
	key.prefix =
	key.prefix.mask = false
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	storage.backend.class = class io.aiven.kafka.tieredstorage.storage.azure.AzureBlobStorage
	upload.rate.limit.bytes.per.second = null
 (io.aiven.kafka.tieredstorage.config.RemoteStorageManagerConfig) [main]
2025-06-17 07:43:34,775 INFO AzureBlobStorageConfig values:
	azure.account.key = null
	azure.account.name = null
	azure.connection.string = [hidden]
	azure.container.name = -devddc-kafka-strorage
	azure.endpoint.url = null
	azure.sas.token = null
	azure.upload.block.size = 26214400
 (io.aiven.kafka.tieredstorage.storage.azure.AzureBlobStorageConfig) [main]
2025-06-17 07:43:34,792 ERROR Invalid connection string. (com.azure.storage.blob.BlobServiceClientBuilder) [main]
2025-06-17 07:43:34,792 INFO [BrokerServer id=0] Transition from STARTING to STARTED (kafka.server.BrokerServer) [main]
2025-06-17 07:43:34,794 ERROR [BrokerServer id=0] Fatal error during broker startup. Prepare to shutdown (kafka.server.BrokerServer) [main]
java.lang.IllegalArgumentException: Invalid connection string.
	at com.azure.storage.common.implementation.connectionstring.ConnectionSettings.fromConnectionString(ConnectionSettings.java:81)
	at com.azure.storage.common.implementation.connectionstring.StorageConnectionString.create(StorageConnectionString.java:105)
	at com.azure.storage.blob.BlobServiceClientBuilder.connectionString(BlobServiceClientBuilder.java:323)
	at io.aiven.kafka.tieredstorage.storage.azure.AzureBlobStorage.configure(AzureBlobStorage.java:60)
	at io.aiven.kafka.tieredstorage.config.RemoteStorageManagerConfig.storage(RemoteStorageManagerConfig.java:318)
	at io.aiven.kafka.tieredstorage.RemoteStorageManager.configure(RemoteStorageManager.java:151)
	at org.apache.kafka.server.log.remote.storage.ClassLoaderAwareRemoteStorageManager.lambda$configure$0(ClassLoaderAwareRemoteStorageManager.java:48)
	at org.apache.kafka.server.log.remote.storage.ClassLoaderAwareRemoteStorageManager.withClassLoader(ClassLoaderAwareRemoteStorageManager.java:65)
	at org.apache.kafka.server.log.remote.storage.ClassLoaderAwareRemoteStorageManager.configure(ClassLoaderAwareRemoteStorageManager.java:47)
	at kafka.log.remote.RemoteLogManager.configureRSM(RemoteLogManager.java:353)
	at kafka.log.remote.RemoteLogManager.startup(RemoteLogManager.java:396)
	at kafka.server.BrokerServer.$anonfun$startup$18(BrokerServer.scala:438)
	at kafka.server.BrokerServer.$anonfun$startup$18$adapted(BrokerServer.scala:425)
	at scala.Option.foreach(Option.scala:437)
	at kafka.server.BrokerServer.startup(BrokerServer.scala:425)
	at kafka.server.KafkaRaftServer.$anonfun$startup$2(KafkaRaftServer.scala:99)
	at kafka.server.KafkaRaftServer.$anonfun$startup$2$adapted(KafkaRaftServer.scala:99)
	at scala.Option.foreach(Option.scala:437)
	at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:99)
	at kafka.Kafka$.main(Kafka.scala:112)
	at kafka.Kafka.main(Kafka.scala)
2025-06-17 07:43:34,796 INFO [BrokerServer id=0] Transition from STARTED to SHUTTING_DOWN (kafka.server.BrokerServer) [main]
2025-06-17 07:43:34,796 INFO [BrokerServer id=0] shutting down (kafka.server.BrokerServer) [main]
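The log confirms the `EnvVarConfigProvider` is loaded, but the resolved value is rejected by the Azure SDK, which suggests `AZURE_CONNECTION_STRING` is empty or malformed inside the broker pods. Since the broker pods are now created from a `KafkaNodePool` rather than directly from the `Kafka` resource, one possible fix is to declare the env var on the node pool's pod template so the provider can resolve it there. A minimal sketch, under these assumptions: the Secret name `azure-storage-secret` and key `connection-string` are placeholders (not taken from the cluster above), and your Strimzi version supports `valueFrom` in container env templates:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: broker
  labels:
    # must match the Kafka resource's metadata.name
    strimzi.io/cluster: -kafka-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
  template:
    kafkaContainer:
      env:
        # Placeholder Secret name/key. The Secret value must be a complete
        # Azure storage connection string, e.g.:
        # DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net
        - name: AZURE_CONNECTION_STRING
          valueFrom:
            secretKeyRef:
              name: azure-storage-secret
              key: connection-string
```

With the env var present on the pool's pods, `${env:AZURE_CONNECTION_STRING}` in the `tieredStorage` config should resolve to the full connection string instead of an empty value. You can verify with `kubectl exec <broker-pod> -- printenv AZURE_CONNECTION_STRING` before restarting.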
