Bucket refill fatal error on broker start #706

@artV829

Description

What happened?

Setting rsm.config.upload.rate.limit.bytes.per.second higher than 1000000000 in the broker configs causes a fatal error on Kafka broker startup.

After setting rsm.config.upload.rate.limit.bytes.per.second=1000000001:

[2025-07-04 08:10:06,460] ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$)
java.lang.IllegalArgumentException: 1 token/nanosecond is not permitted refill rate, because highest supported rate is 1 token/nanosecond
at io.github.bucket4j.BucketExceptions.tooHighRefillRate(BucketExceptions.java:176)
at io.github.bucket4j.BandwidthBuilder$BandwidthBuilderImpl.setRefill(BandwidthBuilder.java:323)
at io.github.bucket4j.BandwidthBuilder$BandwidthBuilderImpl.refillGreedy(BandwidthBuilder.java:255)
at io.aiven.kafka.tieredstorage.transform.RateLimitedInputStream.lambda$rateLimitBucket$0(RateLimitedInputStream.java:52)
at io.github.bucket4j.local.LocalBucketBuilder.addLimit(LocalBucketBuilder.java:56)
at io.aiven.kafka.tieredstorage.transform.RateLimitedInputStream.rateLimitBucket(RateLimitedInputStream.java:49)
at io.aiven.kafka.tieredstorage.RemoteStorageManager.lambda$configure$1(RemoteStorageManager.java:181)
at java.base/java.util.OptionalInt.ifPresent(OptionalInt.java:165)
at io.aiven.kafka.tieredstorage.RemoteStorageManager.configure(RemoteStorageManager.java:180)
at org.apache.kafka.server.log.remote.storage.ClassLoaderAwareRemoteStorageManager.lambda$configure$0(ClassLoaderAwareRemoteStorageManager.java:48)
at org.apache.kafka.server.log.remote.storage.ClassLoaderAwareRemoteStorageManager.withClassLoader(ClassLoaderAwareRemoteStorageManager.java:65)
at org.apache.kafka.server.log.remote.storage.ClassLoaderAwareRemoteStorageManager.configure(ClassLoaderAwareRemoteStorageManager.java:47)
at kafka.log.remote.RemoteLogManager.configureRSM(RemoteLogManager.java:353)
at kafka.log.remote.RemoteLogManager.startup(RemoteLogManager.java:396)
at kafka.server.KafkaServer.$anonfun$startup$24(KafkaServer.scala:579)
at kafka.server.KafkaServer.$anonfun$startup$24$adapted(KafkaServer.scala:569)
at scala.Option.foreach(Option.scala:437)
at kafka.server.KafkaServer.startup(KafkaServer.scala:569)
at kafka.Kafka$.main(Kafka.scala:112)
at kafka.Kafka.main(Kafka.scala)
[2025-07-04 08:10:06,461] INFO [KafkaServer id=223] shutting down (kafka.server.KafkaServer)

What did you expect to happen?

The broker starting up normally.

Being unable to set rsm.config.upload.rate.limit.bytes.per.second higher than 1000000000 severely limits upload throughput to S3.
If raising the limit is not possible, an update to the configuration documentation describing the hard limits would do.
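For context, the limit appears to come from Bucket4j's refill model: per the stack trace, RateLimitedInputStream builds the bucket with refillGreedy, which spreads refilled tokens over the period at nanosecond resolution, so the highest representable rate is 1 token per nanosecond, i.e. 10^9 tokens (here, bytes) per second. A minimal sketch of that arithmetic, assuming a one-second refill period as in the stack trace (not an exact reproduction of Bucket4j internals):

```java
public class RefillRateCheck {
    public static void main(String[] args) {
        // The configured rate that triggers the startup failure
        long tokensPerSecond = 1_000_000_001L;
        // Greedy refill places tokens on a nanosecond timeline, so the
        // ceiling is one token per nanosecond = 1e9 tokens per second
        long nanosPerSecond = 1_000_000_000L;
        boolean supported = tokensPerSecond <= nanosPerSecond;
        System.out.println("rate supported: " + supported); // prints "rate supported: false"
    }
}
```

This would explain why exactly 1000000000 works while 1000000001 is rejected with "highest supported rate is 1 token/nanosecond".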

What else do we need to know?

Platform: Ubuntu 24.04.1 LTS
Kafka version: 3.9.0 via Confluent Platform 7.9.0 (cluster controlled by ZooKeeper)
Plugin: Aiven-Open tiered-storage-for-apache-kafka v1.0.0
