Repartition errors in Hyades pods #2072
Unanswered
cartermitchellLM asked this question in Q&A
Replies: 1 comment 1 reply
We are troubleshooting the database connectivity of our Hyades k8s cluster, and in the logs of our Repo Meta Analyzer and Vulnerability Analyzer we are seeing the following error, which seems to be related to our Redpanda (Kafka) instance:

```
2026-03-05 18:56:47,394 ERROR [org.apa.kaf.cli.adm.int.DeleteRecordsHandler] (kafka-admin-client-thread | hyades-repository-meta-analyzer-3131bda9-a1f8-4703-a9f5-5d1cc6a01660-admin) [AdminClient clientId=hyades-repository-meta-analyzer-3131bda9-a1f8-4703-a9f5-5d1cc6a01660-admin] DeleteRecords request for topic partition hyades-repository-meta-analyzer-command-by-purl-coordinates-repartition-1 failed due to an unexpected error OFFSET_OUT_OF_RANGE
```

The error repeats in groups of three, referring to repartitions 0 through 2. What are some ways to clear out the message queue, or to get Redpanda to play nicely again with the other services?

---

This happens because Kafka Streams tries to clean up after itself and purge records from the repartition topic once they are no longer needed. It seems that either Redpanda doesn't properly support the deletion of records, or your instance lacks permission to delete records (which should not be the case unless you explicitly configured ACLs in Redpanda). Functionality-wise, this is not a big concern: the repartition topics should have limited retention, so the broker will eventually remove old records anyway. To get rid of the noise, you have at least two options:
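One way to quiet these purge errors, assuming the analyzers let you pass extra Kafka Streams properties through their configuration, is to raise `repartition.purge.interval.ms` (a real Kafka Streams setting, available since Kafka 3.2 via KIP-811) so that DeleteRecords requests are issued far less often. A minimal sketch of such an override:

```properties
# Hypothetical override snippet; the mechanism for injecting extra
# Kafka Streams properties into the analyzer pods is an assumption here.
# Issue repartition-topic purge (DeleteRecords) requests every 30 minutes
# instead of the default 30 seconds.
repartition.purge.interval.ms=1800000
```

This does not stop the purge attempts entirely, but it reduces the log noise while the broker's own retention continues to expire old records.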
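If you prefer to lean on broker-side retention instead, you can inspect and tighten the retention of the repartition topics directly with `rpk`. A sketch, assuming `rpk` is reachable from inside the Redpanda pod and using the topic name from the error log above:

```shell
# Print the topic's current configuration (retention.ms, cleanup.policy, ...):
rpk topic describe hyades-repository-meta-analyzer-command-by-purl-coordinates-repartition -c

# Tighten retention so old records expire sooner (one hour in this sketch):
rpk topic alter-config hyades-repository-meta-analyzer-command-by-purl-coordinates-repartition \
  --set retention.ms=3600000
```

Shorter retention means the broker discards already-processed repartition records on its own, so the failed client-side purges matter even less.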