Description
Self-Hosted Version
25.2.0
CPU Architecture
x86_64
Docker Version
28.0.1
Docker Compose Version
2.33.1
Machine Specification
- My system meets the minimum system requirements of Sentry
Steps to Reproduce
When the application sends about 600k transactions per hour to Sentry, the transactions-consumer container errors out.
I have tried resetting the consumer offsets:
docker compose exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 --all-groups --all-topics --reset-offsets --to-latest --execute
I have also tried stopping the containers, deleting the Kafka volume, and re-running ./install.sh.
This works for a while, but then transactions stop appearing in Sentry again.
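For context, this is how I check how far the consumers have fallen behind before resetting; a minimal sketch, assuming the stock kafka service name from the self-hosted docker-compose.yml:
# List all consumer groups, then show committed offsets vs. log-end offsets (LAG) for each.
docker compose exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 --list
docker compose exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 --all-groups --describe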
My Sentry .env file includes:
SENTRY_EVENT_RETENTION_DAYS=30
From docker-compose.yml:
kafka:
  <<: *restart_policy
  image: "confluentinc/cp-kafka:7.6.1"
  # ports:
  #   - 9092
  environment:
    # https://docs.confluent.io/platform/current/installation/docker/config-reference.html#cp-kakfa-example
    KAFKA_PROCESS_ROLES: "broker,controller"
    KAFKA_CONTROLLER_QUORUM_VOTERS: "[email protected]:29093"
    KAFKA_CONTROLLER_LISTENER_NAMES: "CONTROLLER"
    KAFKA_NODE_ID: "1001"
    CLUSTER_ID: "MkU3OEVBNTcwNTJENDM2Qk"
    KAFKA_LISTENERS: "PLAINTEXT://0.0.0.0:29092,INTERNAL://0.0.0.0:9093,EXTERNAL://0.0.0.0:9092,CONTROLLER://0.0.0.0:29093"
    KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://127.0.0.1:29092,INTERNAL://kafka:9093,EXTERNAL://kafka:9092"
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "PLAINTEXT:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT"
    KAFKA_INTER_BROKER_LISTENER_NAME: "PLAINTEXT"
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
    KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS: "1"
    KAFKA_LOG_CLEANER_ENABLE: "true"
    KAFKA_LOG_CLEANUP_POLICY: "delete"
    KAFKA_LOG_RETENTION_HOURS: "12"
    KAFKA_MESSAGE_MAX_BYTES: "700000000" # ~700 MB max message size
    KAFKA_MAX_REQUEST_SIZE: "600000000" # ~600 MB max request size
    # KAFKA_MAX_RECORDS_PER_USER_OP:
    CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
    KAFKA_LOG4J_LOGGERS: "kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,state.change.logger=WARN"
    KAFKA_LOG4J_ROOT_LOGLEVEL: "DEBUG"
    KAFKA_TOOLS_LOG4J_LOGLEVEL: "DEBUG"
  ulimits:
    nofile:
      soft: 8192
      hard: 8192
  volumes:
    - "sentry-kafka:/var/lib/kafka/data"
    - "sentry-kafka-log:/var/lib/kafka/log"
    - "sentry-secrets:/etc/kafka/secrets"
  healthcheck:
    <<: *healthcheck_defaults
    test: ["CMD-SHELL", "nc -z localhost 9092"]
    interval: 10s
    timeout: 10s
    retries: 30
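Since KAFKA_LOG_RETENTION_HOURS is set to 12, it may be worth confirming what retention the broker actually applies; a minimal sketch, where the topic name transactions is an assumption (substitute whichever topic the failing consumer reads):
# Print the effective configuration (including retention.ms) for one topic.
# The topic name "transactions" is an assumption.
docker compose exec kafka kafka-configs --bootstrap-server kafka:9092 --entity-type topics --entity-name transactions --describe --all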
From relay/config.yml:
limits:
  max_concurrent_requests: 100000
  max_concurrent_queries: 1000
  max_thread_count: 800
Expected Result
Transactions are ingested and appear in Sentry.
Actual Result
File "/.venv/lib/python3.13/site-packages/arroyo/backends/kafka/consumer.py", line 422, in poll
transactions-consumer-1 | raise OffsetOutOfRange(str(error))
transactions-consumer-1 | arroyo.errors.OffsetOutOfRange: KafkaError{code=_AUTO_OFFSET_RESET,val=-140,str="fetch failed due to requested offset not available on the broker: Broker: Offset out of range (broker 1001)"}
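My reading of this error is that the consumer's committed offset now lies before the broker's log start offset, i.e. the 12-hour retention deleted segments faster than the consumer drained them. A sketch to confirm, under the same topic-name assumption as above, by comparing the earliest offset still on the broker with the CURRENT-OFFSET reported by kafka-consumer-groups --describe:
# --time -2 prints the earliest offset still available per partition (-1 would print the latest).
# The topic name "transactions" is an assumption.
docker compose exec kafka kafka-get-offsets --bootstrap-server kafka:9092 --topic transactions --time -2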
Event ID
No response