Description
Logstash information:
Please include the following information:
- Logstash version (e.g. `bin/logstash --version`): 8.17.0
- Logstash installation source (e.g. built from source, with a package manager: DEB/RPM, expanded from tar or zip archive, docker): built from source
- How is Logstash being run (e.g. as a service/service manager: systemd, upstart, etc., via command line, docker/kubernetes): Kubernetes
- How was the Logstash plugin installed: built from source together with Logstash

JVM (e.g. `java -version`): NA
OS version (`uname -a` if on a Unix-like system): NA
Description of the problem including expected versus actual behavior:
Log events are sent from a producer to the Logstash TCP input. When the Logstash pipeline is reloaded, events can be lost: the producer writes an event to the TCP socket and the kernel's TCP stack acknowledges the bytes, but Logstash never reads the event from the socket because the pipeline is reloading, and Logstash itself sends no acknowledgement. Only the *next* event the producer sends fails with a broken pipe or "connection reset by peer" error; a producer that implements retries can resend that later event, but the earlier event, already ACKed at the TCP level, is silently lost. Because there is no Logstash-level acknowledgement mechanism on the TCP input, producers cannot detect this loss. Can an application-level ack mechanism be added to the TCP input, so that producers know whether Logstash actually processed their events?
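To make the failure mode concrete, here is a minimal, self-contained sketch (not Logstash code; the server is a stand-in for a pipeline that is mid-reload). It shows that a producer's `sendall()` succeeds as soon as the kernel buffers and TCP-ACKs the bytes, even though the receiving application never reads them:

```python
import socket
import threading
import time

def silent_server(state):
    """Accept a connection but never read from it, mimicking a
    Logstash TCP input whose pipeline is reloading: the kernel
    still ACKs incoming bytes at the TCP level."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # ephemeral port
    srv.listen(1)
    state["port"] = srv.getsockname()[1]
    state["ready"].set()
    conn, _ = srv.accept()
    time.sleep(0.5)             # hold the socket open without reading
    conn.close()
    srv.close()

state = {"ready": threading.Event()}
threading.Thread(target=silent_server, args=(state,), daemon=True).start()
state["ready"].wait()

client = socket.socket()
client.connect(("127.0.0.1", state["port"]))
# sendall() returns once the kernel has buffered the bytes; the
# producer has no way to know the application ever processed them.
client.sendall(b'{"msg": "event 1"}\n')
sent_ok = True
print("send succeeded although the server never read the event")
client.close()
```

Only an application-level acknowledgement (the receiver explicitly confirming each event or batch after processing, as the Beats/lumberjack protocol does) closes this gap; TCP's own ACKs cannot.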
Steps to reproduce:
NA
Provide logs (if relevant):
NA