Logs are lost when Logstash reloads due to lack of acknowledgment from the input-tcp plugin. #234

@sasikiranvaddi

Description

Logstash information:

  1. Logstash version (e.g. bin/logstash --version): 8.17.0
  2. Logstash installation source (e.g. built from source, with a package manager: DEB/RPM, expanded from tar or zip archive, docker): Built from Source
  3. How is Logstash being run (e.g. as a service/service manager: systemd, upstart, etc. Via command line, docker/kubernetes): Kubernetes
  4. How was the Logstash Plugin installed: Built from source together with Logstash.

JVM (e.g. java -version): NA

OS version (uname -a if on a Unix-like system): NA

Description of the problem including expected versus actual behavior:

Log events are sent from a producer to Logstash over the TCP input. When the Logstash pipeline reloads, some of these events can be lost: the producer writes an event to the TCP socket and the TCP stack acknowledges it at the transport level, but Logstash never reads the event from the socket because the pipeline is reloading, and it sends no application-level acknowledgment of its own.

The next event the producer sends then fails with a broken-pipe or connection-reset-by-peer error. A producer that implements a retry mechanism can resend that failed event, but the earlier event, already acknowledged by the TCP stack, is silently lost. Because there is no Logstash-level acknowledgment mechanism for TCP, the producer has no way to know that the event was never processed. Can an acknowledgment mechanism be added to the TCP input so that producers can detect that events were not processed by Logstash? A minimal sketch of the failure mode follows.
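The sketch below illustrates why the producer cannot detect the loss (the host, port, and payloads are hypothetical). The key point is that a successful `sendall()` only means the bytes reached the peer's TCP stack, not that Logstash read them:

```python
import socket

# Hypothetical endpoint for illustration only.
sock = socket.create_connection(("logstash.example", 5000))
try:
    # sendall() succeeding means the kernel accepted the bytes and the
    # peer's TCP stack ACKed them. It does NOT mean Logstash has read
    # the event from its receive buffer. If the pipeline reloads before
    # that read happens, event A is dropped and this call still raises
    # no error.
    sock.sendall(b'{"message": "event A"}\n')

    # Only a *subsequent* write observes the failure, typically as a
    # broken pipe or connection reset once the old listener is gone.
    sock.sendall(b'{"message": "event B"}\n')
except (BrokenPipeError, ConnectionResetError):
    # A retry here can resend event B, but event A is already lost:
    # without an application-level ack from Logstash, the producer
    # never learns it was not processed.
    pass
finally:
    sock.close()
```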

Steps to reproduce:

NA

Provide logs (if relevant):
NA
