Describe the bug
I have fluentd deployed as a DaemonSet in a Kubernetes cluster, sending logs using the forward output plugin. I have shared the config below. In the netstat output of fluentd, I see a lot of connections stuck in CLOSE_WAIT. How can I resolve this?
tcp 1 0 fluentd-operator-:<port> 1.1.1.1:<port> CLOSE_WAIT
To Reproduce
This issue is seen when the remote side has closed the connection.
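For reference, CLOSE_WAIT is the state a TCP socket enters after the peer sends its FIN but the local process has not yet called close(). A minimal standalone Python sketch (illustrative only, not fluentd code) that produces the state on a loopback connection:

```python
import socket

# Server side: listen on an ephemeral loopback port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

# Client side: connect, then let the server close first.
cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()
conn.close()            # remote side closes -> client receives FIN

data = cli.recv(1024)   # b'' signals EOF from the peer; until cli.close()
print(data)             # is called, netstat shows this socket in CLOSE_WAIT

cli.close()             # releasing the socket clears the CLOSE_WAIT entry
srv.close()
```

This mirrors the report: the remote forward target closes the connection, and the socket lingers in CLOSE_WAIT until the local side closes it.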
Expected behavior
Ideally, fluentd should handle closing the connection and release the socket.
Your Environment
- Fluentd version: 1.13.3
- Package version:
- Operating system: VMware Photon OS: V3.0
- Kernel version: 4.18.0-372.36.1.el8_6.x86_64
Your Configuration
<match **>
  @type forward
  keepalive true
  recover_wait 20s
  require_ack_response false
  send_timeout 90s
  keepalive_timeout 30s
  <buffer>
    @type file
    path buffer1.buf
    flush_mode interval
    flush_thread_count 8
    flush_interval 2
    retry_type periodic
    retry_wait 10s
    retry_max_times 1
    chunk_limit_size 16m
    total_limit_size 32m
    overflow_action drop_oldest_chunk
    disable_chunk_backup true
  </buffer>
</match>
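One possible mitigation (an assumption, not a confirmed fix) is to keep keepalive_timeout shorter than the remote peer's idle timeout, so fluentd retires idle connections before the server drops them. A hedged sketch, where the 60s remote idle timeout, the host, and the port are hypothetical placeholders:

```
<match **>
  @type forward
  keepalive true
  # Assumption: the remote target closes idle connections after ~60s,
  # so a shorter keepalive_timeout lets fluentd close first.
  keepalive_timeout 30s
  <server>
    host 1.1.1.1   # placeholder, matching the netstat output above
    port 24224     # hypothetical port (fluentd forward default)
  </server>
</match>
```

If the remote side's idle timeout is unknown, capturing it (e.g. from the target's own config) is the first step before tuning this value.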
Your Error Log
[error]: #9 failed to flush the buffer, and hit limit for retries. dropping all chunks in the buffer queue. retry_times=1 records=8139 error_class=Errno::EPIPE error="Broken pipe - sendfile"
[error]: #9 suppressed same stacktrace
[error]: #7 failed to flush the buffer, and hit limit for retries. dropping all chunks in the buffer queue. retry_times=1 records=2457 error_class=Errno::ETIMEDOUT error="Connection timed out - sendfile"
[error]: #7 suppressed same stacktrace
Additional context
No response