Describe the bug
If a logfile is written to quickly enough, it can be rotated several times per second. The throughput counter, however, lives in IOHandler, which is re-created on every file rotation. As a consequence, read_bytes_limit_per_second is not respected when the log source is sufficiently spammy.
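The effect described above can be illustrated with a minimal sketch (in Python, not Fluentd's actual Ruby code; the class and method names here are purely illustrative): a per-second byte counter stored on the handler instance enforces the limit correctly for one handler, but restarting the handler inside the same second resets the counter and lets the limit be exceeded.

```python
import time

class IOHandlerSketch:
    """Illustrative stand-in for a per-file I/O handler: the byte counter
    is an instance variable, so it starts from zero whenever the handler
    is re-created (e.g. on file rotation)."""

    def __init__(self, limit_bytes_per_sec):
        self.limit = limit_bytes_per_sec
        self.bytes_read = 0                    # lost when the handler is recreated
        self.window_start = time.monotonic()

    def may_read(self, n):
        """Return True if reading n more bytes stays within this second's budget."""
        now = time.monotonic()
        if now - self.window_start >= 1.0:     # new one-second window
            self.bytes_read = 0
            self.window_start = now
        if self.bytes_read + n > self.limit:
            return False                       # budget for this window exhausted
        self.bytes_read += n
        return True

# A single handler enforces the limit within one second:
h = IOHandlerSketch(100_000)
assert h.may_read(100_000) is True
assert h.may_read(1) is False                  # throttled, as intended

# But if rotation re-creates the handler within the same second,
# the fresh counter allows another full budget, bypassing the limit:
h = IOHandlerSketch(100_000)                   # "rotation" -> new handler
assert h.may_read(100_000) is True             # limit effectively doubled
```

With rotations happening several times per second, each re-created handler grants a fresh budget, so the effective throughput scales with the rotation rate rather than with the configured limit.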
To Reproduce
I deployed the following pod in a Kubernetes cluster:
apiVersion: v1
kind: Pod
metadata:
  name: logflooder
  namespace: default
spec:
  containers:
  - image: ubuntu:bionic
    command: ["bash"]
    args: ["-c", "while true; do cat /etc/passwd; done"]
    imagePullPolicy: IfNotPresent
    name: fluentd
    resources:
      limits:
        cpu: "5"
        memory: 400Mi
      requests:
        cpu: "5"
        memory: 400Mi
The log throughput seemed to be effectively constrained by the CPU limit rather than by the value of read_bytes_limit_per_second. The "detected rotation of" log message appears several times per second.
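One way to quantify the rotation rate is to bucket the "detected rotation" lines in Fluentd's own log by second. A sketch, assuming Fluentd's default log line format (the sample lines below are illustrative, not taken from a real deployment):

```python
from collections import Counter

# Illustrative lines in Fluentd's default log format.
sample_log = """\
2021-11-10 12:00:01 +0000 [info]: detected rotation of /var/log/containers/logflooder.log
2021-11-10 12:00:01 +0000 [info]: detected rotation of /var/log/containers/logflooder.log
2021-11-10 12:00:01 +0000 [info]: detected rotation of /var/log/containers/logflooder.log
2021-11-10 12:00:02 +0000 [info]: detected rotation of /var/log/containers/logflooder.log
"""

# Count rotation messages per second: the first two whitespace-separated
# fields of each line are the date and the time (to one-second resolution).
rotations = Counter(
    " ".join(line.split()[:2])
    for line in sample_log.splitlines()
    if "detected rotation" in line
)
for second, count in rotations.most_common():
    print(count, second)
# With this sample: 3 rotations at 12:00:01, 1 at 12:00:02
```

Run against the real Fluentd log, a count well above one per second for the same file confirms the rotation-driven reset described above.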
Expected behavior
I'd expect the total log throughput to be bounded by read_bytes_limit_per_second regardless of file rotations.
Your Environment
- Fluentd version: 1.14.2
- Operating system: Amazon Linux 2
- Kernel version: 4.14.252-195.483.amzn2.x86_64
Your Configuration
...
<source>
  @type tail
  @id in_tail_container_logs
  path "/var/log/containers/*.log"
  pos_file "/var/log/fluentd-containers.log.pos"
  read_bytes_limit_per_second 100k
  tag "kubernetes.*"
  exclude_path ["/var/log/containers/fluentd-*"]
  read_from_head true
  <parse>
    @type "json"
    time_format "%Y-%m-%dT%H:%M:%S.%NZ"
    unmatched_lines
    time_type string
  </parse>
</source>
...