**Description**
We are using Fluent Bit version 3.2.4 for Kubernetes logging, where Fluent Bit runs as a DaemonSet and Fluentd acts as the aggregator, with the following configuration.
Tail (input) section:
```yaml
tail:
  enable: true
  refreshIntervalSeconds: 10.0
  path: "/var/log/containers/*.log"
  skipLongLines: true
  readFromHead: false
  storageType: "filesystem"
  bufferMaxSize: "50MB"
  bufferChunkSize: "10MB"
  pauseOnChunksOverlimit: "off"
```
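For context, this is roughly what we expect the tail section above to render to as a raw Fluent Bit `[INPUT]` block; it is only a sketch, since the chart generates the final config, and the `Tag` value is our assumption:
```
[INPUT]
    Name                              tail
    # assumed tag; the chart generates the actual value
    Tag                               kube.*
    Path                              /var/log/containers/*.log
    Refresh_Interval                  10
    Skip_Long_Lines                   On
    Read_from_Head                    False
    Buffer_Chunk_Size                 10MB
    Buffer_Max_Size                   50MB
    storage.type                      filesystem
    storage.pause_on_chunks_overlimit off
```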
Filter section:
```yaml
filter:
  kubernetes:
    enable: true
    labels: false
    annotations: false
  grep:
    enable: true
    grepKeyword: "regex"
    condition: "and"
    regex_expression:
      - regex: "regex log /(ERROR|NOT_ENOUGH_REPLICAS|INFO|Exception|org.redisson.client.RedisException)/"
  containerd:
    enable: true
  multilineParser:
    enable: true
    key: "log"
    parsers:
```
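For reference, a sketch of the `[FILTER]` blocks we believe this corresponds to; the `Match` pattern and the mapping of `condition: "and"` to `Logical_Op` are assumptions on our part:
```
[FILTER]
    Name        kubernetes
    Match       kube.*
    Labels      Off
    Annotations Off

[FILTER]
    Name       grep
    Match      kube.*
    Logical_Op and
    Regex      log (ERROR|NOT_ENOUGH_REPLICAS|INFO|Exception|org.redisson.client.RedisException)

[FILTER]
    Name                  multiline
    Match                 kube.*
    multiline.key_content log
    # the multiline.parser list is not shown in the values above
```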
For back-pressure handling we are using the following config, with `storage.type` set to `filesystem` as above:
```yaml
storage:
  path: /tmp/log/flb-storage/
  maxChunksUp: 40
```
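A sketch of the `[SERVICE]` storage settings we believe this produces. The `storage.backlog.mem_limit` line is not something we set explicitly; we list it only because we understand it caps the memory used when loading backlog chunks, and the 5M shown is just an example value:
```
[SERVICE]
    storage.path              /tmp/log/flb-storage/
    storage.max_chunks_up     40
    # not set by us today; example value only
    storage.backlog.mem_limit 5M
```
Our understanding is that with filesystem storage, `storage.max_chunks_up` bounds how many chunks are kept "up" in memory at a time, so it is one of the knobs we have been looking at; please correct us if that reading is wrong.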
Everything works fine and the filtered data gets ingested uniformly, but we have observed that memory consumption in the Fluent Bit pods keeps piling up over time, and after 3-4 days the pods get restarted due to OOM.
Below is the resource configuration of the Fluent Bit pods:
```yaml
resources:
  limits:
    cpu: "500m"
    memory: "256Mi"
  requests:
    cpu: "10m"
    memory: "25Mi"
```
Any thoughts on why this is happening, or any configuration that can help control Fluent Bit memory usage?
We have around 30 Fluent Bit pods sending data to 4 Fluentd aggregators, which in turn forward the data to our monitoring system.
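On the output side, Fluent Bit ships to the Fluentd aggregators roughly like the following forward-output sketch; the host, port, and `storage.total_limit_size` values here are placeholders, not our exact settings:
```
[OUTPUT]
    Name                     forward
    Match                    *
    # hypothetical service name and port, not our real values
    Host                     fluentd.logging.svc.cluster.local
    Port                     24224
    # caps the filesystem buffer this output may accumulate; example value
    storage.total_limit_size 500M
```
As far as we understand, `storage.total_limit_size` only caps the on-disk buffer per output rather than in-memory usage, so we are not sure it is relevant to the OOM.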
{"log":"YOUR LOG MESSAGE HERE","stream":"stdout","time":"2018-06-11T14:37:30.681701731Z"}
- Steps to reproduce the problem:
**Expected behavior**
Fluent Bit memory usage stays stable within the configured 256Mi limit instead of growing over time until the pod is OOM-killed.
**Screenshots**
<!--- If applicable, add screenshots to help explain your problem. -->
**Your Environment**
* Version used: Fluent Bit 3.2.4 (DaemonSet), with Fluentd as the aggregator
* Configuration: see the tail, filter, and storage sections above
* Environment name and version (e.g. Kubernetes? What version?): Kubernetes
* Server type and version:
* Operating System and version:
* Filters and plugins: tail input; kubernetes, grep, and multiline filters; filesystem storage
**Additional context**
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->