Description
A v8.6 user on the forums reported OOMEs, and when they analysed the heap dump they found that a high fraction of their 3GiB heap was used by the fielddata cache. The breaker stats agree:

GET /_nodes/_all/stats/breaker?filter_path=nodes.*.breakers.fielddata
{
  "nodes": {
    "K6V95L0pR36L-_99LIapdw": {
      "breakers": {
        "fielddata": {
          "limit_size_in_bytes": 1288490188,
          "limit_size": "1.1gb",
          "estimated_size_in_bytes": 2766456144,
          "estimated_size": "2.5gb",
          "overhead": 1.03,
          "tripped": 0
        }
      }
    }
  }
}
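For anyone else debugging something like this, the cat fielddata API should show a per-node, per-field breakdown of what the cache is holding, which helps identify the offending fields (the output is of course cluster-specific):

GET /_cat/fielddata?v=true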
They have worked around this problem by setting indices.fielddata.cache.size: 1gb (sketched below). Note that the breaker's estimated size (2.5gb) is already well past its limit (1.1gb) yet it has never tripped; I think it's a bug for the fielddata cache to grow without bound by default like this.
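For reference, indices.fielddata.cache.size is a static setting, so the workaround goes in elasticsearch.yml on each data node and takes effect after a restart; the 1gb value here is just the cap this user chose, not a recommendation:

# elasticsearch.yml (per data node): bound the fielddata cache so entries are evicted
indices.fielddata.cache.size: 1gb

Existing fielddata can also be dropped immediately, without a restart, via the clear cache API:

POST /_cache/clear?fielddata=true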