Description
Elasticsearch version (bin/elasticsearch --version): 7.1.1
Plugins installed: N/A
JVM version (java -version): 12.0.1
OS version (uname -a if on a Unix-like system):
Description of the problem including expected versus actual behavior:
When a node trips the parent circuit breaker, an exception event like the one below is logged. This log message can mislead the reader into interpreting that the transport request itself used 28.5GB, when in fact that figure is the parent's total usage; the transport request only accounted for [2764/2.6kb].
[2019-10-02T11:57:47,676][DEBUG][o.e.a.a.c.n.i.TransportNodesInfoAction] [host01] failed to execute on node [rB1Yz5ZnR0SYK38trALhdQ]
org.elasticsearch.transport.RemoteTransportException: [host02][192.168.24.1:9300][cluster:monitor/nodes/info[n]]
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too
large, data for [<transport_request>] would be [30681474684/28.5gb], which is larger than
the limit of [30493933568/28.3gb], real usage: [30681471920/28.5gb], new bytes reserved:
[2764/2.6kb]
at org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.checkParentLimit(HierarchyCircuitBreakerService.java:343) ~[elasticsearch-7.1.1.jar:7.1.1]
It would be good to refine the message so that each figure is explicitly attributed to what it measures, e.g.:
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too
large, data for [parent] would be [30681474684/28.5gb], which is larger than
the limit of [30493933568/28.3gb], real usage: [30681471920/28.5gb], new bytes reserved
for [<transport_request>]: [2764/2.6kb]
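For illustration, the proposed wording could be produced along these lines. This is a minimal sketch, not the actual HierarchyCircuitBreakerService code; the class, method, and parameter names here are hypothetical, and the byte formatter only approximates Elasticsearch's ByteSizeValue output (which truncates, rather than rounds, the decimal):

```java
import java.util.Locale;

public class BreakerMessageSketch {

    // Hypothetical helper approximating ByteSizeValue-style "[raw/human]" output.
    // Note the truncation (Math.floor), matching how 2764 bytes renders as 2.6kb.
    static String bytesAndHuman(long bytes) {
        final String human;
        if (bytes >= 1L << 30) {
            double gb = Math.floor(bytes / (double) (1L << 30) * 10) / 10;
            human = String.format(Locale.ROOT, "%.1fgb", gb);
        } else if (bytes >= 1L << 10) {
            double kb = Math.floor(bytes / (double) (1L << 10) * 10) / 10;
            human = String.format(Locale.ROOT, "%.1fkb", kb);
        } else {
            human = bytes + "b";
        }
        return "[" + bytes + "/" + human + "]";
    }

    // Proposed message: the "would be" total is attributed to [parent],
    // while the newly reserved bytes are attributed to the request label.
    static String message(String label, long parentUsed, long newBytes, long limit) {
        long wouldBe = parentUsed + newBytes;
        return "[parent] Data too large, data for [parent] would be " + bytesAndHuman(wouldBe)
            + ", which is larger than the limit of " + bytesAndHuman(limit)
            + ", real usage: " + bytesAndHuman(parentUsed)
            + ", new bytes reserved for [" + label + "]: " + bytesAndHuman(newBytes);
    }

    public static void main(String[] args) {
        // Values taken from the log event above.
        System.out.println(message("<transport_request>", 30681471920L, 2764L, 30493933568L));
    }
}
```

With the values from the log event above, this yields the message shown, making it unambiguous that 28.5gb is the parent total and 2.6kb is what the transport request reserved.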
In a previous version, the message also included a per-breaker usage breakdown, as shown below. It would be great to keep that breakdown, since it identifies and highlights the areas of major consumption:
[2019-05-29T10:48:26,985][WARN ][r.suppressed ] [hostA] path: /_bulk, params: {}
org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for
[<http_request>] would be [18014463104/16.7gb], which is larger than the limit of
[18014457036/16.7gb], usages [request=0/0b, fielddata=8317714554/7.7gb,
in_flight_requests=1125/1kb, accounting=9696747425/9gb]