es-data nodes exceeding Xmx memory #233
Description
Hello,
We are running Elasticsearch 5.6.0 in our Kubernetes 1.10.3 cluster with Docker version 1.13.1.
The es-data nodes constantly exceed the configured Xmx value.
Settings:
ES_JAVA_OPTS=-Xms2048m -Xmx2048m
Kubernetes memory limit: 4096M
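In case it helps, here is roughly how those two settings are applied in the es-data container spec. This is a trimmed sketch, not the full manifest; field layout and names are illustrative, only the image, the ES_JAVA_OPTS value, and the memory limit match what we actually run:

```yaml
# Trimmed, illustrative container spec for the es-data pods.
# Only image, ES_JAVA_OPTS and the memory limit reflect our real settings.
containers:
  - name: es-data
    image: quay.io/pires/docker-elasticsearch-kubernetes:5.6.0
    env:
      - name: ES_JAVA_OPTS
        value: "-Xms2048m -Xmx2048m"
    resources:
      limits:
        memory: "4096M"
```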
Full JVM arguments:
-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Xms2048m, -Xmx2048m, -Des.path.home=/elasticsearch
The es-data nodes grow beyond 4GB and are then killed.
The container we run is:
quay.io/pires/docker-elasticsearch-kubernetes:5.6.0
Any idea why this is happening? Any advice on how to keep the es-data nodes within their memory limits?
Thanks a ton for any advice/help!
- Felix