
out-of-memory error on large dataset #161

Open
@John-A-Davies

Description

Describe the bug
I tried to view a large dataset through the explorer-mvs UI at https://hostname:7554/ui/v1/explorer-mvs/#/, which resulted in an out-of-memory (OOM) error:

SIGINFO     Dump Event "systhrow" (00040000) Detail "java/lang/OutOfMemoryError" "Java heap space" received

The following files were created in the Zowe runtime directory:

257360 Snap.20200408.145112.66253.0004.trc
8596631 heapdump.20200408.145112.66253.0002.phd
1488765 javacore.20200408.145112.66253.0003.txt

The core dump shows the command line:

IBM_JAVA_COMMAND_LINE=java -Xms16m -Xmx512m -Dibm.serversocket.recover=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/tmp -Xquickstart -Dserver.port=8547 -Dserver.ssl.keyAlias=localhost -Dserver.ssl.keyStore=/global/zowe/keystore/localhost/localhost.keystore.p12 -Dserver.ssl.keyStorePassword=password -Dserver.ssl.keyStoreType=PKCS12 -Dserver.compression.enabled=true -Dzosmf.httpsPort=10443 -Dzosmf.ipAddress=S0W1.DAL-EBIS.IHOST.COM -Dspring.main.banner-mode=off -jar /S0W1/tmp/usr/lpp/zowe/components/files-api/bin/data-sets-api-server-0.2.9-boot.jar

so it looks like the out-of-memory error came from the files API service (data-sets-api-server), which is running with a 512 MB heap cap (-Xmx512m).
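If the server reads the entire dataset into memory before building the HTTP response, heap consumption grows with dataset size and a 512 MB cap is easy to exceed. A minimal sketch of the two read strategies, purely illustrative and not taken from the data-sets-api-server source:

```java
// Hypothetical illustration, not the actual data-sets-api-server code:
// reading a whole dataset into one byte array needs heap for the full
// content (plus any copies made while building the HTTP response), so
// anything approaching the -Xmx512m cap will throw OutOfMemoryError.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadStrategies {

    // Loads the entire content into heap at once - fails on large datasets.
    static byte[] readAllAtOnce(Path source) throws IOException {
        return Files.readAllBytes(source); // ~N bytes of heap for N bytes of data
    }

    // Streams in fixed-size chunks - heap use stays constant regardless of size.
    static void streamInChunks(Path source, OutputStream sink) throws IOException {
        try (InputStream in = Files.newInputStream(source)) {
            byte[] buffer = new byte[64 * 1024]; // 64 KiB window, not the full file
            int n;
            while ((n = in.read(buffer)) != -1) {
                sink.write(buffer, 0, n);
            }
        }
    }
}
```

Raising -Xmx would only move the failure threshold; a streaming read keeps heap use constant no matter how large the dataset is.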

To Reproduce
Steps to reproduce the behavior (a browser-free reproduction sketch follows the list):

  1. Start Zowe.
  2. Open the URL https://hostname:7554/ui/v1/explorer-mvs/#/.
  3. Select a large dataset (minimum size TBD).
  4. Observe the new dump files in the runtime directory.
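The UI can be taken out of the loop by fetching the dataset content directly through the gateway. A sketch in Java, assuming the /api/v1/datasets/{dsn}/content route used by explorer-mvs (verify the exact path and auth scheme against your Zowe version); HLQ.LARGE.DATASET and the credentials are placeholders:

```java
// Browser-free reproduction sketch. Fetching a sufficiently large dataset
// through the gateway route should drive the same OOM in the
// data-sets-api-server. TLS trust setup for self-signed gateway
// certificates is omitted here for brevity.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class FetchLargeDataset {
    public static void main(String[] args) throws Exception {
        String credentials = Base64.getEncoder()
                .encodeToString("userid:password".getBytes()); // placeholder credentials
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://hostname:7554/api/v1/datasets/HLQ.LARGE.DATASET/content"))
                .header("Authorization", "Basic " + credentials)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```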

Expected behavior
Large datasets should be served without triggering OutOfMemoryError events and the associated Snap, heap-dump, and javacore files.
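One direction a fix could take, assuming the current handler materializes the full content before responding, is to stream the response body so heap use is bounded by the copy buffer rather than the dataset size. A minimal Spring sketch; DatasetContentController and ContentService are hypothetical names, not the project's actual classes:

```java
// Minimal Spring sketch of a streaming handler. StreamingResponseBody
// writes the body incrementally, so heap use stays bounded by the chunk
// size rather than growing with the dataset.
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.StreamingResponseBody;

import java.io.InputStream;

@RestController
public class DatasetContentController {

    private final ContentService contentService; // hypothetical source of dataset bytes

    DatasetContentController(ContentService contentService) {
        this.contentService = contentService;
    }

    @GetMapping(value = "/datasets/{dsn}/content", produces = MediaType.TEXT_PLAIN_VALUE)
    public StreamingResponseBody content(@PathVariable String dsn) {
        return outputStream -> {
            try (InputStream in = contentService.open(dsn)) {
                in.transferTo(outputStream); // copies in small chunks internally
            }
        };
    }

    interface ContentService {
        InputStream open(String dsn); // hypothetical: opens dataset content as a stream
    }
}
```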

Desktop (please complete the following information):

  • Browser: Internet Explorer
