
Serve v0.31.0 SNAPSHOT Crashes with Heap Space Errors in Docker #2786

@Null1515

Description


Hey,

The problem I'm facing:

I’m using Serve v0.31.0 SNAPSHOT in Docker with:

  • 4GB shared memory
  • GPU enabled

I’m trying to use Serve as a dynamic batcher for requests from my Java client to avoid performance issues. However, Serve keeps crashing with heap space errors, which causes the Docker container to stop.
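For context, each request from my client side is just a plain HTTP call; below is a minimal sketch of what the Java client does (assuming DJL Serving's TorchServe-compatible `POST /predictions/{model_name}` endpoint on the default port 8080 — the model name and payload file here are placeholders, not my actual setup):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class ServeClient {

    public static void main(String[] args) throws Exception {
        // Assumption: DJL Serving's TorchServe-compatible inference API,
        // POST /predictions/{model_name}, on the default port 8080.
        // "my_model" and "input.bin" are placeholders for this sketch.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/predictions/my_model"))
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("input.bin")))
                .build();

        // Each call sends a single request; any batching happens on the
        // server side, where concurrent requests are grouped into batches.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```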

Logs:
2025-03-31 152801 INFO ModelServer.txt

I’ve tried adjusting the memory allocation in Docker and experimenting with the WLM (workload manager) settings.
Error on WLM:
java.util.concurrent.ExecutionException: ai.djl.serving.wlm.util.WlmException: Receiver class ai.djl.pytorch.engine.PtNDArray does not define or inherit an implementation of the resolved method 'abstract java.nio.ByteBuffer toByteBuffer(boolean)' of interface ai.djl.ndarray.NDArray.
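From the message, this looks like it could be a classpath mismatch: the api jar declares the abstract NDArray.toByteBuffer(boolean) method, but the PyTorch engine jar providing PtNDArray seems to come from a build that does not implement it. Here is a small, purely standard-Java check I can run (the class names are taken from the stack trace; nothing else is specific to Serve) to see which jars the two classes are loaded from:

```java
import ai.djl.ndarray.NDArray;
import ai.djl.pytorch.engine.PtNDArray;

public class ClasspathCheck {

    public static void main(String[] args) {
        // Print the jar each class was loaded from. If the api jar that
        // declares NDArray.toByteBuffer(boolean) and the pytorch-engine jar
        // that provides PtNDArray come from different DJL versions, the two
        // locations will not match.
        System.out.println("NDArray loaded from:   "
                + NDArray.class.getProtectionDomain().getCodeSource().getLocation());
        System.out.println("PtNDArray loaded from: "
                + PtNDArray.class.getProtectionDomain().getCodeSource().getLocation());
    }
}
```

If the two locations point at different DJL versions, aligning them would presumably make this particular error go away, but I'm not sure that is actually the cause here.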

Regarding these issues, I have two questions:

  1. Are there known memory optimizations for Serve in Docker?
  2. Is the WLM error related to a PyTorch engine compatibility issue?

I’d greatly appreciate any guidance! Sorry for any confusion, and thank you for your time.
