Headless llmster memory leak #502

@mistrjirka

Description

Which version of LM Studio?
Headless lms / llmster
Observed bundle path: ~/.lmstudio/llmster/0.0.6-1/...

Which operating system?
Arch Linux

What is the bug?
One of the internal Node helper processes spawned by llmster keeps growing in memory usage over time, especially swap, until it puts the whole machine under heavy memory pressure.

At the time, I was running Qwen 3.5 9B Q6. However, the growth does not look like model weights or ordinary inference memory. The main offender appears to be an internal server/helper process under:

~/.lmstudio/.internal/utils/node

and in this case it is specifically loading:

~/.lmstudio/llmster/0.0.6-1/.bundle/lib/llmworker.js

So although a model was loaded, the issue seems to be in the server-side worker / internal process handling rather than in the model weights themselves or normal inference allocation.

At the time I captured it, that process had grown to about:

  • Swap: 3.3 GiB
  • USS: 477.2 MiB
  • PSS: 480.3 MiB
  • RSS: 488.9 MiB

Other related processes were much smaller, so this one worker seems to be the main problem.

Relevant process tree at the time:

UID    PID   PPID  CMD
jirka  1358     1  llmster
jirka  1414  1358  /home/jirka/.lmstudio/.internal/utils/node ...
jirka  7054  1358  /home/jirka/.lmstudio/.internal/utils/node ...
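For anyone else hitting this, a quick way to locate the offending worker is to scan /proc for command lines mentioning llmworker.js. A minimal Linux-only sketch (the helper name `find_workers` is my own, not part of LM Studio):

```python
import os

def find_workers(needle: str = "llmworker.js"):
    """Scan /proc for processes whose command line contains `needle`."""
    matches = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue  # only numeric entries are PIDs
        try:
            with open(f"/proc/{entry}/cmdline", "rb") as f:
                # cmdline is NUL-separated; join with spaces for matching
                cmdline = f.read().replace(b"\0", b" ").decode(errors="replace")
        except OSError:
            continue  # process exited or is inaccessible
        if needle in cmdline:
            matches.append((int(entry), cmdline.strip()))
    return matches

for pid, cmd in find_workers():
    print(pid, cmd[:80])
```

On my machine this returns exactly the PID 7054 process shown above.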

The offending child process command line was:

/home/jirka/.lmstudio/.internal/utils/node -e
function connectPort(port) { ... }
process.parentPort = connectPort(0);
process.rcPort = connectPort(1);
process.resourcesPath = undefined;
require("/home/jirka/.lmstudio/llmster/0.0.6-1/.bundle/lib/llmworker.js");
/home/jirka/.lmstudio/llmster/0.0.6-1/.bundle/lib/llmworker.js

From /proc/7054/smaps_rollup:

Rss:           467816 kB
Pss:           459077 kB
Private_Clean: 148832 kB
Private_Dirty: 307080 kB
Swap:         3472300 kB
SwapPss:      3472300 kB
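The smaps_rollup numbers above can also be collected programmatically, which makes it easier to sample them repeatedly. A minimal sketch that parses /proc/&lt;pid&gt;/smaps_rollup into a dict of kB values (Linux-only; demoed on the current process):

```python
def memory_rollup(pid="self"):
    """Return memory counters (Rss, Pss, Swap, ...) in kB for a PID."""
    stats = {}
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            parts = line.split()
            # skip the header line; counter lines look like "Rss:  1234 kB"
            if len(parts) == 3 and parts[2] == "kB":
                stats[parts[0].rstrip(":")] = int(parts[1])
    return stats

print(memory_rollup())
```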

Expected behavior: this worker should not keep growing into multi-gigabyte swap usage during normal use.

Screenshots
I can attach screenshots from htop / process monitoring if helpful.

Logs
Relevant memory snapshot from smem -r -k -s swap -t:

PID 7054
Command: /home/jirka/.lmstudio/.internal/utils/node ... llmworker.js
Swap: 3.3G
USS: 477.2M
PSS: 480.3M
RSS: 488.9M

earlyoom log from around the same time:

…mem avail:   901 of  1565 MiB (57.57%), swap free: 41121 of 48383 MiB (84.99%)
bře 13 00:17:20 minimrd earlyoom[455]: mem avail:   896 of  1573 MiB (57.00%), swap free: 41122 of 48383 MiB (84.99%)
bře 13 00:18:20 minimrd earlyoom[455]: mem avail:   891 of  1559 MiB (57.19%), swap free: 41124 of 48383 MiB (85.00%)
bře 13 00:19:20 minimrd earlyoom[455]: mem avail:  1121 of  1393 MiB (80.51%), swap free: 40728 of 48383 MiB (84.18%)
bře 13 00:20:20 minimrd earlyoom[455]: mem avail:  1041 of  1444 MiB (72.06%), swap free: 40865 of 48383 MiB (84.46%)
bře 13 00:21:21 minimrd earlyoom[455]: mem avail:  1026 of  1434 MiB (71.54%), swap free: 40870 of 48383 MiB (84.47%)
bře 13 00:22:21 minimrd earlyoom[455]: mem avail:  1032 of  1441 MiB (71.62%), swap free: 40870 of 48383 MiB (84.47%)
bře 13 00:23:21 minimrd earlyoom[455]: mem avail:  1027 of  1437 MiB (71.48%), swap free: 40871 of 48383 MiB (84.47%)
bře 13 00:24:21 minimrd earlyoom[455]: mem avail:  1035 of  1445 MiB (71.60%), swap free: 40872 of 48383 MiB (84.47%)
bře 13 00:25:21 minimrd earlyoom[455]: mem avail:  1026 of  1436 MiB (71.45%), swap free: 40871 of 48383 MiB (84.47%)
bře 13 00:26:22 minimrd earlyoom[455]: mem avail:  1046 of  1457 MiB (71.76%), swap free: 40873 of 48383 MiB (84.48%)
bře 13 00:27:22 minimrd earlyoom[455]: mem avail:  1026 of  1440 MiB (71.25%), swap free: 40875 of 4

PID 1414  /home/jirka/.lmstudio/.internal/utils/node ...   Swap: 61.3M
PID 1358  llmster                                           Swap: 152.3M

To Reproduce

  1. Start llmster
  2. Load and run Qwen 3.5 9B Q6
  3. Use the server normally for some time
  4. Observe the internal ~/.lmstudio/.internal/utils/node worker processes
  5. One of the workers loading llmworker.js keeps growing in memory usage, especially swap
  6. Eventually this causes severe memory pressure on the machine
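The repro above can be confirmed numerically with a small sampler that polls the worker's Swap value over time and shows it only ever climbs. A hypothetical Linux-only sketch (pass the worker PID found via the process tree; `watch_swap` is my own helper name):

```python
import time

def watch_swap(pid, interval_s=60, samples=5):
    """Sample the Swap counter of a PID from smaps_rollup over time."""
    readings = []
    for _ in range(samples):
        swap_kb = None
        with open(f"/proc/{pid}/smaps_rollup") as f:
            for line in f:
                if line.startswith("Swap:"):
                    swap_kb = int(line.split()[1])  # value is in kB
                    break
        readings.append(swap_kb)
        print(f"Swap: {swap_kb} kB")
        time.sleep(interval_s)
    return readings

# e.g. watch_swap(7054) for the worker; "self" works too for a quick demo
print(watch_swap("self", interval_s=1, samples=2))
```

In my case consecutive readings kept increasing across hours of normal use, never dropping back after requests finished.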
