Conat keyed-KV updates cause unbounded CoreStream array growth #8702

@haraldschilly

Description

Summary
We’re seeing steady memory growth in project containers (100 MB → 1 GB over days) even with low activity. In a local test setup we see that CoreStream’s raw/messages arrays grow steadily even while the key set is stable. This points to a leak in the Conat keyed-KV pipeline: old keyed rows are deleted on the persist server without emitting delete events, so clients never shrink their arrays.

Call path / data flow

  1. Browser keeps file open → SyncDoc.initInterestLoop() calls client.touchOpenFile() every CONAT_OPEN_FILE_TOUCH_INTERVAL (30s).
    • packages/sync/editor/generic/sync-doc.ts
  2. The project service’s touchOpenFilesLoop() also runs every 30s and calls openFiles.setBackend(path, id).
    • packages/project/conat/open-files.ts
  3. openFiles uses DKO → DKV → CoreStream:
    • packages/project/conat/open-files.ts → packages/project/conat/sync.ts → packages/conat/sync/open-files.ts → packages/conat/sync/dko.ts → packages/conat/sync/dkv.ts → packages/conat/sync/core-stream.ts
  4. On the server, keyed updates are implemented by deleting all previous rows for that key, then inserting a new row (see the sketch after this list).
    • packages/conat/persist/storage.ts (set(), keyed path)
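
To make the effect of this flow concrete, here is a minimal self-contained model (illustrative only; ToyKeyedStream and its methods are not the real CoreStream/DKV API) showing how a single stable key still grows the client’s arrays when server-side deletes are never propagated:

```ts
// Toy model of the keyed-KV pipeline above (illustrative only; not the real
// CoreStream/DKV API). It shows how one stable key still grows the client's
// arrays when server-side deletes are never propagated.
type Msg = { seq: number; key: string; value: unknown };

class ToyKeyedStream {
  raw: Msg[] = [];              // analogous to CoreStream's raw[]/messages[]
  kv = new Map<string, Msg>();  // keyed view: latest message per key
  private seq = 0;

  // Client-side handling of a keyed publish: append and update the key index.
  set(key: string, value: unknown): void {
    const msg = { seq: ++this.seq, key, value };
    this.raw.push(msg);
    this.kv.set(key, msg);
    // The server deletes the previous row for `key`, but never emits a delete
    // event, so nothing ever removes the superseded entry from raw[].
  }

  // What would keep raw[] bounded if the server did emit deletes.
  applyDelete(seq: number): void {
    this.raw = this.raw.filter((m) => m.seq !== seq);
  }
}

// Simulate one day of the 30s touch loop for a single open file:
const stream = new ToyKeyedStream();
for (let i = 0; i < 2880; i++) {
  stream.set("path/to/file.md", { time: i * 30_000 });
}
console.log(stream.kv.size, stream.raw.length); // 1 key, 2880 raw entries
```

In the real code the delete half exists (CoreStream.processPersistentDelete()), but nothing triggers it for keyed updates.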

Observed behavior

  • The client-side CoreStream holds all past messages in raw[]/messages[]. For keyed updates, old rows are deleted on the server, but the client never hears about those deletes, so the arrays grow forever.
  • gcKv() only blanks buffers; it doesn’t shrink the arrays (illustrated below).
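
A trivial illustration of that second point (hypothetical code, not the actual gcKv() implementation): blanking slots frees the payloads, but the arrays keep one entry per historical update; actually shrinking requires knowing which entries were superseded.

```ts
// Hypothetical illustration -- not the real gcKv(). Blanking a slot drops the
// payload but keeps the array entry (and its metadata) alive forever.
const raw: (Uint8Array | undefined)[] = Array.from({ length: 1000 }, () =>
  new Uint8Array(256),
);

// gcKv-style pass: blank everything except the newest entry for the key.
for (let i = 0; i < raw.length - 1; i++) {
  raw[i] = undefined;
}
console.log(raw.length); // still 1000 slots

// Actually shrinking means removing superseded entries, which the client can
// only do if it learns their sequence numbers via delete events.
const compacted = raw.filter((m) => m !== undefined);
console.log(compacted.length); // 1
```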

Repro + logging
To confirm, I instrumented open-files to log DKV/CoreStream stats every 15s (debug), then opened a single file and edited briefly after a fresh restart.
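
The instrumentation was roughly shaped like this (a sketch; only the logged field names match the real output below, and the way the numbers are collected is paraphrased, not the actual DKV/CoreStream internals):

```ts
// Sketch of the 15s debug instrumentation. Only the logged field names are
// taken from the real output; how the numbers are collected is paraphrased.
import { memoryUsage } from "node:process";

interface DkvStats {
  rawLength: number;      // CoreStream raw[] length
  messagesLength: number; // CoreStream messages[] length
  kvLength: number;       // size of the key/value view
}

function logOpenFilesStats(
  openDocs: number,
  entries: number,
  dkv: DkvStats,
  debug: (...args: unknown[]) => void,
): void {
  const rssMiB = Math.round(memoryUsage().rss / (1024 * 1024));
  debug("open-files stats", { openDocs, stats: { entries, dkv }, rssMiB });
}

// Wired up with something like:
//   setInterval(() => logOpenFilesStats(numOpenDocs(), dkv.length,
//     collectDkvStats(dkv), logger.debug), 15_000);
// where collectDkvStats() reads rawLength/messagesLength/kvLength off the
// underlying DKV/CoreStream instances.
```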

Example logs (note that entries stays stable while raw/messages grow):

```
2026-01-16T14:15:25.806Z ... open-files stats {
  openDocs: 4,
  stats: { entries: 7, dkv: { rawLength: 42, messagesLength: 42, kvLength: 30 } },
  rssMiB: 216
}
2026-01-16T14:15:55.806Z ... open-files stats {
  openDocs: 2,
  stats: { entries: 7, dkv: { rawLength: 48, messagesLength: 48, kvLength: 30 } },
  rssMiB: 218
}
2026-01-16T14:16:55.807Z ... open-files stats {
  openDocs: 2,
  stats: { entries: 7, dkv: { rawLength: 56, messagesLength: 56, kvLength: 30 } },
  rssMiB: 214
}
2026-01-16T14:17:25.807Z ... open-files stats {
  openDocs: 2,
  stats: { entries: 7, dkv: { rawLength: 60, messagesLength: 60, kvLength: 30 } },
  rssMiB: 214
}
```

Root cause
In packages/conat/persist/storage.ts, set() for keyed messages deletes old rows without using RETURNING and without calling emitDelete(). Clients never see delete events for keyed updates, so CoreStream.processPersistentDelete() is never triggered and raw/messages arrays grow unbounded.
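
A sketch of the direction a fix could take, assuming a SQLite-backed store behind storage.ts (the Db interface, table/column names, and emitDelete() are placeholders for the real internals; DELETE ... RETURNING needs SQLite ≥ 3.35):

```ts
// Sketch only: capture the sequence numbers of superseded rows and emit
// delete events for them, so clients can run processPersistentDelete().
// Table/column names and emitDelete() are placeholders.
interface Db {
  prepare(sql: string): {
    all(...params: unknown[]): unknown[];
    run(...params: unknown[]): unknown;
  };
}

function setKeyed(
  db: Db,
  emitDelete: (seqs: number[]) => void,
  key: string,
  value: Uint8Array,
): void {
  // Delete previous rows for this key *and* find out which ones they were.
  const deleted = db
    .prepare("DELETE FROM messages WHERE key = ? RETURNING seq")
    .all(key) as { seq: number }[];

  // Insert the new row for the key, as before.
  db.prepare("INSERT INTO messages (key, value) VALUES (?, ?)").run(key, value);

  // New: tell subscribers which rows were removed so they can shrink
  // raw[]/messages[] instead of accumulating stale entries.
  if (deleted.length > 0) {
    emitDelete(deleted.map((r) => r.seq));
  }
}
```

Presumably, once these delete events reach clients, the existing CoreStream.processPersistentDelete() path would keep raw[]/messages[] bounded.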

Impact

  • Each keyed update adds a new entry to the CoreStream arrays that is never removed.
  • Open-files touches every open file every 30s, i.e. roughly 2,880 keyed updates per open file per day. Over days/weeks this accumulates and matches the observed RSS growth.

Notes
Other Conat uses aren’t as chatty as open-files, which explains why the leak shows up primarily in project processes.
