Summary
We’re seeing steady memory growth in project containers (100 MB → 1 GB over a few days) even with low activity. In a local test setup, `CoreStream`’s `raw`/`messages` arrays grow steadily even while the key set stays stable. This points to a leak in the Conat keyed-KV pipeline: old keyed rows are deleted on the persist server without emitting delete events, so clients never shrink their arrays.
Call path / data flow
- Browser keeps a file open → `SyncDoc.initInterestLoop()` calls `client.touchOpenFile()` every `CONAT_OPEN_FILE_TOUCH_INTERVAL` (30s); see the sketch after this list. (`packages/sync/editor/generic/sync-doc.ts`)
- The project service also calls `touchOpenFilesLoop()` every 30s and does `openFiles.setBackend(path, id)`. (`packages/project/conat/open-files.ts`)
- `openFiles` uses DKO → DKV → CoreStream: `packages/project/conat/open-files.ts` → `packages/project/conat/sync.ts` → `packages/conat/sync/open-files.ts` → `packages/conat/sync/dko.ts` → `packages/conat/sync/dkv.ts` → `packages/conat/sync/core-stream.ts`
- On the server, keyed updates are implemented by deleting all previous rows for that key and then inserting a new row. (`packages/conat/persist/storage.ts`, `set()`, keyed path)
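To make the write pattern concrete, here is a minimal sketch of the kind of periodic keyed write these loops produce; the `KeyedKV` interface and names are illustrative, not the actual CoCalc API:

```ts
// Illustrative sketch: each open file triggers a keyed write every 30s.
// The value changes (timestamp), but the key stays the same, so the
// server-side key set is stable while the underlying stream keeps growing.

const TOUCH_INTERVAL_MS = 30_000; // stands in for CONAT_OPEN_FILE_TOUCH_INTERVAL

interface KeyedKV {
  set(key: string, value: unknown): void;
}

function startTouchLoop(openFiles: KeyedKV, path: string): () => void {
  const timer = setInterval(() => {
    // One keyed update per interval; on the persist server this becomes
    // "delete old rows for this key, insert a new row".
    openFiles.set(path, { open: true, time: Date.now() });
  }, TOUCH_INTERVAL_MS);
  return () => clearInterval(timer);
}
```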
Observed behavior
- Client side: `CoreStream` holds all past messages in `raw[]`/`messages[]`. For keyed updates, old rows are deleted on the server, but the client never hears about those deletes, so the arrays grow forever (see the sketch below). `gcKv()` only blanks buffers; it does not shrink the arrays.
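To illustrate why the arrays grow while the key set stays flat, here is a minimal model of the client-side bookkeeping; class and field names are illustrative, not the actual `CoreStream` implementation:

```ts
// Illustrative model of the client-side accumulation: every message from
// the persist server is appended; entries are only removed when a delete
// event for their seq arrives. If the server never emits deletes for keyed
// overwrites, raw[]/messages[] grow without bound even though the key set
// (kv) stays the same size.

interface StoredMessage {
  seq: number;
  key?: string;
  payload: Uint8Array;
}

class KeyedStreamModel {
  raw: Uint8Array[] = [];
  messages: StoredMessage[] = [];
  kv = new Map<string, StoredMessage>();

  onMessage(msg: StoredMessage): void {
    this.raw.push(msg.payload);
    this.messages.push(msg);
    if (msg.key != null) {
      // The key set stays stable, but the previous entry for this key is
      // still referenced by raw[] and messages[].
      this.kv.set(msg.key, msg);
    }
  }

  onDelete(seqs: Set<number>): void {
    // Only runs if the server actually emits delete events.
    this.messages = this.messages.filter((m) => !seqs.has(m.seq));
    // raw[] would need the same pruning, matched by index or seq.
  }
}
```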
Repro + logging
To confirm this, I instrumented `open-files` to log DKV/CoreStream stats every 15s (at debug level), then opened a single file after a fresh restart and edited it briefly.
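A minimal sketch of that instrumentation, assuming direct access to the DKV’s underlying stream arrays (the accessor names below are illustrative, not the actual API):

```ts
// Illustrative stats logger for the repro: every 15s, dump the number of
// keys vs. the size of the underlying stream arrays, plus process RSS.

function startStatsLogger(
  dkv: { length: number; raw: unknown[]; messages: unknown[] },
  log: (msg: string, data: unknown) => void,
): () => void {
  const timer = setInterval(() => {
    log("open-files stats", {
      kvLength: dkv.length,              // stable if the key set is stable
      rawLength: dkv.raw.length,         // grows on every keyed update
      messagesLength: dkv.messages.length,
      rssMiB: Math.round(process.memoryUsage().rss / (1024 * 1024)),
    });
  }, 15_000);
  return () => clearInterval(timer);
}
```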
Example logs (note that `entries` stays stable while `rawLength`/`messagesLength` grow):

```
2026-01-16T14:15:25.806Z ... open-files stats {
  openDocs: 4,
  stats: { entries: 7, dkv: { rawLength: 42, messagesLength: 42, kvLength: 30 } },
  rssMiB: 216
}
2026-01-16T14:15:55.806Z ... open-files stats {
  openDocs: 2,
  stats: { entries: 7, dkv: { rawLength: 48, messagesLength: 48, kvLength: 30 } },
  rssMiB: 218
}
2026-01-16T14:16:55.807Z ... open-files stats {
  openDocs: 2,
  stats: { entries: 7, dkv: { rawLength: 56, messagesLength: 56, kvLength: 30 } },
  rssMiB: 214
}
2026-01-16T14:17:25.807Z ... open-files stats {
  openDocs: 2,
  stats: { entries: 7, dkv: { rawLength: 60, messagesLength: 60, kvLength: 30 } },
  rssMiB: 214
}
```
Root cause
In `packages/conat/persist/storage.ts`, `set()` for keyed messages deletes old rows without using `RETURNING` and without calling `emitDelete()`. Clients never see delete events for keyed updates, so `CoreStream.processPersistentDelete()` is never triggered and the `raw`/`messages` arrays grow unbounded.
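One possible shape of a fix, sketched under assumptions: that the persist storage is SQLite-backed (so `DELETE ... RETURNING` is available, SQLite ≥ 3.35) and that there is a way to emit delete events carrying the removed sequence numbers. The `db` interface and `emitDelete` signature below are illustrative, not the actual `storage.ts` API:

```ts
// Illustrative sketch of a keyed set() that reports deletions.
// Instead of silently deleting prior rows for the key, collect the deleted
// sequence numbers via RETURNING and emit delete events so clients can drop
// the corresponding entries from raw[]/messages[].

interface Row {
  seq: number;
}

interface Db {
  prepare(sql: string): {
    all(...params: unknown[]): Row[];
    run(...params: unknown[]): unknown;
  };
}

function setKeyed(
  db: Db,
  emitDelete: (seqs: number[]) => void,
  key: string,
  value: Uint8Array,
): void {
  // Delete previous rows for this key and capture their seq numbers.
  const deleted = db
    .prepare("DELETE FROM messages WHERE key = ? RETURNING seq")
    .all(key);

  // Insert the new row for the key.
  db.prepare("INSERT INTO messages (key, value) VALUES (?, ?)").run(key, value);

  // Tell subscribers which rows went away, so CoreStream clients can run
  // their delete handling and shrink raw[]/messages[].
  if (deleted.length > 0) {
    emitDelete(deleted.map((r) => r.seq));
  }
}
```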
Impact
- Each keyed update currently adds an entry to the CoreStream arrays that is never removed.
- Open-files touches happen every 30s per open file, so this accumulates over days/weeks and matches the observed RSS growth (back-of-envelope below).
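Back-of-envelope (assuming one keyed write per 30s per writer): that is 2 × 60 × 24 = 2,880 retained rows per day per writer and per open file; with both the browser touch loop and the project backend loop writing, a single file kept open for a week can pin tens of thousands of entries, each with its raw payload, in the CoreStream arrays.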
Notes
Other Conat uses aren’t as chatty as open-files, which explains why the leak shows up primarily in project processes.