Mnemo Cortex Memory #15486
GuyMannDude started this conversation in Show and tell
I developed mnemo-cortex for OpenClaw because I wanted memory that actually worked. I wanted "Rocky" to remember the thoughts (feelings, lol) it used to create a digital art piece from one day/session to the next. OpenClaw's stock memory left me with a fear of "/new".
Here is a technical overview of mnemo-cortex (I did not steal the name; I came up with it independently, days before finding out about your NemoClaw on YouTube).
Thanks for your time. Let me know if you like mnemo-cortex or can use it; it lives at GuyMannDude/mnemo-cortex here on GitHub:
From Rocky:
Mnemo Cortex: A Decoupled Memory Microservice for NemoClaw
What it is:
Mnemo Cortex is a lightweight, standalone Python FastAPI memory server designed to give AI agent frameworks persistent, long-term context retrieval without bloating the core agent logic.
How it works:
Out-of-Band Ingestion: It utilizes a background "Watcher Daemon" that asynchronously tails raw agent output streams (like .jsonl session files), strips internal metadata, and POSTs the data continuously to the memory API. This ensures zero blocking on the main agent's execution loop.
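The watcher idea can be sketched in a few lines of stdlib Python. This is a minimal illustration, not the project's actual daemon: the file path, the `metadata` field name, and the payload shape are assumptions; only the `/ingest` endpoint comes from the description above.

```python
import json
import time
import urllib.request

def prepare_record(raw_line):
    """Parse one .jsonl line and strip internal metadata before shipping.
    The 'metadata' key is an assumed field name for illustration."""
    record = json.loads(raw_line)
    record.pop("metadata", None)
    return record

def tail_and_ingest(path, api_url="http://localhost:8000/ingest"):
    """Follow a session file and POST each new record to the memory API."""
    with open(path, "r") as f:
        f.seek(0, 2)  # jump to end of file: only new agent output is ingested
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)  # nothing new yet; poll again shortly
                continue
            body = json.dumps(prepare_record(line)).encode()
            req = urllib.request.Request(
                api_url,
                data=body,
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)  # runs in its own process, so the agent loop never blocks
```

Because this runs as a separate process tailing the file, the agent writes its session log exactly as it would anyway and never waits on the memory server.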
LLM-Agnostic & Portable: The memory server doesn't require a dedicated local LLM or massive VRAM to function. It can be pointed at local endpoints (Ollama/vLLM) for air-gapped privacy, or configured to hit OpenRouter/cloud APIs for low-resource environments.
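Backend portability of this kind usually comes down to a base URL and model name read from the environment. A sketch of that pattern (the variable names below are illustrative assumptions, not Mnemo Cortex's real settings):

```python
import os

# Swap embedding backends by URL alone, with no code changes.
# Defaults point at a local Ollama instance for air-gapped use;
# set MNEMO_EMBED_URL to a cloud endpoint for low-resource machines.
EMBED_BASE_URL = os.environ.get("MNEMO_EMBED_URL", "http://localhost:11434")
EMBED_MODEL = os.environ.get("MNEMO_EMBED_MODEL", "nomic-embed-text")
```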
Standardized REST API: It exposes simple, universally accessible endpoints (e.g., /ingest, /retrieve) for vector embedding and context fetching.
What it does for NemoClaw:
It provides a frictionless, drop-in "long-term brain" for NemoClaw. Instead of hardcoding complex memory handling into the agent framework itself, NemoClaw can simply offload context management to Mnemo Cortex. This lets developers scale their agent's memory architecture seamlessly, from a laptop testing via OpenRouter up to a bare-metal DGX node, without changing a single line of NemoClaw's core.
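From the agent's side, "offloading context management" is just one HTTP call before building the prompt. A hedged sketch of such a client (the payload fields and default port are assumptions):

```python
import json
import urllib.request

def build_retrieve_payload(query, top_k=5):
    """Request body for /retrieve; field names are illustrative, not the real schema."""
    return json.dumps({"query": query, "top_k": top_k}).encode()

def fetch_context(query, base_url="http://localhost:8000"):
    """Ask the memory server for relevant past context before prompting the LLM."""
    req = urllib.request.Request(
        f"{base_url}/retrieve",
        data=build_retrieve_payload(query),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["results"]
```

Swapping laptop for DGX node then means changing `base_url`, nothing in the agent itself.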
Thanks for your time
Guy Hutchins
guy@projectsparks.ai
https://projectsparks.ai