Memory-based long conversation handling (no compaction) #686
Status: Open
Labels:
- agent
- domain:agent-core (Framework, tools, registry, memory, skills, orchestration)
- enhancement (New feature or request)
- p0 (high priority)
- track:consumer-app (Hermes-competitor consumer product: mobile-first, voice + messaging + memory + skills)
Problem
Long conversations overflow the model's context window. The naive solution (context compaction / summarization) loses critical information: OpenClaw's biggest failure was losing safety instructions during compaction.
Approach
Use the memory system + RAG instead of compaction. Important context is offloaded to persistent storage and retrieved via RAG when needed. The memory system IS the solution to long conversations, not summarization/pruning.
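A minimal sketch of the offload-and-retrieve flow described above. All names here (`MemoryStore`, `offload`, `retrieve`, the `pinned` flag) are illustrative assumptions, not the actual gaia memory API, and the scoring is naive word overlap standing in for real RAG embedding retrieval. The `pinned` flag shows how safety-critical context can be guaranteed to survive, which is exactly what compaction failed to do.

```python
import json
import re
import tempfile
from pathlib import Path


class MemoryStore:
    """Illustrative persistent memory store: offload context to disk,
    retrieve it later by relevance. A production version would use
    embeddings + a vector index instead of word overlap."""

    def __init__(self, root: Path):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def offload(self, key: str, text: str, pinned: bool = False) -> None:
        # Pinned entries (e.g. safety instructions) are always returned
        # at retrieval time, regardless of query relevance.
        path = self.root / f"{key}.json"
        path.write_text(json.dumps({"text": text, "pinned": pinned}))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        q = set(re.findall(r"\w+", query.lower()))
        scored = []
        for path in self.root.glob("*.json"):
            entry = json.loads(path.read_text())
            words = set(re.findall(r"\w+", entry["text"].lower()))
            score = len(q & words)
            if entry["pinned"]:
                scored.append((float("inf"), entry["text"]))  # always first
            elif score:
                scored.append((score, entry["text"]))
        scored.sort(key=lambda s: s[0], reverse=True)
        return [text for _, text in scored[:k]]


# Usage: offload context as the conversation grows, retrieve on demand.
store = MemoryStore(Path(tempfile.mkdtemp()) / "memory")
store.offload("safety", "Never run destructive shell commands.", pinned=True)
store.offload("prefs", "User prefers concise answers in Python.")
store.offload("trivia", "The weather was sunny on Tuesday.")
hits = store.retrieve("what python style does the user prefer?")
```

The key design point this sketch makes concrete: pinned memories bypass relevance scoring entirely, so safety instructions are structurally incapable of being dropped, rather than merely likely to survive a summarization pass.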
Design:
~/.gaia/memory/
Dependencies
Acceptance Criteria