
Memory Palace

Cross-conversation memory for AI assistants. Never lose context again.

License: MIT | Claude Code | Cowork

Built on MemPalace by Milla Jovovich & Ben Sigman. Their work on structured AI memory (wings, rooms, tunnels, ChromaDB wake-up) is the foundation this project adapts for Claude Code and Cowork.

Memory Palace is an open-source skill/plugin for Claude Code and Cowork that gives your AI assistant persistent memory across conversations. It uses the MemPalace "Memory Palace" metaphor (wings, notes, tunnels) to organize knowledge so every new session picks up where the last one left off.

New here? Open welcome.html in your browser for an interactive guide.

The Problem

You spend an hour discussing architecture decisions with Claude. Next conversation? It remembers nothing. You re-explain, re-share context, waste time.

The Solution

Memory Palace stores your conversation highlights in a structured file hierarchy that the AI reads at the start of each new session:

memory/
├── PALACE.md                    # Hot cache: global status (~200 lines, always loaded)
├── wings/
│   ├── my-project/
│   │   ├── 2026-04-08_api-design-decisions.md
│   │   └── 2026-04-10_benchmark-results.md
│   ├── research/
│   │   └── 2026-04-08_harness-engineering.md
│   └── ...
└── tunnels/
    └── routing-patterns-across-layers.md   # Cross-topic insight links

  • Wings = major topics (like floors of a building)
  • Notes = session summaries (like rooms on a floor)
  • Tunnels = cross-topic connections (like secret passages between floors)

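The scaffolding is simple enough to sketch in a few lines of Python. This is not the skill's actual palace_init.py, just a minimal stand-in showing what `palace init` sets up (the PALACE.md stub contents here are invented for illustration):

```python
from pathlib import Path
import tempfile

def init_palace(root, wings, name="My Project"):
    """Create the wings/ and tunnels/ directories plus a PALACE.md hot-cache stub."""
    root = Path(root)
    for wing in wings:
        (root / "wings" / wing).mkdir(parents=True, exist_ok=True)
    (root / "tunnels").mkdir(parents=True, exist_ok=True)
    stub = [f"# {name}", "", "## Wings"] + [f"- {w}: (no notes yet)" for w in wings]
    (root / "PALACE.md").write_text("\n".join(stub) + "\n", encoding="utf-8")
    return root

# Build a throwaway palace in a temp directory
palace = init_palace(tempfile.mkdtemp(), ["my-project", "research"])
```

Everything is plain directories and Markdown files, which is why the structure works without any database.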
Quick Start

Option 1: Install as Cowork Plugin

Download the .skill file from Releases and install it in Cowork.

Option 2: Use the Unified CLI

Clone the repo and use the palace command for all operations:

# Clone into your project's skill directory
git clone https://github.com/drewOrc/Mem-Palace-skill.git .claude/skills/memory-palace

# Initialize a new palace
cd /path/to/your/project
.claude/skills/memory-palace/palace init ./memory --wings paper1 paper2 research --name "My Project"

# All operations via unified CLI
.claude/skills/memory-palace/palace search ./memory "keyword"
.claude/skills/memory-palace/palace health ./memory
.claude/skills/memory-palace/palace stats ./memory
.claude/skills/memory-palace/palace map ./memory
.claude/skills/memory-palace/palace archive ./memory --dry-run

Or add the skill directory to your PATH for shorter syntax:

export PATH="/path/to/Mem-Palace-skill:$PATH"
palace init ./memory --wings project,research
palace search ./memory "keyword"

Option 3: Just Use the Structure

You don't need the skill at all. Copy the example/memory/ directory into your project as memory/, add the CLAUDE.md snippet to your project's CLAUDE.md, and you're set.

Usage

Once installed, just talk to Claude naturally:

| What you say | What happens |
|---|---|
| "What did we discuss last time?" | Claude reads your palace and summarizes recent activity |
| "Save to palace" | Claude writes today's key findings into the right wing |
| "Show palace map" | Generates a visual Mermaid diagram of your memory structure |
| "Palace stats" | Shows token counts, note counts, and activity timeline |
| "Export palace" | Exports to Markdown or Obsidian vault format |
| "Continue working on X" | Claude auto-loads relevant wing context |

CLI Reference

All commands run via palace <command>. Use palace help to see all options.

| Command | Purpose |
|---|---|
| `palace init` | Scaffold a new palace structure |
| `palace search` | Full-text search across notes |
| `palace health` | Validate structure, find stale notes |
| `palace stats` | Show palace metrics (wings, notes, token counts) |
| `palace map` | Generate Mermaid visualization |
| `palace archive` | Move old notes to archive |
| `palace export` | Export to Markdown or Obsidian format |
| `palace import` | Import from conversation transcripts |
| `palace autosave` | Toggle automatic save hooks |

Note: You can also run individual scripts directly from scripts/ (e.g., python scripts/palace_search.py ./memory "query").

Features

Core: Read & Write Memory

  • Auto-loads palace at conversation start
  • Saves notes with structured format (key content, connections, references)
  • Updates hot cache table in PALACE.md

Auto-Save

Memory persists automatically without manual save prompts. palace_autosave.py integrates with conversation hooks to detect significant findings and commit them to the palace. This is the key differentiator from manual memory systems — your insights are captured even if you forget to say "save to palace."

# Hooks automatically trigger on conversation end
# No manual intervention needed
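The detection logic itself lives in palace_autosave.py; as a rough illustration, a significance filter can be as simple as scanning for decision-marking phrases. The phrase list below is hypothetical, not the skill's actual heuristic:

```python
# Hypothetical phrases that often mark findings worth committing to the palace
SIGNAL_PHRASES = ("we decided", "key finding", "conclusion:", "turns out", "benchmark")

def find_significant_lines(transcript: str):
    """Return transcript lines that look like findings worth saving."""
    return [
        line.strip()
        for line in transcript.splitlines()
        if any(p in line.lower() for p in SIGNAL_PHRASES)
    ]

notes = find_significant_lines(
    "We compared three routers.\n"
    "Key finding: the trie router is 4x faster.\n"
    "We decided to ship it behind a flag.\n"
)
```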

Archive

Keep your palace lean by archiving old or completed notes:

palace archive ./memory --older-than 90  # days
# Or preview changes first
palace archive ./memory --older-than 90 --dry-run

Stale notes are moved to memory/archive/ while keeping reference links intact.
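Because note filenames carry a YYYY-MM-DD prefix, age-based archiving needs no metadata beyond the filename itself. A minimal sketch of the idea (not the real palace_archive.py):

```python
from datetime import date, timedelta
from pathlib import Path
import re, shutil, tempfile

DATE_RE = re.compile(r"^(\d{4}-\d{2}-\d{2})_")

def archive_old_notes(memory, older_than_days, today=None, dry_run=False):
    """Move notes whose date prefix is older than the cutoff into memory/archive/."""
    memory = Path(memory)
    cutoff = (today or date.today()) - timedelta(days=older_than_days)
    moved = []
    for note in memory.glob("wings/*/*.md"):
        m = DATE_RE.match(note.name)
        if m and date.fromisoformat(m.group(1)) < cutoff:
            moved.append(note.name)
            if not dry_run:
                dest = memory / "archive" / note.parent.name
                dest.mkdir(parents=True, exist_ok=True)
                shutil.move(str(note), str(dest / note.name))
    return moved

# Demo on a throwaway palace
mem = Path(tempfile.mkdtemp())
(mem / "wings" / "proj").mkdir(parents=True)
(mem / "wings" / "proj" / "2026-01-01_old.md").write_text("old")
(mem / "wings" / "proj" / "2026-04-10_new.md").write_text("new")
moved = archive_old_notes(mem, 90, today=date(2026, 4, 12))
```

With `dry_run=True` the function reports what it would move without touching any files, matching the `--dry-run` flag above.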

Tunnel Auto-Suggest

Claude proactively suggests cross-wing connections when saving notes. If a new finding relates to multiple wings, the skill automatically creates tunnel entries to link them together. This surfaces unexpected patterns and prevents knowledge silos.
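One simple way to implement such a suggestion pass is a keyword-overlap heuristic, sketched here; this is an assumption for illustration, and the skill's actual criteria may differ:

```python
import re

STOPWORDS = {"the", "and", "of", "to", "in", "for", "with", "on", "a", "an"}

def suggest_tunnels(notes_by_wing, min_shared=2):
    """Suggest cross-wing tunnels when notes in different wings share
    enough distinctive keywords."""
    keywords = {
        wing: set(re.findall(r"[a-z]{4,}", text.lower())) - STOPWORDS
        for wing, text in notes_by_wing.items()
    }
    wings = sorted(keywords)
    suggestions = []
    for i, w1 in enumerate(wings):
        for w2 in wings[i + 1:]:
            shared = keywords[w1] & keywords[w2]
            if len(shared) >= min_shared:
                suggestions.append((w1, w2, sorted(shared)))
    return suggestions

links = suggest_tunnels({
    "my-project": "routing latency dropped after caching layer rewrite",
    "research": "caching strategies and routing theory survey",
    "admin": "quarterly budget review",
})
```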

Hall Classification

Optional sub-organization within wings for crowded memory spaces. Group related notes under a "hall" (subdirectory) while maintaining the same PALACE.md index structure.

wings/my-project/
├── hall_architecture/
│   ├── 2026-04-08_api-design.md
│   └── 2026-04-09_database-schema.md
└── hall_deployment/
    └── 2026-04-10_ci-cd-pipeline.md

Visualization

Generate an interactive map of your entire memory palace:

palace map ./memory
# Outputs: palace_map.mermaid + palace_map.html

Statistics

See how your palace is growing:

palace stats ./memory
┌──────────────────────────────────────┐
│      MEMORY PALACE STATISTICS        │
├──────────────────────────────────────┤
│ Wings: 5    Notes: 12    Tunnels: 3  │
│ Hot Cache:  448 tokens               │
│ Full Load:  2,526 tokens             │
│ Most Active: research-learning       │
└──────────────────────────────────────┘
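The token figures can be approximated without a tokenizer by using the common ~4 characters-per-token rule of thumb. A sketch of how such counts might be gathered (not the real palace_stats.py):

```python
from pathlib import Path
import tempfile

def palace_stats(memory):
    """Count wings, notes, and tunnels, and estimate token cost
    with a ~4 chars/token approximation."""
    memory = Path(memory)
    wings = [d for d in (memory / "wings").iterdir() if d.is_dir()]
    notes = list(memory.glob("wings/*/*.md"))
    tunnels = list(memory.glob("tunnels/*.md"))
    hot = (memory / "PALACE.md").read_text(encoding="utf-8")
    full = len(hot) + sum(len(f.read_text(encoding="utf-8")) for f in notes + tunnels)
    return {
        "wings": len(wings),
        "notes": len(notes),
        "tunnels": len(tunnels),
        "hot_cache_tokens": len(hot) // 4,
        "full_load_tokens": full // 4,
    }

# Demo palace with one wing and one note
mem = Path(tempfile.mkdtemp())
(mem / "wings" / "proj").mkdir(parents=True)
(mem / "tunnels").mkdir()
(mem / "PALACE.md").write_text("# Palace\n" + "status " * 50)
(mem / "wings" / "proj" / "2026-04-10_note.md").write_text("finding " * 100)
stats = palace_stats(mem)
```

The gap between `hot_cache_tokens` and `full_load_tokens` is the whole point of the hot-cache design: most sessions only pay the smaller number.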

Search

Full-text search across all notes with context display:

palace search ./memory "your search term"
# Displays matching notes with surrounding context
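Context display boils down to returning a window of lines around each hit. A minimal sketch (the real palace_search.py reads files from disk; this version takes an in-memory dict for brevity):

```python
def search_notes(files, term, context=1):
    """Return (filename, line_no, surrounding_lines) for each case-insensitive hit."""
    term = term.lower()
    results = []
    for name, text in files.items():
        lines = text.splitlines()
        for i, line in enumerate(lines):
            if term in line.lower():
                lo, hi = max(0, i - context), min(len(lines), i + context + 1)
                results.append((name, i + 1, lines[lo:hi]))
    return results

hits = search_notes(
    {"2026-04-08_api-design.md": "Context:\nWe chose REST over gRPC.\nReason: tooling."},
    "grpc",
)
```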

Health Check

Validate palace structure and identify stale notes or orphaned tunnels:

palace health ./memory
# Reports issues and suggestions for maintenance
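One check a health pass can run is orphan detection: tunnel files whose Markdown links point at notes that no longer exist. A sketch of that single check (the real palace_health.py covers more than this):

```python
from pathlib import Path
import re, tempfile

def find_orphaned_tunnels(memory):
    """Return (tunnel_file, missing_target) pairs for broken tunnel links."""
    memory = Path(memory)
    orphans = []
    for tunnel in memory.glob("tunnels/*.md"):
        # Pull every markdown link target ending in .md
        for target in re.findall(r"\]\(([^)]+\.md)\)", tunnel.read_text(encoding="utf-8")):
            if not (memory / target).exists():
                orphans.append((tunnel.name, target))
    return orphans

# Demo: one valid link, one dangling link
mem = Path(tempfile.mkdtemp())
(mem / "wings" / "proj").mkdir(parents=True)
(mem / "tunnels").mkdir()
(mem / "wings" / "proj" / "2026-04-08_api.md").write_text("ok")
(mem / "tunnels" / "routing.md").write_text(
    "See [api](wings/proj/2026-04-08_api.md) and [gone](wings/proj/2026-01-01_gone.md)."
)
orphans = find_orphaned_tunnels(mem)
```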

Import / Export

Export to Obsidian:

palace export ./memory --format obsidian --output ./my-vault

Export to single Markdown:

palace export ./memory --format markdown --output palace_backup.md

Import from Claude/ChatGPT transcript:

palace import transcript.json ./memory --wing my-project --dry-run

(Use --dry-run to preview changes before committing)
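Conceptually, import turns a transcript into a dated note in the chosen wing. A sketch under an assumed transcript format (a JSON list of {role, content} messages; the formats palace_import.py actually accepts may differ):

```python
import json
from datetime import date

def transcript_to_note(transcript_json, wing, today=None):
    """Convert a transcript (assumed: JSON list of {role, content} dicts)
    into a (relative_path, note_text) pair for the given wing."""
    messages = json.loads(transcript_json)
    today = today or date.today()
    topic = messages[0]["content"].lower().replace(" ", "-")[:40]
    body = "\n".join(f"**{m['role']}**: {m['content']}" for m in messages)
    path = f"wings/{wing}/{today.isoformat()}_{topic}.md"
    return path, f"# Imported transcript\n\n{body}\n"

raw = json.dumps([
    {"role": "user", "content": "API design"},
    {"role": "assistant", "content": "Prefer REST for the public surface."},
])
path, note = transcript_to_note(raw, wing="my-project", today=date(2026, 4, 12))
```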

CLAUDE.md Snippet

Add this to your project's CLAUDE.md so Claude knows to use the palace:

## Memory Palace

This project uses a Memory Palace for cross-conversation context.

**Every new conversation:**
1. Read `memory/PALACE.md` for global status
2. Read relevant wing notes based on the user's question
3. Check `memory/tunnels/` for cross-topic links

**End of conversation (or on significant progress):**
1. Save key findings to `wings/<wing>/YYYY-MM-DD_<topic>.md`
2. Update PALACE.md hot cache table
3. Create tunnels for cross-topic discoveries
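The filename convention in step 1 is easy to generate mechanically. A small helper, for illustration only:

```python
import re
from datetime import date

def note_filename(topic, today=None):
    """Build a YYYY-MM-DD_<topic>.md filename, slugifying the topic
    for filesystem safety."""
    today = today or date.today()
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return f"{today.isoformat()}_{slug}.md"

name = note_filename("API Design Decisions!", today=date(2026, 4, 8))
```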

How It Works (for the curious)

Memory Palace is essentially context engineering applied to conversation persistence. The key insight from MemPalace is that you don't need vector databases or embeddings for effective AI memory. A well-organized file hierarchy with metadata filtering gives you most of the benefits at near-zero cost.

The hot cache pattern (PALACE.md) mirrors what MemPalace calls "wake-up": load ~200 lines to restore the AI's state, then selectively read deeper files only when needed. This keeps token costs minimal while preserving full recall capability.
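A toy version of the wake-up flow, using simple substring matching to decide which wings are relevant (in practice the skill lets Claude make that judgment):

```python
from pathlib import Path
import tempfile

def wake_up(memory, question):
    """Always load PALACE.md, then pull in only the wings whose
    names appear in the question."""
    memory = Path(memory)
    context = [(memory / "PALACE.md").read_text(encoding="utf-8")]
    loaded = []
    for wing in sorted(d for d in (memory / "wings").iterdir() if d.is_dir()):
        if wing.name in question.lower():
            loaded.append(wing.name)
            for note in sorted(wing.glob("*.md")):
                context.append(note.read_text(encoding="utf-8"))
    return loaded, "\n\n".join(context)

# Demo: only the mentioned wing is loaded
mem = Path(tempfile.mkdtemp())
for w in ("my-project", "research"):
    (mem / "wings" / w).mkdir(parents=True)
(mem / "PALACE.md").write_text("# Palace status")
(mem / "wings" / "research" / "2026-04-08_note.md").write_text("LLM survey findings")
loaded, ctx = wake_up(mem, "Continue the research survey")
```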

Project Structure

memory-palace/
├── palace               # Unified CLI entry point (executable)
├── SKILL.md            # Skill definition (instructions for Claude)
├── scripts/
│   ├── __init__.py
│   ├── palace_utils.py  # Shared utility module
│   ├── palace_map.py    # Mermaid visualization generator
│   ├── palace_stats.py  # Statistics analyzer
│   ├── palace_search.py # Full-text search with context
│   ├── palace_health.py # Validate structure, find stale notes
│   ├── palace_init.py   # Scaffold a new palace
│   ├── palace_export.py # Export to Markdown/Obsidian
│   ├── palace_import.py # Import from conversation transcripts
│   ├── palace_autosave.py # Auto-detect and save findings (key differentiator)
│   ├── palace_archive.py  # Archive old notes to keep palace lean
│   └── test_palace.py   # Unit tests for all scripts
├── hooks/
│   └── README.md        # Integration guide for auto-save hooks
├── example/             # Example palace you can copy into your project
│   └── memory/
│       ├── PALACE.md
│       ├── wings/...
│       └── tunnels/...
├── README.md
└── README_zh.md         # Chinese documentation

Note: Individual scripts can also be run directly (e.g., python scripts/palace_search.py ./memory "query"), but the unified palace CLI is recommended for convenience.

Requirements

  • Python 3.8+ (for scripts, standard library only)
  • Claude Code or Cowork (for the skill)
  • No external dependencies. No API keys. Everything runs locally.

Contributing

PRs welcome! Several earlier ideas have already shipped: auto-save (palace_autosave.py), full-text search (palace_search.py), health checks (palace_health.py), and CLI scaffolding (palace_init.py). Ideas still open:

  • Web UI dashboard for palace browsing
  • Git integration (auto-commit palace changes)
  • Support for image/PDF notes
  • VS Code extension for palace navigation
  • Support for other AI assistants (Cursor, Copilot)
  • ChromaDB-backed semantic search (optional enhancement)

License

MIT

Acknowledgments

This project would not exist without:

  • MemPalace by Milla Jovovich & Ben Sigman — the original Memory Palace architecture for AI agents. Their design of wings/rooms/tunnels, ChromaDB-backed retrieval, and the wake-up mechanism (restoring agent state in ~170 tokens) is what this entire project is built on. We adapted their architecture into a file-based skill for Claude Code and Cowork. Go star their repo.
  • The concepts of harness engineering and context engineering, which this skill applies at the conversation level
