Project-specific instructions for Claude when working with this codebase.
This is an MCP (Model Context Protocol) server that provides long-term memory for Claude conversations. It stores context in Redis with semantic search capabilities to survive context window limitations.
Key Principle: This server IS the solution to context loss - treat it with care and always maintain backward compatibility.
IMPORTANT: Be selective with memory storage to avoid context bloat.
Store HIGH-SIGNAL context only:
- ✅ High-level decisions and reasoning ("We chose PostgreSQL over MongoDB because...")
- ✅ Project preferences (coding style, tech stack, architecture patterns)
- ✅ Critical constraints (API limits, business rules, security requirements)
- ✅ Learned patterns from bugs/solutions ("Avoid X because it causes Y")
Don't store LOW-SIGNAL content:
- ❌ Code snippets or implementations (put those in files)
- ❌ Obvious facts or general knowledge
- ❌ Temporary context (only needed in current session)
- ❌ Duplicates of what's already in documentation
Examples:
- ✅ GOOD: "API rate limit is 1000 req/min, prefer caching for frequently accessed data"
- ❌ BAD: "Here's the entire implementation of our caching layer: [50 lines of code]"
- ✅ GOOD: "Team prefers Tailwind CSS over styled-components for consistency"
- ❌ BAD: "Tailwind is a utility-first CSS framework that..."
Remember: Recall is for high-level context, not a code repository. Quality over quantity.
Use this tool to retrieve consolidated context from specific time periods:
Perfect for:
- 📋 Building context files from work sessions ("Give me everything from the last 2 hours as markdown")
- 🔄 Session handoffs ("Show me what we worked on in the last hour")
- 📊 Progress summaries ("Get all decisions from today")
- 📝 Documentation ("Export the last 4 hours as a context file")
How to use:
"Give me the context for the last 2 hours"
"Show me all high-importance memories from the last hour, grouped by type"
"Export the last 30 minutes as JSON"
Output formats:
- Markdown (default): Clean formatted context ready to paste
- JSON: Structured data for processing
- Text: Simple plain text summary

Ordering options:
- Chronological (default): Time-ordered, oldest to newest
- By type: Grouped by context_type (decisions, patterns, etc.)
- By importance: High to low priority
- By tags: Organized by tag categories
DO:
- ✅ Use for building context files after work sessions
- ✅ Filter by importance (>= 8) for critical context only
- ✅ Group by type when exporting for specific purposes
- ✅ Use markdown format for human-readable output
- ✅ Use JSON format when passing to external tools
DON'T:
- ❌ Retrieve huge time windows (>24 hours) without filtering
- ❌ Use when semantic search would be better (use `search_memories` instead)
- ❌ Store the output as another memory (creates redundancy)
- TypeScript: Strict mode, full type safety
- ESM Modules: Use `.js` extensions in imports (even for `.ts` files)
- Naming: camelCase for variables/functions, PascalCase for types/classes
- Files: kebab-case for filenames (e.g., `memory-store.ts`)
- Immutable Memory IDs: Never change ULID generation - memories must remain accessible
- Backward Compatible: New context types OK, removing types breaks existing memories
- Index Integrity: Always update ALL indexes when modifying/deleting memories
- Atomic Operations: Use Redis pipelines for multi-step updates
- Error Handling: Use MCP error codes (`ErrorCode.InvalidRequest`, `ErrorCode.InternalError`)
NEVER change these key patterns without migration:
```
memory:{id}          → Hash
memories:all         → Set
memories:timeline    → Sorted Set (score = timestamp)
memories:type:{type} → Set
memories:tag:{tag}   → Set
memories:important   → Sorted Set (score = importance)
session:{id}         → Hash
sessions:all         → Set
```
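Taken together, the index-integrity and atomic-operation rules mean every write must touch all of these keys in a single pipeline. A minimal sketch (illustrative TypeScript; `Pipeline` is a stand-in for a real client's `multi()`/`pipeline()`, and these are not the actual memory-store.ts signatures):

```typescript
// Stand-in for a real Redis pipeline (e.g. node-redis multi()); illustrative only.
interface Pipeline {
  hSet(key: string, fields: Record<string, string>): Pipeline;
  sAdd(key: string, member: string): Pipeline;
  zAdd(key: string, score: number, member: string): Pipeline;
}

interface Memory {
  id: string;            // ULID - immutable
  content: string;
  context_type: string;
  tags: string[];
  importance: number;    // 1-10
  timestamp: number;
}

// Queue every index update in one pipeline so a memory is never half-indexed.
function indexMemory(pipe: Pipeline, m: Memory): Pipeline {
  pipe.hSet(`memory:${m.id}`, {
    content: m.content,
    context_type: m.context_type,
    importance: String(m.importance),
    // ...plus tags, timestamp, embedding, etc.
  });
  pipe.sAdd('memories:all', m.id);
  pipe.zAdd('memories:timeline', m.timestamp, m.id);
  pipe.sAdd(`memories:type:${m.context_type}`, m.id);
  for (const tag of m.tags) pipe.sAdd(`memories:tag:${tag}`, m.id);
  if (m.importance >= 8) pipe.zAdd('memories:important', m.importance, m.id);
  return pipe; // caller exec()s the pipeline
}
```

Note the `>= 8` check: it must match the threshold the `memories:important` index relies on.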
These 10 types are core to the system: `directive`, `information`, `heading`, `decision`, `code_pattern`, `requirement`, `error`, `todo`, `insight`, `preference`
- Adding new types: OK (add to the enum in types.ts)
- Removing types: NO (breaks existing memories)
- 1-3: Low (transient)
- 4-7: Medium (general)
- 8-10: High (critical, auto-indexed)
Do not change: The ≥8 threshold for memories:important index
When adding a new tool:
- Add Zod schema to types.ts
- Add method to `MemoryStore` class in memory-store.ts
- Add tool handler to tools/index.ts
- Update documentation in README.md

When adding a new resource:
- Add resource handler to resources/index.ts
- Add routing in the index.ts `ReadResourceRequestSchema` handler
- Add to resource list in the `ListResourcesRequestSchema` handler
- Update documentation
CRITICAL: If changing MemoryStore methods:
- Ensure index updates are atomic (use pipelines)
- Test with existing Redis data
- Document migration path if needed
- Update version in package.json
- Keep bundle size small (currently 35KB)
- Prefer native Node.js APIs when possible
- Check for ESM compatibility
- Update package.json
```bash
npm run build   # Production build
npm run dev     # Watch mode
```

```bash
# Start Redis
redis-server

# Run server (manual test)
REDIS_URL=redis://localhost:6379 OPENAI_API_KEY=sk-... node dist/index.js

# In another terminal, test Redis
redis-cli
> KEYS *
```

```bash
# Check Claude Desktop config
cat ~/Library/Application\ Support/Claude/claude_desktop_config.json

# Check logs
tail -f ~/Library/Logs/Claude/mcp*.log
```

To change the embedding model, edit embeddings/generator.ts:

```typescript
model: 'text-embedding-3-small', // Current
// Change to: 'text-embedding-3-large' for better quality
```

To add a new context type:
- Edit types.ts:

```typescript
export const ContextType = z.enum([
  // existing...
  'your_new_type',
]);
```

- Update documentation in README.md
If switching to a larger embedding model:
- Update `embedding` field handling in memory-store.ts
- Existing memories will have the wrong dimensions and need migration
- Consider versioning: `embedding_v1`, `embedding_v2`
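One way the versioning idea could look when reading a memory back (a sketch under assumptions: the helper name is hypothetical, and the dimensions below are the published defaults for the two models, 1536 and 3072):

```typescript
// Expected vector length per versioned field (assumed model defaults).
const DIMS: Record<string, number> = { embedding_v1: 1536, embedding_v2: 3072 };

// Prefer the newest embedding a stored hash has; fall back to the old one.
// Returns null when no usable vector exists (re-embed lazily or via migration).
function pickEmbedding(
  hash: Record<string, string>
): { field: string; vector: number[] } | null {
  for (const field of ['embedding_v2', 'embedding_v1']) {
    const raw = hash[field];
    if (!raw) continue;
    const vector = JSON.parse(raw) as number[];
    if (vector.length !== DIMS[field]) continue; // dimension mismatch: treat as unmigrated
    return { field, vector };
  }
  return null;
}
```

This keeps old memories readable during a gradual migration instead of breaking them the moment the model changes.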
Current: No formal migration system
If Redis Schema Changes:
- Create migration script in `scripts/migrate-{version}.ts`
- Document in `MIGRATIONS.md`
- Provide rollback instructions
- Test on copy of production data first
Never delete old keys without migration path!
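A `scripts/migrate-{version}.ts` skeleton might look like this (a sketch, not the project's actual tooling: the `KV` interface is a synchronous stand-in for a real Redis client, and the rename target is purely illustrative):

```typescript
// Synchronous stand-in for a real Redis client; illustrative only.
interface KV {
  keys(pattern: string): string[];
  rename(src: string, dst: string): void;
}

// Dry-run by default: build the plan first, keep its output as the rollback
// map, and only touch keys once the plan has been reviewed.
function migrate(db: KV, dryRun = true): string[] {
  const plan: string[] = [];
  for (const key of db.keys('memory:*')) {
    const target = key.replace(/^memory:/, 'memoryv2:'); // illustrative rename only
    plan.push(`${key} -> ${target}`);
    if (!dryRun) db.rename(key, target);
  }
  return plan;
}
```

Running with `dryRun = true` against a copy of production data satisfies the "test first" rule above before any key is moved.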
Current: O(n) cosine similarity in-app
- Fine for <10k memories (~2s)
- Slow for >50k memories
Future: Use RediSearch with vector similarity
- O(log n) with HNSW index
- Requires Redis Stack
- Need migration for index creation
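The current linear scan boils down to something like the following (illustrative, not the actual memory-store.ts code). Every query compares against every stored vector, which is why cost grows linearly with memory count:

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// O(n) scan: score every stored embedding, sort, take the best k.
function topK(
  query: number[],
  vectors: Map<string, number[]>,
  k: number
): [string, number][] {
  return [...vectors.entries()]
    .map(([id, v]): [string, number] => [id, cosineSimilarity(query, v)])
    .sort((x, y) => y[1] - x[1])
    .slice(0, k);
}
```

An HNSW index replaces the full scan with a graph walk, which is where the O(log n) figure comes from.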
`text-embedding-3-small`: ~$0.0001 per 1k tokens
- Average memory: ~100 tokens = $0.00001
- 10k memories: ~$0.10
- Use batch API when storing >5 memories
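The numbers above follow from simple arithmetic; a quick sanity check (pricing is an assumption here, check current OpenAI rates before relying on it):

```typescript
// Assumed price for text-embedding-3-small, USD per 1k tokens.
const PRICE_PER_1K_TOKENS = 0.0001;

// Estimated embedding cost for a batch of memories.
function embeddingCost(memories: number, avgTokens = 100): number {
  return memories * (avgTokens / 1000) * PRICE_PER_1K_TOKENS;
}
// embeddingCost(1) ≈ $0.00001; embeddingCost(10_000) ≈ $0.10
```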
- Per memory: ~2KB (content + embedding + indexes)
- 10k memories: ~20MB
- 100k memories: ~200MB
- Redis can handle this easily in-memory
- ✅ Runs on localhost
- ✅ No network exposure
- ✅ Uses local Redis
Would need:
- Redis AUTH password
- TLS for Redis connection
- Rate limiting on tools
- User namespacing
- API key rotation
- Audit logging
```bash
# Check Redis
redis-cli ping

# Check env vars
echo $REDIS_URL
echo $OPENAI_API_KEY

# Check logs
tail -f ~/Library/Logs/Claude/mcp*.log
```

- Check OpenAI API key validity
- Check Redis connection
- Look for errors in Claude Desktop logs
- Test Redis directly: `redis-cli KEYS memory:*`
- Verify embeddings are generated (check `embedding` field length)
- Check OpenAI API quota
- Verify cosine similarity calculation
- Test with exact content match first
When modifying functionality:
- Update README.md - User-facing docs
- Update QUICKSTART.md - If setup changes
- Update ai_docs/learnings/README.md - Technical insights
- Update ai_docs/plans/README.md - Architecture changes
- Update this file - Development guidelines
Current: 1.0.0
Semantic Versioning:
- Major (2.0.0): Breaking changes (schema changes, removed tools/resources)
- Minor (1.1.0): New features (new tools, resources, context types)
- Patch (1.0.1): Bug fixes, performance improvements
Before Publishing:
- Test with real Redis instance
- Verify all tools work
- Check bundle size
- Update CHANGELOG.md
- types.ts - Schema changes break existing data
- memory-store.ts - Storage logic changes need migration
- package.json - Dependency changes affect bundle
- README.md - Documentation only
- resources/index.ts - Adding resources is safe
- tools/index.ts - Adding tools is safe
Before committing major changes:
- TypeScript compiles (`npm run build`)
- Bundle size reasonable (`ls -lh dist/index.js`)
- Shebang present (`head -1 dist/index.js`)
- Can store memory
- Can retrieve memory
- Can search memories
- Sessions work
- All indexes update correctly
- Error handling works
- Documentation updated
If production Redis has issues:
```bash
# Backup Redis
redis-cli SAVE
cp /var/lib/redis/dump.rdb dump.rdb.backup

# Restore from backup
redis-cli SHUTDOWN
cp dump.rdb.backup /var/lib/redis/dump.rdb
redis-server
```

Maintainer: José Airosa
Issues: File in GitHub (once published)
Logs: ~/Library/Logs/Claude/
Last Updated: 2025-10-02
Version: 1.0.0