AI-powered content management system with unified ReAct agent, 44 tools, semantic image search, stock photo integration (Pexels), web research (Exa AI), and automatic metadata generation using native AI SDK v6 patterns.
- Getting Started Guide - Complete setup walkthrough with test cases
- Quick Reference Card - Commands, URLs, patterns, and troubleshooting
pnpm install

Get your OpenRouter API key from https://openrouter.ai/keys and add it to .env:

OPENROUTER_API_KEY=your-actual-api-key-here

# Push schema to SQLite
pnpm db:push
# Seed with sample data
pnpm seed
# (Optional) Populate vector index for semantic search
pnpm reindex

# Install Redis (one-time setup)
brew install redis
# Start all services (Redis + dev processes)
pnpm start:all # Starts Redis, then shows dev command
pnpm start # Start dev processes (server, preview, web, worker)
# Or use individual commands:
pnpm start:redis # Start Redis only
pnpm dev             # Alias for 'start'

What runs where:
- API Server: http://localhost:8787
- Preview Server: http://localhost:4000
- Next.js Frontend: http://localhost:3000
- Worker: Background process (no port)
# Download and process 3 sample images (mountain, puppy, workspace)
pnpm seed:images
# Wait 5-10 seconds for processing to complete
# Check worker logs for: "✅ Job ... completed successfully"

# Visit the AI assistant
http://localhost:3000/assistant
# Preview rendered pages
http://localhost:4000/pages/home?locale=en

server/                        # Backend Express API
├── db/                        # Database schema & client
├── services/                  # Business logic layer
│   ├── cms/                   # CMS services (pages, sections, entries)
│   ├── storage/               # Image processing & storage services
│   ├── ai/                    # AI services (metadata, embeddings)
│   ├── renderer.ts            # Nunjucks template rendering
│   ├── vector-index.ts        # LanceDB vector search
│   ├── session-service.ts     # Session management
│   └── approval-queue.ts      # HITL approval coordination
├── routes/                    # API routes
│   ├── agent.ts               # SSE streaming endpoints
│   ├── sessions.ts            # Session CRUD routes
│   ├── upload.ts              # Image upload endpoint
│   └── images.ts              # Image serving endpoints
├── middleware/                # Express middleware
│   └── upload.ts              # Multer file upload validation
├── queues/                    # Job queues
│   └── image-queue.ts         # BullMQ image processing queue
├── workers/                   # Background workers
│   └── image-worker.ts        # Image processing worker
├── agent/                     # AI agent orchestrator
│   └── orchestrator.ts        # Unified ReAct agent (native AI SDK v6)
├── tools/                     # Agent tools (44 tools)
│   ├── all-tools.ts           # Tool registry with experimental_context
│   ├── image-tools.ts         # 8 image management tools
│   ├── post-tools.ts          # 7 blog/post tools
│   ├── site-settings-tools.ts # 5 navigation tools
│   ├── web-research-tools.ts  # 3 Exa AI web research tools
│   └── pexels-tools.ts        # 2 stock photo tools
├── prompts/                   # Single unified prompt
│   └── react.xml              # ReAct pattern prompt
├── templates/                 # Nunjucks templates
│   ├── layout/                # Page layout (HTML shell)
│   ├── sections/              # Section templates (hero, feature, cta)
│   └── assets/                # Static assets (CSS)
├── index.ts                   # API server (port 8787)
├── preview.ts                 # Preview server (port 4000)
└── utils/                     # Helper functions

app/                           # Next.js frontend
├── assistant/                 # Main assistant UI
│   ├── page.tsx               # Layout (chat + debug panel)
│   ├── _components/           # Chat pane, enhanced debug panel
│   │   └── enhanced-debug/    # LangSmith-inspired trace observability
│   ├── _hooks/                # use-agent (SSE streaming + pattern detection)
│   └── _stores/               # chat-store, trace-store, session-store, models-store
├── api/                       # Next.js API routes (proxies)
└── globals.css                # OKLCH theme with blue bubbles

data/                          # Local data (git-ignored)
├── sqlite.db                  # SQLite database
└── lancedb/                   # Vector index

uploads/                       # Media files (git-ignored)
└── images/                    # Uploaded images
    └── YYYY/MM/DD/
        ├── original/          # Full-size originals
        └── variants/          # Responsive sizes (WebP/AVIF)
Start Services:
pnpm start # Start dev processes (server, preview, web, worker)
pnpm start:redis # Start Redis only
pnpm start:all # Start Redis + show dev instructions
pnpm dev             # Alias for 'start'

Stop Services:
pnpm stop # Stop dev processes only
pnpm stop:redis # Stop Redis only
pnpm stop:all        # Stop everything (dev + Redis)

Utilities:
pnpm restart # Restart dev processes
pnpm status # Check what's running
pnpm ps              # Process monitor - shows all services, ports, and duplicates

Individual Services (if needed):
pnpm dev:server # API server only (port 8787)
pnpm dev:preview # Preview server only (port 4000)
pnpm dev:web # Next.js only (port 3000)
pnpm dev:worker      # Worker only

Database & Data:

- `pnpm db:push` - Push schema changes to SQLite
- `pnpm db:studio` - Open Drizzle Studio
- `pnpm seed` - Seed database with sample data
- `pnpm seed:images` - Download and process 3 sample images
- `pnpm check:images` - Verify image setup
- `pnpm retry:images` - Retry failed image processing jobs
- `pnpm reindex` - Populate vector index with existing data

Reset & Verify:

- `pnpm reset:system` - Clear Redis cache and checkpoint DB (~2s)
- `pnpm reset:data` - Truncate tables, reseed data (~15-20s)
- `pnpm reset:complete` - Nuclear reset with schema recreation (~18-25s)
- `pnpm verify` - Run 10 health checks (Redis, DB, images, ports)

Code Quality & Build:

- `pnpm typecheck` - Check TypeScript types
- `pnpm lint` - Run Biome linter
- `pnpm format` - Format code with Biome
- `pnpm build` - Build for production
- `pnpm prod` - Start production server
Base URL: http://localhost:8787/v1/teams/dev-team/sites/local-site/environments/main
Pages:

- `GET /pages` - List all pages
- `POST /pages` - Create new page
- `GET /pages/:id` - Get page by ID
- `PUT /pages/:id` - Update page
- `DELETE /pages/:id` - Delete page
- `GET /pages/:id/contents` - Get page with sections

Sections:

- `GET /sections` - List section definitions
- `POST /sections` - Create section definition
- `GET /sections/:id` - Get section by ID
- `PUT /sections/:id` - Update section
- `DELETE /sections/:id` - Delete section

Collections:

- `GET /collections` - List collections
- `POST /collections` - Create collection
- `GET /collections/:id/entries` - List entries
- `POST /collections/:id/entries` - Create entry

Search:

- `POST /search/resources` - Vector-based fuzzy search with a body like `{ "query": "homepage", "type": "page", "limit": 3 }`

Images:

- `POST /api/upload` - Upload images (1-10 files)
- `GET /api/images/:id/status` - Check processing status
- `GET /api/images/:id/details` - Full metadata & variants
- `GET /api/images/:id/thumbnail` - Serve 150x150 WebP thumbnail
- `GET /api/images/search?q=query` - Semantic image search
- `POST /api/images/find` - Find best match by description
- `DELETE /api/images/:id` - Delete image with cascade
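A quick way to exercise these endpoints from TypeScript (a minimal sketch; the base URL is the one above, and the response shapes are whatever the API returns):

```typescript
const base =
  "http://localhost:8787/v1/teams/dev-team/sites/local-site/environments/main";

// List all pages.
const pages = await fetch(`${base}/pages`).then((r) => r.json());

// Vector-based fuzzy search for a resource.
const matches = await fetch(`${base}/search/resources`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "homepage", type: "page", limit: 3 }),
}).then((r) => r.json());

console.log(pages, matches);
```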
The preview server renders your CMS pages as a real website using Nunjucks templates.
Base URL: http://localhost:4000
- `GET /` - Redirects to `/pages/home?locale=en` (default homepage)
- `GET /pages/:slug?locale=en` - Render page as HTML
- `GET /pages/:slug/raw?locale=en` - Get page data as JSON (debugging)
- `GET /assets/*` - Static assets (CSS, images)
- `GET /health` - Health check with template registry
Note: Apart from this redirect, the root path (/) has no content of its own and would return a 404 - expected behavior, since the preview server is designed to render specific page slugs.
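When debugging a page, comparing the rendered HTML with the raw JSON is often the quickest check (a small sketch using the endpoints above):

```typescript
// Rendered HTML vs. underlying page data for the home page.
const html = await fetch("http://localhost:4000/pages/home?locale=en").then((r) => r.text());
const raw = await fetch("http://localhost:4000/pages/home/raw?locale=en").then((r) => r.json());

console.log(`HTML bytes: ${html.length}`, raw);
```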
IMPORTANT: Navigation links must use the preview URL format:
/pages/{slug}?locale=en
Examples:
- `/pages/home?locale=en` ✅
- `/pages/about?locale=en` ✅
- `/pages/contact?locale=en` ✅
Wrong (causes 404):
- `/` ❌
- `/about` ❌
- `/contact` ❌
The AI agent automatically uses this format when adding pages to navigation after creation.
Templates are located in server/templates/:
- Layout: `layout/page.njk` - HTML shell with `<head>` and `<body>`
- Sections: `sections/{templateKey}/{variant}.njk` - Section components
  - `hero/default.njk` - Standard hero section
  - `hero/centered.njk` - Centered hero variant
  - `feature/default.njk` - Feature list section
  - `cta/default.njk` - Call-to-action section
- Fallback: `sections/_default.njk` - Used when template not found
- Assets: `assets/styles.css` - Production-quality CSS

Custom filters:

- `{{ text | markdown }}` - Render markdown to HTML
- `{{ text | truncate(100) }}` - Truncate text to N characters
- `{{ path | asset }}` - Resolve asset URL
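These filters are registered on the Nunjucks environment by the renderer. A minimal sketch of what that registration might look like (the `marked` dependency and the filter bodies are assumptions; see server/services/renderer.ts for the real implementation):

```typescript
import nunjucks from "nunjucks";
import { marked } from "marked"; // assumption: any markdown renderer would do

const env = new nunjucks.Environment(
  new nunjucks.FileSystemLoader("server/templates"),
  { autoescape: true },
);

// {{ text | markdown }} - render markdown to HTML (output treated as safe)
env.addFilter("markdown", (text: string) =>
  new nunjucks.runtime.SafeString(marked.parse(text ?? "") as string),
);

// {{ text | truncate(100) }} - truncate text to N characters
env.addFilter("truncate", (text: string, n = 100) =>
  text && text.length > n ? `${text.slice(0, n)}…` : text,
);

// {{ path | asset }} - resolve a path against the preview server's /assets route
env.addFilter("asset", (p: string) => `/assets/${String(p).replace(/^\/+/, "")}`);
```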
- Frontend: Next.js 15, React 19, Tailwind CSS, shadcn/ui
- Backend: Express, Drizzle ORM, SQLite
- Templates: Nunjucks with custom filters (markdown, truncate, asset)
- AI: Vercel AI SDK v6 (native patterns), OpenRouter (GPT-4o-mini)
- Vector Search: LanceDB with OpenRouter embeddings
- State: Zustand with localStorage persistence
- Image Processing: Sharp, BullMQ, Redis, CLIP embeddings
- Storage: Filesystem with date-based organization (optional CDN)
What's running:
- ✅ Redis (brew service) - Job queue for BullMQ worker
- ✅ SQLite (file-based) - Main database (no service needed)
- ✅ LanceDB (file-based) - Vector search index (no service needed)
- ❌ Docker - Not used in this project
Quick commands:
pnpm status # Check what's running (simple)
pnpm ps # Process monitor (detailed - shows duplicates!)
pnpm start:all # Start everything (Redis + dev)
pnpm stop:all        # Stop everything

This project uses a 3-server architecture:
- API Server (port 8787): RESTful CRUD operations + AI agent streaming
- Preview Server (port 4000): Renders pages as HTML using Nunjucks templates
- Next.js (port 3000): AI assistant UI with blue chat bubbles
- Worker (background): Image processing queue (BullMQ + Redis)
Native AI SDK v6 pattern - no custom abstractions:
- Single agent with all 44 tools available always
- Think → Act → Observe → Repeat autonomous loop
- Max 15 steps per conversation turn
- Auto-retry with exponential backoff (3 attempts)
- Auto-checkpoint every 3 steps for crash recovery
- Streaming SSE with execution log events
- Agent status indicator - real-time UI feedback showing thinking/tool states
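A condensed sketch of how this loop might be wired with the AI SDK. The provider setup and helper names (`reactPrompt`, `allTools`) are assumptions, and the option names mirror the snippets later in this README - treat it as a sketch, not the actual orchestrator code:

```typescript
import { streamText, stepCountIs, type ModelMessage, type ToolSet } from "ai";
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY! });

// One conversation turn: the model thinks, calls tools, and observes results
// until it finishes or hits the 15-step cap.
export function runAgentTurn(opts: {
  reactPrompt: string;       // compiled from server/prompts/react.xml
  messages: ModelMessage[];  // loaded from the session checkpoint
  tools: ToolSet;            // the 44 registered tools
  context: unknown;          // services + sessionId, surfaced to tools via experimental_context
}) {
  return streamText({
    model: openrouter("openai/gpt-4o-mini"),
    system: opts.reactPrompt,
    messages: opts.messages,
    tools: opts.tools,
    stopWhen: stepCountIs(15),          // max 15 steps per conversation turn
    experimental_context: opts.context, // what each tool's execute() receives
    // prepareStep (shown later in this README) adds checkpointing + history trimming
  });
}
```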
Problem: Fetching entire pages with all content wastes tokens when user asks for one specific field.
Solution: Granular fetching with 40-96% token savings:
- Lightweight First (default):
  `cms_getPage({ slug: "about" })` - returns metadata + section IDs (~100 tokens)
  `cms_getSectionContent({ pageSectionId: "s1" })` - gets only the needed content (~150 tokens)
  Total: ~250 tokens vs ~2000 tokens with a full fetch
- Full Fetch (opt-in):
  `cms_getPage({ slug: "about", includeContent: true })` - all content (~2000 tokens)
New Tools:
- `cms_getPageSections` - Get all sections for a page (metadata or full content)
- `cms_getSectionContent` - Get content for one specific section
- `cms_getCollectionEntries` - Get all entries for a collection (metadata or full content)
- `cms_getEntryContent` - Get content for one specific entry
Agent learns optimal strategy: ReAct pattern naturally prefers efficient granular fetching for targeted queries.
The agent uses a single unified prompt (server/prompts/react.xml) - 82 lines that replace the previous 800+ line modular prompt system.
Single file with embedded examples:
- Agent identity and ReAct pattern
- Think → Act → Observe → Repeat instructions
- Complete example session (create page + add sections)
- Tool list (injected dynamically)
- Session context (sessionId, date)
<agent>
You are an autonomous AI assistant using the ReAct pattern.
**CORE LOOP:**
Think → Act → Observe → Repeat until completion
**CRITICAL RULES:**
1. THINK before acting
2. EXECUTE immediately (no permission needed)
3. CHAIN operations (multi-step in one turn)
4. OBSERVE results
5. RECURSE when needed
**EXAMPLE SESSION:**
User: "Add a hero section to the about page"
[Shows complete multi-step flow with thinking, tool calls, observations]
**AVAILABLE TOOLS:** {{toolCount}} tools
{{toolsFormatted}}
</agent>

Benefits:

- ✅ Simpler: 82 lines vs 800+ lines (90% reduction)
- ✅ Faster: No composition overhead (~0ms vs ~1ms)
- ✅ Clearer: Everything in one file, easy to understand
- ✅ More effective: Agent sees complete example flow
- ✅ Hot-reload: Edit and test immediately
To modify the prompt:

- Edit `server/prompts/react.xml`
- Server auto-reloads in development
- Test with the agent immediately
- Use Handlebars syntax for variables: `{{toolCount}}`
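Variable injection is then a single compile-and-call; a hypothetical sketch (the values shown are illustrative, not the orchestrator's actual inputs):

```typescript
import { readFileSync } from "node:fs";
import Handlebars from "handlebars";

// Illustrative inputs; the real values come from the tool registry and session.
const toolsFormatted = "- cms_getPage: Get page by slug or ID\n- cms_createPage: Create a new page";
const sessionId = "session-123";

const template = Handlebars.compile(readFileSync("server/prompts/react.xml", "utf8"));
const systemPrompt = template({
  toolCount: 44,
  toolsFormatted,
  sessionId,
  date: new Date().toISOString(),
});
```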
The agent uses native AI SDK v6 patterns without custom abstractions:
Tools created once with execute functions that receive context automatically:
export const cmsGetPage = tool({
description: "Get page by slug or ID",
inputSchema: z.object({
slug: z.string().optional(),
}),
execute: async (input, { experimental_context }) => {
const ctx = experimental_context as AgentContext;
return await ctx.services.pageService.getPageBySlug(input.slug);
},
});

No factories, no wrappers, no recreation - tools are passed as-is to the agent.
Replaces 331-line memory manager with 15 lines:
prepareStep: async ({ stepNumber, messages }) => {
// Auto-checkpoint every 3 steps
if (stepNumber % 3 === 0) {
await sessionService.saveMessages(sessionId, messages);
}
  // Trim history once it grows past 20 messages (keep the first message + last 10)
if (messages.length > 20) {
return { messages: [messages[0], ...messages.slice(-10)] };
}
return {};
};

Built into the orchestrator, following the v0 pattern:
async function executeWithRetry() {
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
return await agent.generate({ messages, experimental_context });
} catch (error) {
if (attempt < maxRetries - 1) {
const delay = baseDelay * Math.pow(2, attempt) + jitter();
await sleep(delay);
continue;
}
throw error;
}
}
}

Simple message array save/load:
// Save checkpoint
await sessionService.saveMessages(sessionId, messages);
// Resume from checkpoint
const messages = await sessionService.loadMessages(sessionId);
const result = await agent.generate({ messages, experimental_context });

Benefits:

- ✅ Simpler: 28% less code (1,200 → 860 lines)
- ✅ Native: Follows AI SDK v6 patterns exactly
- ✅ Reliable: No "_zod" errors, no context issues
- ✅ Fast: No overhead from abstractions
- ✅ Maintainable: Easy to understand and extend
Multiple chat sessions with full history persistence:
- Unlimited sessions - Create as many conversations as needed
- Session sidebar - Switch between sessions instantly
- Full history - All messages saved to SQLite database
- Auto-save - Messages persisted after each agent response
- Smart titles - Auto-generated from first user message
- Session actions - Clear history or delete session
Backend (server/services/session-service.ts):
class SessionService {
async createSession(title?: string): Promise<Session>;
async loadMessages(sessionId: string): Promise<ModelMessage[]>;
async saveMessages(sessionId: string, messages: ModelMessage[]): Promise<void>;
async updateSessionTitle(sessionId: string, title: string): Promise<void>;
async deleteSession(sessionId: string): Promise<void>;
async clearSessionHistory(sessionId: string): Promise<void>;
}

Frontend (app/assistant/_stores/session-store.ts):
interface SessionStore {
sessions: Session[];
currentSessionId: string | null;
loadSessions(): Promise<void>;
createSession(): Promise<void>;
switchSession(sessionId: string): Promise<void>;
deleteSession(sessionId: string): Promise<void>;
}

-- sessions table
CREATE TABLE sessions (
id TEXT PRIMARY KEY,
title TEXT NOT NULL,
checkpoint TEXT, -- JSON checkpoint data
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL
)
-- messages table
CREATE TABLE messages (
id TEXT PRIMARY KEY,
session_id TEXT NOT NULL REFERENCES sessions(id) ON DELETE CASCADE,
role TEXT NOT NULL, -- 'user' | 'assistant' | 'tool'
content TEXT NOT NULL, -- JSON content
tool_name TEXT,
step_idx INTEGER,
created_at INTEGER NOT NULL
)

Professional interface with modern color system:
- Blue chat bubbles - Assistant messages in light blue/purple gradient
- 2/3 chat layout - Chat is main focus (was 1/3 in old design)
- 1/3 execution log - Terminal icon with color-coded events
- OKLCH colors - Modern color system with better perceptual uniformity
- Reduced border radius - 0.375rem for sharper, more professional look
- Improved dark mode - Better contrast and consistent theming
- Responsive - Session sidebar hidden on mobile
/* Primary colors (purple/blue) */
--primary: 262.1 83.3% 57.8%
--primary-foreground: 210 20% 98%
/* Assistant message bubbles */
.message-assistant {
background: linear-gradient(135deg, oklch(var(--primary) / 0.1) 0%, oklch(var(--primary) / 0.15) 100%);
border-left: 3px solid oklch(var(--primary));
}

Layout:

┌──────────────────────────────────────────────┐
│ Header: Bot Icon + "CMS ReAct Agent"         │
├────────────┬─────────────────────────────────┤
│ Session    │ Chat Pane                       │
│ Sidebar    │ (Blue Bubbles)                  │
│ (1/6)      │ (2/3)                           │
│            ├─────────────────────────────────┤
│            │ Enhanced Debug Panel            │
│            │ (Trace timeline, filters,       │
│            │  working memory, metrics)       │
│            │ (1/3)                           │
└────────────┴─────────────────────────────────┘
Complete image management system with semantic search and agent integration.
- AI Metadata Generation - GPT-4o-mini automatically generates descriptions, tags, categories, colors, and mood
- Semantic Search - Find images by natural language ("sunset photo", "blue product image")
- Agent Tools - 8 dedicated tools for finding, attaching, replacing, and deleting images
- Deduplication - SHA256 hash checking prevents duplicate storage
- Async Processing - BullMQ + Redis queue handles metadata, variants, and embeddings (see the sketch below)
- Modern Formats - Automatic WebP/AVIF variants in 3 sizes (640w, 1024w, 1920w)
- Status Tracking - Real-time processing status (processing → completed → failed)
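The async pipeline mentioned above is driven by a BullMQ queue plus a worker; a rough sketch of how that might fit together (queue name, payload, and connection details are assumptions - the real code lives in server/queues/image-queue.ts and server/workers/image-worker.ts):

```typescript
import { Queue, Worker } from "bullmq";

const connection = { host: "127.0.0.1", port: 6379 }; // local Redis

// Producer (API server): enqueue a job once the upload is written to disk.
export const imageQueue = new Queue("image-processing", { connection });
await imageQueue.add("process-image", {
  imageId: "uuid",
  path: "/uploads/images/2025/11/22/original/uuid.jpg",
});

// Consumer (worker): metadata, variants, and embeddings happen off the request path.
new Worker(
  "image-processing",
  async (job) => {
    const { imageId } = job.data;
    // 1. AI metadata (description, tags, categories, colors, mood)
    // 2. WebP/AVIF variants via Sharp (640w, 1024w, 1920w)
    // 3. CLIP embedding written to the LanceDB vector index
    return { imageId, status: "completed" };
  },
  { connection },
);
```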
# 1. Start all services (one command)
pnpm start:all # Starts Redis + shows dev command
pnpm start # Start dev processes
# Or check status first
pnpm status # See what's running
# 2. Seed sample images (optional - 3 test images)
pnpm seed:images
# Downloads: mountain landscape, golden puppy, desk workspace
# Wait 5-10 seconds for processing
# 3. Upload an image (or use seed:images above)
curl -X POST http://localhost:8787/api/upload \
-F "[email protected]" \
-F "sessionId=test-123"
# 4. Search for images
curl "http://localhost:8787/api/images/search?q=sunset&limit=5"
# Or try: "mountain", "puppy", "workspace" if you used seed:images
# 5. Test the complete pipeline
./scripts/test-image-upload.sh
# 6. Stop services when done
pnpm stop # Stop dev only (Redis stays running)
pnpm stop:all        # Stop everything

The agent has 8 image operation tools:
- cms_findImage - Find single image by natural language description
- cms_searchImages - Search for multiple images (semantic)
- cms_listConversationImages - List images uploaded in current session
- cms_listAllImages - List all images in the system
- cms_addImageToSection - Attach image to page section field
- cms_updateSectionImage - Update image in existing section
- cms_replaceImage - Replace image across all locations
- cms_deleteImage - Safe deletion with confirmation
Example prompts:
"Find the sunset photo and add it to the hero section"
"What images did I upload in this conversation?"
"Search for product images with blue backgrounds"
"Replace the old logo with the new one across all pages"
Section images use the Inline JSON Content Pattern - image data is stored directly in the page_section_contents.content JSON field:
Storage Example:
{
"title": "Welcome to Our CMS",
"image": {
"url": "/uploads/images/2025/11/22/original/uuid.jpg",
"alt": "AI-generated description"
},
"ctaText": "Get Started"
}

Why Inline JSON?

- ✅ Simpler - Content is self-contained
- ✅ Faster - No database joins on render
- ✅ Template-friendly - Direct access to image data
- ✅ Industry standard - Matches WordPress, Contentful, Strapi
Agent Tools:
- `cms_updateSectionImage` - Update image field in section
- `cms_addImageToSection` - Add image to section field
- `cms_replaceImage` - Find and replace images across sections
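Conceptually these tools just read and rewrite the section's content JSON; a hypothetical sketch of what cms_addImageToSection might do (the service and field names are assumptions - the real logic lives in server/tools/image-tools.ts):

```typescript
// Hypothetical sketch of attaching an image to a section field via the inline JSON pattern.
// The `imageService` / `sectionService` method names are assumptions for illustration.
async function addImageToSection(
  services: { imageService: any; sectionService: any },
  input: { pageSectionId: string; field: string; imageId: string },
) {
  const image = await services.imageService.getById(input.imageId);
  const section = await services.sectionService.getContent(input.pageSectionId);

  // Merge the image reference directly into the content JSON (no join table).
  const content = {
    ...section.content,
    [input.field]: { url: image.originalUrl, alt: image.aiDescription },
  };

  await services.sectionService.updateContent(input.pageSectionId, content);
  return { ok: true, field: input.field, url: image.originalUrl };
}
```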
See docs/IMAGE_ARCHITECTURE.md for complete architecture guide and decision record.
- docs/IMAGE_HANDLING_README.md - Complete API reference and examples
- docs/IMAGE_SETUP_CHECKLIST.md - Setup verification checklist
- docs/IMAGE_SYSTEM_COMPLETE.md - Implementation summary
- docs/IMAGE_ARCHITECTURE.md - Architecture pattern and decision record
Free stock photo integration for finding and downloading professional images.
- Search Photos - Find high-quality stock photos by keyword
- Download Photos - Download and process into the CMS image pipeline
- Free License - All Pexels photos are free for commercial use
- Auto-Processing - Downloaded images go through the full AI metadata pipeline
- Get a Pexels API key from https://www.pexels.com/api/
- Add it to .env:

  PEXELS_API_KEY=your-api-key-here

| Tool | Use Case |
|---|---|
| `pexels_searchPhotos` | Find stock photos by keyword |
| `pexels_downloadPhoto` | Download photo to CMS library |
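Under the hood, pexels_searchPhotos presumably wraps the public Pexels search endpoint; a hedged sketch of such a call (the request/response shape is the standard Pexels API, but how the tool maps it into the CMS is an assumption):

```typescript
// Query the Pexels photo search API directly.
const res = await fetch(
  `https://api.pexels.com/v1/search?query=${encodeURIComponent("mountains")}&per_page=5`,
  { headers: { Authorization: process.env.PEXELS_API_KEY! } },
);
const data = await res.json();

// Each photo exposes several pre-sized URLs plus attribution.
for (const photo of data.photos) {
  console.log(photo.alt, photo.photographer, photo.src.large);
}
```

The quoted prompts below show how you would ask the agent to use these tools.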
"Find stock photos of mountains"
"Search for professional office workspace images"
"Download a sunset photo for the hero section"
AI-powered web research tools for gathering fresh, up-to-date information from the web.
- Quick Search - Fast web lookups for news, facts, and resources
- Deep Research - Comprehensive multi-source research with citations
- Content Fetch - Extract full text and summaries from URLs
- Smart Mode Selection - Agent automatically chooses shallow vs deep research
- Get an Exa API key from https://dashboard.exa.ai
- Add it to .env:

  EXA_API_KEY=your-api-key-here
  EXA_DEFAULT_MODEL=exa-research    # or exa-research-pro for higher quality
  EXA_RESEARCH_TIMEOUT=120          # seconds (30-300)

- Test the integration:
# Quick search test
pnpm tsx scripts/test-exa-search.ts quick
# Deep research test (takes 30-90s)
pnpm tsx scripts/test-exa-search.ts deep
# Fetch URL content
pnpm tsx scripts/test-exa-search.ts fetch https://example.com

Three web research tools are available to the agent:
| Tool | Use Case | Speed |
|---|---|---|
| `web_quickSearch` | Quick facts, news, links | <5s |
| `web_deepResearch` | Blog posts, comprehensive pages | 30-120s |
| `web_fetchContent` | Read specific URLs | <10s |
Quick Search (shallow):
- "What's the weather in Paris?"
- "Latest AI news"
- "Find React documentation link"
Deep Research (comprehensive):
- "Create a blog post about sustainable fashion, search the web for recent trends"
- "Build an about page, research AI industry statistics"
- "Write an article on renewable energy innovations"
"What's the current Bitcoin price?"
→ Uses web_quickSearch with livecrawl: "always" for real-time data
"Create a blog post about electric vehicles, research the latest developments"
→ Uses web_deepResearch first, then cms_createPost with research findings
"Read this article and summarize: https://example.com/article"
→ Uses web_fetchContent with includeSummary: true
| Operation | exa-research | exa-research-pro |
|---|---|---|
| Search | $5/1k queries | $5/1k queries |
| Page read | $5/1k pages | $10/1k pages |
| Research task | ~$0.10-0.20 | ~$0.15-0.30 |
Costs are logged in tool responses for visibility.
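For orientation, web_quickSearch most likely reduces to a single call through the exa-js client; a sketch under that assumption (the option names mirror the ones mentioned above, not necessarily the tool's exact request):

```typescript
import Exa from "exa-js";

const exa = new Exa(process.env.EXA_API_KEY);

// Quick, shallow lookup: a handful of results with page text included.
const { results } = await exa.searchAndContents("latest AI news", {
  numResults: 5,
  text: true,
  livecrawl: "always", // force fresh crawls for time-sensitive queries
});

for (const r of results) {
  console.log(r.title, r.url);
}
```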
Prevent duplicate processes (avoid resource drain):
# 1. ALWAYS check first (make it a habit!)
pnpm ps # Shows all services, ports, and duplicates
# 2. If anything running, stop it first
pnpm stop:all # Clean slate
# 3. Start fresh
pnpm start:all # Start Redis
pnpm start # Start dev processes
# 4. When done for the day
pnpm stop            # Stop dev (leave Redis running)

Duplicate processes happen when:
- ❌ Using Ctrl+C (doesn't kill child processes from concurrently)
- ❌ Terminal crashes (leaves orphaned processes)
- ❌ Starting without stopping (stacks processes)
Prevention:
- ✅ Always use `pnpm stop` or `pnpm stop:all` (never Ctrl+C)
- ✅ Check `pnpm ps` before starting (catch duplicates early)
- ✅ Run `pnpm ps` when things feel slow (likely duplicates)
- ✅ Weekly cleanup: `pnpm stop:all` to fully reset
pnpm ps shows:
- Redis status - Running/Stopped
- Port usage - What's using 8787, 4000, 3000, 6379
- Project processes - All tsx/node/pnpm processes with PIDs
- Duplicate detection - Highlights when multiple instances are running
- Zombie detection - Finds old processes from previous sessions
| Problem | Solution |
|---|---|
| Agent not responding | Check API logs, verify OpenRouter API key |
| Blue bubbles not showing | Hard refresh (Cmd+Shift+R), check globals.css |
| Tool execution fails | Check execution log, agent auto-retries 3x |
| Database locked | pnpm stop:all, rm data/sqlite.db, re-seed |
| Vector search no results | Run pnpm reindex |
| Port in use | Run pnpm ps to see what's using it, then pnpm stop:all |
| Redis connection refused | pnpm start:redis, verify with redis-cli ping |
| Worker not processing | Check pnpm ps, restart with pnpm restart |
| Image upload fails | Check UPLOADS_DIR exists, verify file size limits |
| Image search no results | Wait for processing, check status shows "completed" |
| Duplicate/zombie processes | Run pnpm ps to identify, then pnpm stop:all |
| System slow/high CPU | Run pnpm ps - likely duplicate processes |
Three-tier reset system for different scenarios:
# 1. Cache Reset (fastest - ~2s)
pnpm reset:system
# Clears Redis cache, checkpoints DB (WAL files)
# Kills orphaned processes
# Use when: Things feel slow or broken
# 2. Data Reset (fast - ~15-20s)
pnpm reset:data
# Truncates all tables (preserves schema)
# Clears uploads, vector store
# Reseeds data + processes images
# Use when: Need fresh data, schema unchanged
# 3. Complete Reset (nuclear - ~18-25s)
pnpm reset:complete
# Deletes entire database + schema
# Clears all caches, uploads, processes
# Recreates schema + reseeds + processes images
# Use when: Schema changed or deep corruption
# 4. System Verification
pnpm verify
# Runs 10 health checks (Redis, DB, images, ports, etc.)
# Use after: Any reset to confirm system state

When to use each:
- reset:system: Browser cache issues, stale sessions, slow performance
- reset:data: Testing fresh data, navigation changes, content updates
- reset:complete: Schema migrations, corrupted database, major refactors
- verify: After any reset, before reporting bugs
See scripts:
- `scripts/reset-system.ts` - Cache + checkpoint reset
- `scripts/reset-data-only.ts` - Data-only reset
- `scripts/reset-complete.ts` - Nuclear reset with verification
- `scripts/verify-system.ts` - 10-point health check
See QUICK_REFERENCE.md for detailed troubleshooting.
Comprehensive 7-layer architecture documentation in docs/architecture/:
| Layer | Name | Description |
|---|---|---|
| 1 | Server Core | Express bootstrap, middleware, routes |
| 2 | Database | Drizzle ORM, entity hierarchy, vector storage |
| 3 | Agent System | ReAct loop, 44 tools, working memory, HITL |
| 4 | Services | CMS, sessions, image processing, renderer |
| 5 | Background | Redis, BullMQ queues, worker lifecycle |
| 6 | Client | Zustand stores, SSE streaming, chat components |
| 7 | Rendering | Nunjucks engine, templates, preview server |
Each layer has detailed sub-documents (e.g., LAYER_3.2_TOOLS.md for tool system).
Layer 6 (Client) highlights:
- `LAYER_6.5_HITL_UI.md` - Human-in-the-loop approval modal
- `LAYER_6.6_TRACE_OBSERVABILITY.md` - LangSmith-inspired debug panel with 23 trace entry types, working memory visualization, system prompt inspection
See docs/PROGRESS.md for complete sprint-by-sprint details.
Major Milestones:
- ✅ Sprints 0-11: Foundation, CMS API, Agent Core, Modular Prompts, Frontend
- ✅ Sprint 12: Native AI SDK v6 Refactor (28% code reduction)
- ✅ Sprint 13: Unified ReAct Agent (no modes, single prompt)
- ✅ Sprint 14: Modern UI with OKLCH theme (blue bubbles)
- ✅ Sprint 22: Enhanced Debug Panel (LangSmith-inspired trace observability)
Key Refactors:
- Native AI SDK v6 Pattern - Eliminated custom abstractions
- Unified ReAct Agent - Removed mode complexity
- UI Overhaul - Modern design with blue bubbles
Current Status: Production-ready prototype with 44 tools across 8 categories (CMS, images, posts, navigation, search, web research, stock photos, HTTP), unified agent, modern UI with real-time status indicator, AI-powered image management, and LangSmith-inspired trace observability.
The Enhanced Debug Panel provides deep visibility into agent execution, inspired by LangSmith's tracing UI.
- 23 Trace Entry Types - Tool calls, results, errors, memory updates, retries, checkpoints
- Duration Tracking - Automatic timing for tool calls and total trace duration
- Working Memory Panel - Real-time visualization of tracked entities (pages, sections, images)
- System Prompt Inspection - View the compiled prompt sent to the LLM
- Filtering & Search - Filter by type groups (LLM, Tools, Memory, Jobs), levels, full-text search
- Export - Copy logs to clipboard or export full trace as JSON
- Multi-Trace Support - Switch between trace sessions
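The trace-store presumably models each of these events as a discriminated entry type; a hypothetical sketch of what one entry might look like (field and type names are assumptions - see app/assistant/_stores/trace-store.ts for the real shapes):

```typescript
// Hypothetical shape of a trace entry as the debug panel might store it.
type TraceEntryType =
  | "llm_start" | "llm_finish"
  | "tool_call" | "tool_result" | "tool_error"
  | "memory_update" | "checkpoint" | "retry"
  | "job_enqueued" | "job_completed"; // ...remaining entry types omitted

interface TraceEntry {
  id: string;
  traceId: string;        // groups entries belonging to one agent turn
  type: TraceEntryType;
  level: "debug" | "info" | "warn" | "error";
  label: string;          // e.g. tool name or step description
  data?: unknown;         // raw payload (tool input/output, memory snapshot)
  startedAt: number;      // epoch ms
  durationMs?: number;    // filled in when the matching end event arrives
}
```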
- docs/development/DEBUG_LOGGING_SYSTEM.md - Complete technical reference
- docs/architecture/LAYER_6.6_TRACE_OBSERVABILITY.md - Architecture layer documentation