This document is the canonical technical narrative for the project. It describes what exists today, why it is designed this way, and what changes are planned by phase.
Deliver a fast, shareable professional site at https://paulprae.com that presents a high-quality AI-generated resume and downloadable artifacts (PDF, DOCX, Markdown).
- Keep Phase 1 simple, reproducible, and low-cost.
- Generate recruiter-facing artifacts from structured career data.
- Deploy as static output for predictable hosting behavior.
- Preserve clear migration paths for Phase 2 (interactive) and Phase 3 (graph/agent workflows).
- Frontend: Next.js App Router static site (`output: 'export'`)
- Backend at request time: none (no API routes, no SSR)
- Build-time AI: Claude API invoked locally by scripts
- Hosting: Vercel static hosting from `out/`
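The static-export constraint above corresponds to a one-line Next.js configuration. A minimal sketch, assuming the project uses a `next.config.ts` at the repo root (the actual file in the repo is authoritative):

```typescript
// next.config.ts (illustrative): Phase 1 static export, no server runtime.
const nextConfig = {
  output: "export", // emit a fully static site into out/ at build time
};

export default nextConfig;
```

With `output: 'export'`, any attempt to add API routes or request-time SSR fails the build, which enforces the "no backend at request time" boundary mechanically.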
| Layer | Technology |
|---|---|
| Framework | Next.js 16 + TypeScript |
| Styling | Tailwind CSS 4 |
| Markdown rendering | react-markdown + remark-gfm |
| AI generation | @anthropic-ai/sdk (Claude Opus 4.6) |
| Validation | Zod |
| Testing | Vitest |
| Export | Pandoc (DOCX) + Typst (PDF) |
| Deployment | Vercel |
- Site must remain static-export compatible in Phase 1.
- No server runtime secrets are needed in Vercel for generation.
- Generated resume markdown is an artifact; source-of-truth logic is in generation scripts.
- Recruiter-facing data is versioned in git; raw LinkedIn exports remain local/gitignored.
- LinkedIn CSV exports under `data/sources/linkedin/` (gitignored)
- Curated knowledge JSON under `data/sources/knowledge/` (committed)
- `data/generated/career-data.json` (normalized structured data)
- `data/generated/Paul-Prae-Resume.md` (AI-generated source resume)
- `public/Paul-Prae-Resume.{pdf,docx,md}` (served downloadable assets)
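The schema of `career-data.json` is not spelled out here; the sketch below is an illustrative TypeScript model with a minimal runtime guard in the spirit of the pipeline's Zod validation. All field names (`Position`, `headline`, `highlights`, etc.) are assumptions for illustration, not the project's actual schema.

```typescript
// Hypothetical shape for data/generated/career-data.json.
// Field names are illustrative assumptions, not the real schema.
interface Position {
  title: string;
  company: string;
  startDate: string; // ISO date string, e.g. "2020-01-01"
  endDate?: string; // omitted for the current role
  highlights: string[];
}

export interface CareerData {
  name: string;
  headline: string;
  positions: Position[];
}

// Minimal runtime guard, standing in for the Zod schema the pipeline uses.
export function isCareerData(value: unknown): value is CareerData {
  const v = value as CareerData;
  return (
    typeof v === "object" &&
    v !== null &&
    typeof v.name === "string" &&
    typeof v.headline === "string" &&
    Array.isArray(v.positions)
  );
}
```

Keeping this shape versioned and validated is what makes the generated public-facing data "portable and reproducible" as required above.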
- Do not model private/sensitive information that should not be public.
- Keep generated public-facing data portable and reproducible.
- Keep credentials/tokens out of repository content and docs.
Pipeline order:
1. `npm run ingest` -> parse CSV/knowledge inputs into `career-data.json`
2. `npm run generate` -> generate Markdown resume from structured data
3. `npm run export` -> produce PDF/DOCX artifacts from Markdown
4. `npm run build` -> static export into `out/`
5. Push to `main` -> Vercel builds and deploys
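The local portion of this order (steps 1–4) can be sketched as a small orchestration script. This is illustrative only; the repo's actual npm scripts are the source of truth:

```typescript
// Illustrative orchestration of the build-time pipeline order.
import { execSync } from "node:child_process";

export const steps = [
  "npm run ingest", // CSV/knowledge inputs -> career-data.json
  "npm run generate", // structured data -> Markdown resume (Claude API)
  "npm run export", // Markdown -> PDF/DOCX artifacts
  "npm run build", // static export -> out/
];

// dryRun (default) returns the ordered commands without executing anything.
export function runPipeline(dryRun = true): string[] {
  if (!dryRun) {
    for (const cmd of steps) execSync(cmd, { stdio: "inherit" });
  }
  return steps;
}
```

Because each step only consumes the previous step's committed or generated artifact, any single step can be rerun in isolation (e.g. re-export without re-invoking the AI generation).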
Supporting commands:
- `npm run pipeline` for end-to-end execution
- `npm run pipeline:content` for AI generation only
- `npm run pipeline:render` for export/build from existing Markdown
- `npm run brand` for OG image and favicon assets
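The composite commands are most naturally expressed as compositions of the base steps. One plausible `package.json` fragment, inferred from the descriptions above (the actual script definitions in the repo are authoritative):

```json
{
  "scripts": {
    "pipeline": "npm run ingest && npm run generate && npm run export && npm run build",
    "pipeline:content": "npm run ingest && npm run generate",
    "pipeline:render": "npm run export && npm run build"
  }
}
```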
- Deployment behavior and commands are documented in `README.md`.
- DNS operations, verification, and rollback are documented in `domain-dns-runbook.md`.
- README: deployment workflow
- DNS runbook: domain records and propagation checks
- This document: architectural intent and system boundaries
- Pre-commit: `lint-staged` runs Prettier on staged files (via the husky `prepare` hook, which installs on `npm install`). Uses POSIX-safe nvm PATH detection (not `nvm.sh` sourcing) so hooks work under `dash`/`sh`. Windows Git clients (GitHub Desktop, VS Code) auto-delegate to WSL when `npx` is unavailable.
- `npm run lint`
- `npm run format:check`
- `npm test`
- `npm run test:pipeline` (validates generated outputs when available)
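The POSIX-safe detection mentioned above might look roughly like the sketch below. This is an illustrative reconstruction, not the project's actual hook: it finds an nvm-managed Node `bin` directory by listing the versions directory instead of sourcing `nvm.sh` (which is bash-only and breaks under `dash`/`sh`).

```shell
# Illustrative POSIX-safe nvm PATH detection for a git hook.
# Works under dash/sh because it never sources nvm.sh.
detect_node_bin() {
  # $1: nvm "versions/node" root (defaults to the conventional location)
  root="${1:-$HOME/.nvm/versions/node}"
  # Newest version dir by lexical sort; good enough for an illustration.
  ls -d "$root"/*/bin 2>/dev/null | sort | tail -n 1
}

# Self-contained demo against a throwaway fixture directory.
fixture=$(mktemp -d)
mkdir -p "$fixture/v18.20.0/bin" "$fixture/v22.11.0/bin"
detect_node_bin "$fixture" # prints the v22.11.0/bin path inside the fixture
rm -rf "$fixture"
```

A real hook would prepend the detected directory to `PATH` before invoking `npx lint-staged`. Note the lexical sort is a simplification: it misorders single- vs double-digit major versions (e.g. `v9` vs `v10`).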
- Validate live routing and downloadable assets after deployment
- Spot-check generated resume quality for factual consistency and tone
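The routing/asset validation step can be partially automated. A hypothetical smoke check, assuming the three `public/` artifacts listed earlier are served from the site root:

```typescript
// Hypothetical post-deploy smoke check: verify the live site and its
// downloadable artifacts respond successfully. Paths assume the public/
// artifacts listed in this document are served from the site root.
export const paths = [
  "/",
  "/Paul-Prae-Resume.pdf",
  "/Paul-Prae-Resume.docx",
  "/Paul-Prae-Resume.md",
];

export async function smokeCheck(base = "https://paulprae.com"): Promise<string[]> {
  const failures: string[] = [];
  for (const p of paths) {
    try {
      const res = await fetch(base + p, { method: "HEAD" });
      if (!res.ok) failures.push(`${p}: HTTP ${res.status}`);
    } catch (err) {
      failures.push(`${p}: ${String(err)}`);
    }
  }
  return failures; // empty array means every path is reachable
}
```

The tone/factual spot-check stays manual; only reachability is scriptable here.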
- Introduce server runtime and API routes
- Add recruiter chat and tailored resume generation
- Add Supabase + pgvector for retrieval workflows
- Add Neo4j knowledge graph
- Add agent/tool orchestration for richer multi-step reasoning
- Add automation workflows for ingestion/enrichment
Active implementation tasks are tracked in `.claude/plans/backlog.md`.
- No auth-gated admin interface
- No production API routes
- No dynamic server-side personalization at request time
- No database dependency for current static-site operation
- `README.md` (setup, pipeline, deployment)
- `CLAUDE.md` (project memory and guardrails)
- `docs/domain-dns-runbook.md` (domain DNS operations)
- `docs/linux-dev-environment-setup.md` (Linux/WSL setup)
- `docs/windows-dev-environment-setup.md` (Windows setup)
- `docs/mcp-setup.md` (MCP configuration)