An AI-powered documentation site built with Next.js and Fumadocs, integrated with Retrieval-Augmented Generation (RAG) for intelligent question answering.
## Tech Stack

- **Framework**: Next.js 16.0.1 with Turbopack
- **Documentation**: Fumadocs (UI + MDX)
- **LLM**: Groq (Llama 3.3 70B Versatile)
- **Vector Database**: Pinecone
- **Embeddings**: Xenova Transformers (all-MiniLM-L6-v2), runs locally
- **UI Library**: React 19.2, Tailwind CSS 4.1
## How It Works

```mermaid
graph LR
    A[User Question] --> B[Embed Query]
    B --> C[Search Pinecone]
    C --> D[Retrieve Context]
    D --> E[Send to Groq LLM]
    E --> F[Stream Response]
    F --> G[Display with References]
```
1. The user asks a question in the Ask AI chat
2. The query is embedded using the local Xenova transformer model
3. Pinecone searches for the 3 most relevant documentation chunks
4. The context is sent to Groq's Llama model with strict instructions to use only the provided context
5. The response is streamed back to the user
6. Reference links to the source documentation pages are appended
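The whole flow fits in a short handler. Below is a minimal sketch of that pipeline, not the project's actual `app/api/chat/route.ts`; the index name (`docs`) and the metadata shape (`{ text, url }`) are assumptions.

```ts
// Minimal RAG pipeline sketch; index name and metadata shape are assumed.
import { pipeline } from '@xenova/transformers';
import { Pinecone } from '@pinecone-database/pinecone';
import Groq from 'groq-sdk';

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

export async function answer(question: string) {
  // 1. Embed the query locally with all-MiniLM-L6-v2.
  const embed = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
  const output = await embed(question, { pooling: 'mean', normalize: true });

  // 2-3. Retrieve the 3 most relevant documentation chunks from Pinecone.
  const { matches } = await pinecone.index('docs').query({
    vector: Array.from(output.data as Float32Array),
    topK: 3,
    includeMetadata: true,
  });
  const context = matches.map((m) => m.metadata?.text).join('\n---\n');

  // 4-5. Ask Groq's Llama model, constrained to the retrieved context,
  // and stream the answer back token by token.
  const stream = await groq.chat.completions.create({
    model: 'llama-3.3-70b-versatile',
    stream: true,
    messages: [
      { role: 'system', content: `Answer ONLY from this context:\n${context}` },
      { role: 'user', content: question },
    ],
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
  }
}
```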
## Getting Started

### Prerequisites

- Node.js 18+ installed
- Groq API key (Get one here)
- Pinecone API key (Get one here)
### Installation

1. **Clone the repository**

   ```bash
   git clone <your-repo-url>
   cd my-app
   ```

2. **Install dependencies**

   ```bash
   npm install
   ```

3. **Set up environment variables**

   Create a `.env` file in the root directory:

   ```
   GROQ_API_KEY=your_groq_api_key_here
   PINECONE_API_KEY=your_pinecone_api_key_here
   ```

4. **Initialize the knowledge base**

   Run this command to index your documentation into Pinecone:

   ```bash
   npm run update-index
   ```

5. **Start the development server**

   ```bash
   npm run dev
   ```

6. Open http://localhost:3000 in your browser
## Project Structure

| Path | Description |
|---|---|
| `app/(home)` | Landing page and home routes |
| `app/docs` | Documentation layout and pages |
| `app/api/chat/route.ts` | AI chat API - handles RAG pipeline |
| `components/AskAI.tsx` | Chat UI component with markdown rendering |
| `content/docs/` | Your documentation (MDX files) |
| `lib/source.ts` | Content source adapter |
| `scripts/ingest.js` | Indexing script for Pinecone |
| `.github/workflows/` | CI/CD automation |
## Features

- **Context-aware**: only answers from your documentation
- **Streaming responses**: real-time response generation (see the sketch below)
- **Clickable references**: links to source documentation pages
- **Vector search**: fast semantic search using embeddings
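For illustration, streaming usually means forwarding Groq's token stream to the browser through a web `ReadableStream`. The hypothetical route handler below shows only that part; the project's actual `app/api/chat/route.ts` also performs the retrieval step first.

```ts
// Hypothetical streaming route handler; not the project's actual code.
import Groq from 'groq-sdk';

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

export async function POST(req: Request) {
  const { messages } = await req.json();
  const completion = await groq.chat.completions.create({
    model: 'llama-3.3-70b-versatile',
    stream: true,
    messages,
  });

  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      // Forward each token to the client as soon as Groq emits it.
      for await (const chunk of completion) {
        controller.enqueue(encoder.encode(chunk.choices[0]?.delta?.content ?? ''));
      }
      controller.close();
    },
  });

  return new Response(stream, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' },
  });
}
```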
## Updating the Knowledge Base

When you add or modify documentation files:

1. **Manual update:**

   ```bash
   npm run update-index
   ```

2. **Automatic update (via GitHub Actions):**
   - Push changes to the `main` branch
   - GitHub Actions automatically re-indexes on changes to `content/docs/`
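Under the hood, re-indexing boils down to embedding each documentation chunk and upserting it into Pinecone. The sketch below shows that step; the chunking, ids, metadata shape, and `docs` index name are illustrative assumptions, not necessarily what `scripts/ingest.js` does.

```ts
// Rough sketch of one ingest step: embed a chunk and upsert it to Pinecone.
// Ids, metadata shape, and the 'docs' index name are illustrative only.
import { pipeline } from '@xenova/transformers';
import { Pinecone } from '@pinecone-database/pinecone';

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

export async function indexChunk(id: string, text: string, url: string) {
  const embed = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
  const vector = await embed(text, { pooling: 'mean', normalize: true });

  // Store the raw text and source URL as metadata so the chat API can
  // build its answer context and reference links from query matches.
  await pinecone.index('docs').upsert([
    {
      id,
      values: Array.from(vector.data as Float32Array),
      metadata: { text, url },
    },
  ]);
}
```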
## GitHub Actions Setup

Create `.github/workflows/update-index.yml`:

```yaml
name: Update Knowledge Base

on:
  push:
    branches:
      - main
      - master
    paths:
      - 'content/docs/**'

jobs:
  update-index:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '18'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Update Index
        run: npm run update-index
        env:
          PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
```

Then configure the secret:

1. Go to your GitHub repository **Settings**
2. Navigate to **Secrets and variables > Actions**
3. Add a repository secret: `PINECONE_API_KEY`
4. Push changes to trigger the workflow
## Available Scripts

| Command | Description |
|---|---|
| `npm run dev` | Start development server |
| `npm run build` | Build for production |
| `npm start` | Start production server |
| `npm run update-index` | Re-index documentation to Pinecone |
| `npm run types:check` | Run TypeScript type checking |
## Adding New Documentation

1. Create a new `.mdx` file in `content/docs/`
2. Add frontmatter:

   ```mdx
   ---
   title: Your Page Title
   description: Page description
   ---
   ```

3. Run `npm run update-index` to make it searchable by AI
## Customizing AI Behavior

Edit `app/api/chat/route.ts` (around line 45) to change how the AI responds.
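The key lever is the system prompt that wraps the retrieved context. Here is an illustrative example of such a prompt builder; the actual wording in `route.ts` may differ.

```ts
// Illustrative system prompt of the kind app/api/chat/route.ts assembles;
// the exact wording in the project may differ.
function buildSystemPrompt(context: string): string {
  return [
    'You are a documentation assistant.',
    'Answer ONLY using the context below.',
    "If the answer is not in the context, say you don't know.",
    '',
    'Context:',
    context,
  ].join('\n');
}
```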
## Configuration Files

- `source.config.ts`: MDX and frontmatter schema configuration
- `next.config.mjs`: Next.js configuration
- `tsconfig.json`: TypeScript configuration
- `tailwind.config.ts`: Tailwind CSS configuration
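For example, a typical Fumadocs `source.config.ts` looks roughly like the sketch below; the extra `tags` field is an assumption for illustration, not this project's actual schema.

```ts
// Sketch of a typical Fumadocs source.config.ts; this project's actual
// configuration may differ. The `tags` field is hypothetical.
import { defineConfig, defineDocs, frontmatterSchema } from 'fumadocs-mdx/config';
import { z } from 'zod';

export const docs = defineDocs({
  dir: 'content/docs',
  docs: {
    // Extend the default frontmatter (title, description) with custom fields.
    schema: frontmatterSchema.extend({
      tags: z.array(z.string()).optional(),
    }),
  },
});

export default defineConfig();
```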
## Contributing

When contributing:

1. Add or update documentation in `content/docs/`
2. Run `npm run update-index` locally to test AI responses
3. Ensure the AI provides accurate answers before pushing