Open-source self-hostable chat client for Nano-GPT.
Test it out at nanochat.app
Get 25 free daily prompts with any Nano-GPT subscription model, no API key required
Get the native NanoChat experience on your devices:
- Android: nanochat-android
- iOS: nanochat-ios (TestFlight Beta)
- Desktop: nanochat-desktop (Linux only for now)
- Convex -> SQLite + Drizzle
- Docker + Docker Compose
- Yarn -> Bun
- Openrouter -> Nano-GPT (nano-gpt.com)
- Theme inspired by T3 Chat
- Nano-GPT Web Search / Deep Search (Linkup / Tavily / Exa / Kagi)
- Nano-GPT Web Scraping when you enter a URL (adds to context)
- Nano-GPT Context Memory (Single Chat)
- Cross-Conversation Memory (All Chats)
- Nano-GPT Image Generation + img2img support
- Nano-GPT Speech-to-Text (Whisper/Wizper/ElevenLabs)
- Passkey support (requires HTTPS)
- Nano-GPT Video Generation
- Selectable System Prompts (Assistants)
- KaraKeep Integration (Thanks to jcrabapple)
- Nano-GPT YouTube Transcripts (Thanks to thejudge22)
- Follow-up Questions - Contextual follow-up question suggestions generated by an LLM after each response
- Configurable System Themes
- Model Performance Tracking and Analytics
- Projects
- Benchmark Data from artificialanalysis.ai API
- Provider Selection for Models (NanoGPT)
- Clone the repository:

```bash
git clone https://github.com/nanogpt-community/nanochat.git
cd nanochat
cp .env.example .env
```

- Edit the `.env` file with your configuration
- Start the stack:

```bash
docker compose up
```
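As a rough sketch of what the `.env` configuration might look like (all values below are placeholders; the variable names come from the environment-variables table further down, and your `.env.example` may include more settings):

```shell
# Minimal .env sketch -- replace every value before use
DATABASE_URL=./data/nanochat.db
NANOGPT_API_KEY=your-nano-gpt-api-key
BETTER_AUTH_SECRET=change-me-to-a-long-random-string
BETTER_AUTH_URL=http://localhost:3000
ENCRYPTION_KEY=change-me-generate-with-openssl-rand-base64-32
```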
- Install Bun (https://bun.sh/)
- Clone the repository:

```bash
git clone https://github.com/nanogpt-community/nanochat.git
cd nanochat
cp .env.example .env
```

- Edit the `.env` file with your configuration
- Install dependencies and start the dev server:

```bash
bun install
bun run dev
```

- Run `npx drizzle-kit push` to upgrade your database schema when new features are added!
If you use nginx, make sure your server block contains the following:

```nginx
proxy_buffer_size 256k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
client_max_body_size 50M;
```
The follow-up questions feature automatically generates 2-3 contextual questions after each AI response. Key details:
- Generation: Uses the `zai-org/GLM-4.5-Air` model via Nano-GPT
- Display: Shows 1 second after message generation completes
- Persistence: Suggestions are stored in the database and shown when loading historical conversations
- User Control: Can be toggled on/off in Account Settings
- Length Check: Only generates for assistant messages over 100 characters
- Interaction: Clicking a suggestion inserts it into the input field
- Cleanup: Suggestions are hidden when user sends a new message
Multiple search providers available:
- Linkup - Standard and deep web search
- Tavily - Optimized for AI applications
- Exa - Neural search engine
- Kagi - Premium search results
Automatically fetch and transcribe YouTube videos when URLs are detected in user messages. Costs $0.01 per transcript.
- Context Memory: Compresses long conversations within a single chat for better context retention
- Persistent Memory: Remembers facts about the user across different conversations
Create custom system prompts with:
- Custom name and instructions
- Default model selection
- Default web search mode
- Web search provider selection
Generate images using Nano-GPT's image models with support for:
- Text-to-image
- Image-to-image (img2img)
Listen to assistant messages read aloud using a variety of models:
- Models: OpenAI (TTS-1, HD), Kokoro (Multilingual), ElevenLabs (Premium)
- Controls: Play/Stop, Speed Control (0.25x - 4.0x)
- Cost Efficient: Supports ultra-low cost models like GPT-4o Mini TTS ($0.0006/1k)
Transcribe voice messages using:
- Models: Whisper Large V3 (OpenAI), Wizper (Fast), ElevenLabs
- Usage: Click the microphone icon in the chat input
- Analytics: Usage and costs are tracked in Model Analytics
Save conversations as bookmarks to your KaraKeep instance for long-term storage and organization.
View performance benchmarks from Artificial Analysis directly in the model picker:
- For LLMs: Intelligence Index, Coding Index, Math Index, and Speed (tokens/sec)
- For Image Models: ELO rating and Rank
- Benchmarks appear in the model info panel (click the info icon on any model)
- Requires the `ARTIFICIAL_ANALYSIS_API_KEY` environment variable
For models supported by multiple providers on NanoGPT, you can:
- Select a specific provider (e.g., 'openai', 'anthropic', 'google') for a model.
- Configure preferred and excluded providers in Account Settings.
- Enable automatic fallback to other providers if the preferred one fails.
Generate videos using NanoGPT's video models:
- Text-to-video generation
- View generation status and history
- Download generated videos
You can use URL parameters to pre-configure your chat session. This is useful for creating bookmarks or "bang" style shortcuts (e.g. in your browser).
| Parameter | Description | Example |
|---|---|---|
| `q` | Pre-fills the chat input | `?q=Explain quantum physics` |
| `model` | Selects the AI model | `?model=zai-org/glm-4.7` |
| `model_provider` | Selects the provider for the model, or clears to auto with `auto` | `?model_provider=cerebras` |
| `search` | Sets web search mode (`off`, `standard`, `deep`) | `?search=deep` |
| `search_provider` | Sets search provider (`linkup`, `tavily`, `exa`, `kagi`, `perplexity`, `valyu`, Brave modes) | `?search_provider=brave-research` |
| `search_context_size` | Sets shared search context size (`low`, `medium`, `high`) | `?search_context_size=high` |
| `search_exa_depth` | Sets Exa depth (`fast`, `auto`, `neural`, `deep`) | `?search_exa_depth=neural` |
| `search_kagi_source` | Sets Kagi source (`web`, `news`, `search`) | `?search_kagi_source=news` |
| `search_valyu_search_type` | Sets Valyu search type (`all`, `web`) | `?search_valyu_search_type=web` |
| `projectId` | Contextualizes chat with a specific Project | `?projectId=123` |
Example "Bang" URL:

```
https://nanochat.app/chat?model=zai-org/glm-5.1&search=deep&q=%s
```
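The same kind of URL can be assembled in a script. A minimal sketch, using the parameters described above (the query value here is pre-encoded by hand for simplicity):

```shell
# Compose a pre-configured chat URL from its parts (a sketch)
BASE="https://nanochat.app/chat"
QUERY="Explain%20quantum%20physics"   # already URL-encoded
URL="${BASE}?model=zai-org/glm-4.7&search=deep&q=${QUERY}"
echo "$URL"
```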
In-Chat Shortcuts:
- `@<rule_name>`: Apply a specific user rule to the current message (e.g., `@concise`)
| Variable | Description |
|---|---|
| `DATABASE_URL` | SQLite database path (default: `./data/nanochat.db`) |
| `NANOGPT_API_KEY` | Nano-GPT API key for generation |
| `BETTER_AUTH_SECRET` | Authentication secret |
| `BETTER_AUTH_URL` | Base URL for authentication |
| `ARTIFICIAL_ANALYSIS_API_KEY` | (Optional) API key for model benchmarks from artificialanalysis.ai |
| `API_KEY_HASH_SECRET` | (Optional) Dedicated secret for developer API key lookup hashes; defaults to `ENCRYPTION_KEY` or `BETTER_AUTH_SECRET` |
| `ENCRYPTION_KEY` | Encryption key for API keys and other stored secrets at rest. Generate with `openssl rand -base64 32` |
The application supports encrypting API keys stored in the database using AES-256-GCM:
- Required for new secrets: `ENCRYPTION_KEY` must be set before creating developer API keys, provider keys, or other stored secrets
- At rest: Secrets are encrypted with AES-256-GCM
- Developer API auth: API keys are also indexed with a non-reversible lookup hash, so requests no longer require decrypting the entire key table
- Schema update: Run `npx drizzle-kit push` after upgrading so the `api_keys.key_hash` column exists
- Migration: Run `bun run scripts/migrate-encrypt-api-keys.ts` to encrypt existing keys
- Details: See `scripts/README-API-KEY-ENCRYPTION.md`
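Enabling encryption on an existing install could look something like the following sketch, which adds an `ENCRYPTION_KEY` to `.env` only if one is not already present (the `openssl rand -base64 32` command matches the environment-variables table above):

```shell
# Add ENCRYPTION_KEY to .env only if it is not already set (a sketch)
touch .env
grep -q '^ENCRYPTION_KEY=' .env || \
  echo "ENCRYPTION_KEY=$(openssl rand -base64 32)" >> .env
```

After that, run `npx drizzle-kit push` and `bun run scripts/migrate-encrypt-api-keys.ts` as described above.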
The application uses SQLite with Drizzle ORM. Key tables:
- messages: Stores chat messages with content, role, annotations, and follow-up suggestions
- user_settings: User preferences including follow-up questions toggle
- conversations: Chat sessions with metadata
- assistants: Custom system prompts
- user_memories: Persistent cross-conversation memory
- Frontend: SvelteKit + Svelte 5
- Styling: Tailwind CSS
- Database: SQLite + Drizzle ORM
- Auth: Better Auth
- AI Provider: Nano-GPT (nano-gpt.com)
- Runtime: Bun
- Container: Docker
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
MIT License - See LICENSE file for details.