nanochat

Open-source, self-hostable chat client for Nano-GPT.

Try it out at nanochat.app

Get 25 free daily prompts with any Nano-GPT subscription model, no API key required

Native Apps

Get the native NanoChat experience on your devices.


Changes in this fork

  • Convex -> SQLite + Drizzle
  • Docker + Docker Compose
  • Yarn -> Bun
  • OpenRouter -> Nano-GPT (nano-gpt.com)
  • Theme inspired by T3 Chat
  • Nano-GPT Web Search / Deep Search (Linkup / Tavily / Exa / Kagi)
  • Nano-GPT Web Scraping when you enter a URL (adds to context)
  • Nano-GPT Context Memory (Single Chat)
  • Cross-Conversation Memory (All Chats)
  • Nano-GPT Image Generation + img2img support
  • Nano-GPT Speech-to-Text (Whisper/Wizper/ElevenLabs)
  • Passkey support (requires HTTPS)
  • Nano-GPT Video Generation
  • Selectable System Prompts (Assistants)
  • KaraKeep Integration (Thanks to jcrabapple)
  • Nano-GPT YouTube Transcripts (Thanks to thejudge22)
  • Follow-up Questions - contextual follow-up suggestions generated by an LLM after each response
  • Configurable System Themes
  • Model Performance Tracking and Analytics
  • Projects
  • Benchmark Data from the artificialanalysis.ai API
  • Provider Selection for Models (NanoGPT)

Setup (Docker)

Installation

  • Clone the repository: git clone https://github.com/nanogpt-community/nanochat.git
  • cd nanochat
  • cp .env.example .env
  • Edit the .env file with your configuration
  • docker compose up

Setup (Bun)

Installation

  • Install Bun (https://bun.sh/)
  • Clone the repository: git clone https://github.com/nanogpt-community/nanochat.git
  • cd nanochat
  • cp .env.example .env
  • Edit the .env file with your configuration
  • bun install
  • bun run dev
  • Run npx drizzle-kit push to upgrade your database schema whenever new features are added

Nginx

If you use nginx, make sure your server block includes the following:

proxy_buffer_size 256k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
client_max_body_size 50M;

Features Overview

Follow-up Questions

The follow-up questions feature automatically generates 2-3 contextual questions after each AI response. Key details:

  • Generation: Uses zai-org/GLM-4.5-Air model via Nano-GPT
  • Display: Shows 1 second after message generation completes
  • Persistence: Suggestions are stored in the database and shown when loading historical conversations
  • User Control: Can be toggled on/off in Account Settings
  • Length Check: Only generates for assistant messages over 100 characters
  • Interaction: Clicking a suggestion inserts it into the input field
  • Cleanup: Suggestions are hidden when user sends a new message
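The toggle and length gate described above can be sketched as a small predicate (names here are illustrative, not the app's actual code):

```typescript
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// Only generate suggestions when the feature is enabled and the message
// is an assistant reply longer than 100 characters.
function shouldGenerateFollowUps(msg: ChatMessage, enabled: boolean): boolean {
  return enabled && msg.role === "assistant" && msg.content.length > 100;
}
```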

Web Search

Multiple search providers available:

  • Linkup - Standard and deep web search
  • Tavily - Optimized for AI applications
  • Exa - Neural search engine
  • Kagi - Premium search results

YouTube Transcripts

Automatically fetch and transcribe YouTube videos when URLs are detected in user messages. Costs $0.01 per transcript.

Memory Systems

  • Context Memory: Compresses long conversations within a single chat for better context retention
  • Persistent Memory: Remembers facts about the user across different conversations

Assistants

Create custom system prompts with:

  • Custom name and instructions
  • Default model selection
  • Default web search mode
  • Web search provider selection

Image Generation

Generate images using Nano-GPT's image models with support for:

  • Text-to-image
  • Image-to-image (img2img)

Text-to-Speech

Listen to assistant messages read aloud using a variety of models:

  • Models: OpenAI (TTS-1, HD), Kokoro (Multilingual), ElevenLabs (Premium)
  • Controls: Play/Stop, Speed Control (0.25x - 4.0x)
  • Cost Efficient: Supports ultra-low cost models like GPT-4o Mini TTS ($0.0006/1k)

Speech-to-Text

Transcribe voice messages using:

  • Models: Whisper Large V3 (OpenAI), Wizper (Fast), ElevenLabs
  • Usage: Click the microphone icon in the chat input
  • Analytics: Usage and costs are tracked in Model Analytics

KaraKeep Integration

Save conversations as bookmarks to your KaraKeep instance for long-term storage and organization.

Model Benchmarks

View performance benchmarks from Artificial Analysis directly in the model picker:

  • For LLMs: Intelligence Index, Coding Index, Math Index, and Speed (tokens/sec)
  • For Image Models: ELO rating and Rank
  • Benchmarks appear in the model info panel (click the info icon on any model)
  • Requires ARTIFICIAL_ANALYSIS_API_KEY environment variable

Provider Selection

For models supported by multiple providers on NanoGPT, you can:

  • Select a specific provider (e.g., 'openai', 'anthropic', 'google') for a model.
  • Configure preferred and excluded providers in Account Settings.
  • Enable automatic fallback to other providers if the preferred one fails.

Video Generation

Generate videos using NanoGPT's video models:

  • Text-to-video generation
  • View generation status and history
  • Download generated videos

URL Parameters & Shortcuts

You can use URL parameters to pre-configure your chat session. This is useful for creating bookmarks or "bang" style shortcuts (e.g. in your browser).

  • q: Pre-fills the chat input. Example: ?q=Explain quantum physics
  • model: Selects the AI model. Example: ?model=zai-org/glm-4.7
  • model_provider: Selects the provider for the model; pass auto to reset to automatic selection. Example: ?model_provider=cerebras
  • search: Sets the web search mode (off, standard, deep). Example: ?search=deep
  • search_provider: Sets the search provider (linkup, tavily, exa, kagi, perplexity, valyu, Brave modes). Example: ?search_provider=brave-research
  • search_context_size: Sets the shared search context size (low, medium, high). Example: ?search_context_size=high
  • search_exa_depth: Sets the Exa depth (fast, auto, neural, deep). Example: ?search_exa_depth=neural
  • search_kagi_source: Sets the Kagi source (web, news, search). Example: ?search_kagi_source=news
  • search_valyu_search_type: Sets the Valyu search type (all, web). Example: ?search_valyu_search_type=web
  • projectId: Contextualizes the chat with a specific Project. Example: ?projectId=123

Example "Bang" URL: https://nanochat.app/chat?model=zai-org/glm-5.1&search=deep&q=%s
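Such a URL can also be assembled programmatically with the standard URL API; buildChatUrl below is a hypothetical helper, with parameter names taken from the table above:

```typescript
// Build a pre-configured chat URL from the documented query parameters.
// buildChatUrl is an illustrative helper, not part of the app's code.
function buildChatUrl(base: string, params: Record<string, string>): string {
  const url = new URL("/chat", base);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

const link = buildChatUrl("https://nanochat.app", {
  model: "zai-org/glm-4.7",
  search: "deep",
  q: "Explain quantum physics",
});
```

URLSearchParams handles the percent-encoding of model slugs and query text for you.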

In-Chat Shortcuts:

  • @<rule_name>: Apply a specific user rule to the current message (e.g., @concise)
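A rough sketch of how such @-mentions could be parsed out of a message (the app's actual parser may differ):

```typescript
// Extract @<rule_name> mentions (e.g. "@concise") from a message and
// return the rule names plus the message with the mentions removed.
function parseRuleMentions(input: string): { rules: string[]; text: string } {
  const rules: string[] = [];
  const text = input
    .replace(/@([A-Za-z0-9_-]+)/g, (_m, name: string) => {
      rules.push(name);
      return "";
    })
    .replace(/\s+/g, " ")
    .trim();
  return { rules, text };
}
```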

Environment Variables

  • DATABASE_URL: SQLite database path (default: ./data/nanochat.db)
  • NANOGPT_API_KEY: Nano-GPT API key used for generation
  • BETTER_AUTH_SECRET: Authentication secret
  • BETTER_AUTH_URL: Base URL for authentication
  • ARTIFICIAL_ANALYSIS_API_KEY: (optional) API key for model benchmarks from artificialanalysis.ai
  • API_KEY_HASH_SECRET: (optional) Dedicated secret for developer API key lookup hashes; defaults to ENCRYPTION_KEY or BETTER_AUTH_SECRET
  • ENCRYPTION_KEY: Encryption key for API keys and other secrets stored at rest. Generate one with openssl rand -base64 32
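For reference, a minimal .env using these variables might look like the following; all values are placeholders, and the URL/port depend on your deployment:

```
# Minimal example - all values are placeholders
DATABASE_URL=./data/nanochat.db
NANOGPT_API_KEY=replace-with-your-nano-gpt-key
BETTER_AUTH_SECRET=replace-with-a-long-random-string
BETTER_AUTH_URL=http://localhost:3000
# Generate with: openssl rand -base64 32
ENCRYPTION_KEY=replace-with-a-generated-key
```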

API Key Encryption

The application supports encrypting API keys stored in the database using AES-256-GCM:

  • Required for new secrets: ENCRYPTION_KEY must be set before creating developer API keys, provider keys, or other stored secrets
  • At rest: Secrets are encrypted with AES-256-GCM
  • Developer API auth: API keys are also indexed with a non-reversible lookup hash so requests no longer require decrypting the entire key table
  • Schema update: Run npx drizzle-kit push after upgrading so the api_keys.key_hash column exists
  • Migration: Run bun run scripts/migrate-encrypt-api-keys.ts to encrypt existing keys
  • Details: See scripts/README-API-KEY-ENCRYPTION.md
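For illustration only (a sketch, not the application's actual implementation), AES-256-GCM encryption at rest plus an HMAC-based lookup hash can be written with Node's built-in crypto module like this:

```typescript
import { randomBytes, createCipheriv, createDecipheriv, createHmac } from "node:crypto";

// Encrypt a secret with AES-256-GCM; packs iv + auth tag + ciphertext
// into one base64 string. `key` must be 32 bytes (e.g. the decoded
// output of `openssl rand -base64 32`).
function encryptSecret(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
}

function decryptSecret(payload: string, key: Buffer): string {
  const buf = Buffer.from(payload, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28); // GCM auth tag is 16 bytes
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

// Non-reversible lookup hash (HMAC-SHA-256) so an incoming API key can be
// matched against the table without decrypting every stored key.
function lookupHash(apiKey: string, secret: string): string {
  return createHmac("sha256", secret).update(apiKey).digest("hex");
}
```

Because the lookup hash is deterministic for a given secret, it can be stored in an indexed column (like api_keys.key_hash) and queried directly.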

Database Schema

The application uses SQLite with Drizzle ORM. Key tables:

  • messages: Stores chat messages with content, role, annotations, and follow-up suggestions
  • user_settings: User preferences including follow-up questions toggle
  • conversations: Chat sessions with metadata
  • assistants: Custom system prompts
  • user_memories: Persistent cross-conversation memory

Tech Stack

  • Frontend: SvelteKit + Svelte 5
  • Styling: Tailwind CSS
  • Database: SQLite + Drizzle ORM
  • Auth: Better Auth
  • AI Provider: Nano-GPT (nano-gpt.com)
  • Runtime: Bun
  • Container: Docker

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a pull request

License

MIT License - See LICENSE file for details.

About

Open-source T3 chat alternative for nano-gpt.com
