This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
ChatGPUI is a GPU-accelerated native LLM chat client built with GPUI (the rendering engine from the Zed editor). It supports multiple LLM providers with streaming responses, conversation persistence, and image attachments.
```bash
# Development
cargo run                # Run in dev mode
mise run dev             # Alternative via mise

# Build
cargo build --release    # Release build
mise run build           # Alternative via mise

# Code Quality
cargo fmt                # Format code
cargo clippy             # Lint check

# Database Cleanup
mise run clean           # Clean PostgreSQL processes and shared memory
mise run clean-all       # Delete all data including conversations
```

Toolchain: Rust nightly (nightly-2026-01-18), Edition 2024
```
src/
├── main.rs              # Entry point, menu bar, window creation
├── app.rs               # ChatApp - main application component
├── chat/                # Chat functionality module
│   ├── view.rs          # ChatView - main chat container
│   ├── sidebar.rs       # ChatSidebar - conversation history
│   ├── message_list.rs  # MessageList - virtual list of messages
│   ├── message_input.rs # MessageInput - input with attachments
│   ├── message.rs       # Message data structures
│   └── scroll_manager.rs
├── model_selector.rs    # LLM provider/model picker
├── icons/               # Icon assets and helpers
│   ├── app_icon.rs      # AppIcon enum
│   ├── llm_provider.rs  # LlmProvider icon mapping
│   └── language.rs      # Programming language icons
├── settings/            # Settings management
│   ├── state.rs         # Settings state and persistence
│   ├── view.rs          # Settings window UI
│   └── provider.rs      # Provider configuration
├── windows/             # Standalone windows
│   └── about.rs         # About window
├── llm/                 # LLM provider implementations
├── database/            # Database layer
└── components/          # Shared UI components
```
Components communicate via GPUI's EventEmitter pattern:
- `ModelSelectorChangedEvent` → ChatView reloads the LLM client
- `ConversationSelectedEvent` → ChatView loads the selected conversation
- `ConversationUpdatedEvent` → Sidebar refreshes the conversation list
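As a rough mental model, the event → reaction mapping above can be sketched in plain Rust. The event payloads here are illustrative assumptions, and the routing function stands in for GPUI's emit/subscribe machinery, which the real components use:

```rust
/// The three app-level events listed above. Payload fields are
/// illustrative assumptions; the real event structs live in the
/// components that emit them.
enum AppEvent {
    ModelSelectorChanged,
    ConversationSelected { conversation_id: i64 },
    ConversationUpdated { conversation_id: i64 },
}

/// Pure routing table mirroring the event → reaction mapping above.
/// In the real app, subscribers registered via GPUI react instead.
fn route_event(event: &AppEvent) -> &'static str {
    match event {
        AppEvent::ModelSelectorChanged => "ChatView reloads LLM client",
        AppEvent::ConversationSelected { .. } => "ChatView loads conversation",
        AppEvent::ConversationUpdated { .. } => "Sidebar refreshes list",
    }
}
```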
Providers implement the `LlmProvider` trait in `src/llm/`:

- `stream_chat()` - Streaming chat completion
- `fetch_models()` - Dynamic model list (optional)
- `models()` - Static model fallback
Supported: Anthropic, OpenAI, Google AI. Implementation pattern: reqwest streaming → async channel → GPUI update loop.
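A hedged sketch of what the trait shape might look like, given the three methods above. The real signatures in `src/llm/` are presumably async and stream through channels into GPUI's update loop; that is simplified here to a per-chunk callback, and `Model`, `ChatMessage`, and `EchoProvider` are illustrative names, not the repo's actual types:

```rust
// Illustrative stand-ins for the provider's data types.
struct Model { id: String, display_name: String }
struct ChatMessage { role: String, content: String }

/// Simplified sketch of the LlmProvider trait described above.
trait LlmProvider {
    /// Static model fallback.
    fn models(&self) -> Vec<Model>;

    /// Dynamic model list (optional); defaults to the static fallback.
    fn fetch_models(&self) -> Vec<Model> {
        self.models()
    }

    /// Streaming chat completion, sketched as a per-chunk callback in
    /// place of the real reqwest stream → async channel pipeline.
    fn stream_chat(&self, messages: &[ChatMessage], on_chunk: &mut dyn FnMut(&str));
}

/// Toy provider that "streams" the last message back word by word.
struct EchoProvider;

impl LlmProvider for EchoProvider {
    fn models(&self) -> Vec<Model> {
        vec![Model { id: "echo-1".into(), display_name: "Echo".into() }]
    }

    fn stream_chat(&self, messages: &[ChatMessage], on_chunk: &mut dyn FnMut(&str)) {
        if let Some(last) = messages.last() {
            for word in last.content.split_whitespace() {
                on_chunk(word);
            }
        }
    }
}
```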
- Embedded PostgreSQL via `postgresql_embedded`
- ORM: SeaORM with entities in `crates/entity/`
- Migrations: `crates/migration/`
- Global Service: `DatabaseService` (GPUI `Global` trait)
Tables: `conversations`, `messages`, `attachments`
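The three tables and their likely relations can be mirrored with plain structs for illustration. The actual SeaORM entities live in `crates/entity/` and will carry more columns; the field names below are assumptions:

```rust
// Plain-Rust mirror of the assumed schema: messages belong to a
// conversation, attachments belong to a message.
struct Conversation { id: i64, title: String }
struct Message { id: i64, conversation_id: i64, content: String }
struct Attachment { id: i64, message_id: i64, file_path: String }

/// Filter the messages belonging to one conversation, the same shape
/// of query the app would run through SeaORM.
fn messages_for<'a>(conversation: &Conversation, messages: &'a [Message]) -> Vec<&'a Message> {
    messages
        .iter()
        .filter(|m| m.conversation_id == conversation.id)
        .collect()
}
```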
Workspace crates:

| Crate | Purpose |
|---|---|
| `entity` | SeaORM database entities |
| `migration` | Database migrations |
- Virtual Lists: MessageList and ChatSidebar use `v_virtual_list` for large datasets
- Streaming Debounce: 50ms debounce for LLM streaming updates
- Measurement Caching: Height estimates cached, significant changes (>100px) trigger remeasure
- Debug Optimization: shadow-rs disabled in debug builds for faster incremental compilation
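The 50ms streaming debounce noted above can be sketched as a small accumulator that only flushes buffered chunks once 50ms have elapsed since the last flush. This is a simplified stand-in; the real implementation presumably runs inside GPUI's update loop, and the exact flush policy is an assumption:

```rust
use std::time::{Duration, Instant};

/// Leading-edge debouncer for streamed text: chunks accumulate in
/// `pending`, and a flush is released at most once per `interval`.
struct Debouncer {
    interval: Duration,
    last_flush: Option<Instant>,
    pending: String,
}

impl Debouncer {
    fn new(interval: Duration) -> Self {
        Self { interval, last_flush: None, pending: String::new() }
    }

    /// Buffer a chunk; return the accumulated text if enough time has
    /// passed since the last flush, otherwise hold it for later.
    fn push(&mut self, chunk: &str, now: Instant) -> Option<String> {
        self.pending.push_str(chunk);
        match self.last_flush {
            Some(t) if now.duration_since(t) < self.interval => None,
            _ => {
                self.last_flush = Some(now);
                Some(std::mem::take(&mut self.pending))
            }
        }
    }
}
```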
- License Header: All files include SPDX headers (AGPL-3.0-only OR LicenseRef-Commercial)
- i18n: Default locale `zh-CN`, translations in `locales/`
- Assets: Embedded via rust-embed, icons in `assets/icons/`
- Icon Source: MGC Icon System Pro v1.40, use the `light` variant (`/Users/Shiro/Developer/MGC Icon System Pro v1.40/SVG/light/`)
To add a new LLM provider:

- Create `src/llm/{provider_name}.rs` implementing `LlmProvider`
- Add it to the factory function in `src/llm/mod.rs`
- Add provider configuration to `src/settings/provider.rs`
- Add an icon to `assets/icons/llm_provider/` and register it in `src/icons/llm_provider.rs`
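The factory-function step can be illustrated with a minimal sketch; `create_provider`, the trait body, and the provider ids here are assumed names for illustration, not the repo's actual code in `src/llm/mod.rs`:

```rust
// Minimal trait stand-in so the sketch is self-contained.
trait LlmProvider { fn name(&self) -> &'static str; }

struct Anthropic;
struct OpenAi;

impl LlmProvider for Anthropic { fn name(&self) -> &'static str { "anthropic" } }
impl LlmProvider for OpenAi { fn name(&self) -> &'static str { "openai" } }

/// Hypothetical factory: a new provider is registered by adding one
/// more match arm mapping its id to a constructor.
fn create_provider(id: &str) -> Option<Box<dyn LlmProvider>> {
    match id {
        "anthropic" => Some(Box::new(Anthropic)),
        "openai" => Some(Box::new(OpenAi)),
        // New providers add a match arm here.
        _ => None,
    }
}
```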
Key dependencies:

- `gpui` / `gpui-component` - UI framework and components (git dependencies)
- `gpui-tokio-bridge` - Bridges the Tokio async runtime to GPUI
- `sea-orm` - Database ORM