🦙 llama.ui - Minimal Interface for Local AI Companion ✨

Tired of complex AI setups? 😩 llama.ui is an open-source desktop application that provides a beautiful ✨, user-friendly interface for interacting with large language models (LLMs) powered by llama.cpp. Designed for simplicity and privacy 🔒, this project lets you chat with powerful quantized models on your local machine - no cloud required! 🚫☁️

⚡ TL;DR

This repository is a fork of llama.cpp WebUI with:

  • Fresh new styles 🎨
  • Extra functionality ⚙️
  • Smoother experience ✨

welcome-screen

🌟 Key Features

  1. Multi-Provider Support: Works with llama.cpp, LM Studio, Ollama, vLLM, OpenAI, and many more!

  2. Conversation Management:

    • IndexedDB storage for conversations
    • Branching conversation support (edit messages while preserving history)
    • Import/export functionality
  3. Rich UI Components:

    • Markdown rendering with syntax highlighting
    • LaTeX math support
    • File attachments (text, images, PDFs)
    • Theme customization with DaisyUI themes
    • Responsive design for mobile and desktop
  4. Advanced Features:

    • PWA support with offline capabilities
    • Streaming responses with Server-Sent Events (see the example after this list)
    • Customizable generation parameters
    • Performance metrics display
  5. Privacy Focused: All data is stored locally in your browser - no cloud required!

  6. Localized Interface: The most popular language packs are bundled with the app, and you can switch languages at any time.
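
Curious what "streaming responses" look like on the wire? Chunks arrive as Server-Sent Events from the provider's OpenAI-compatible chat endpoint. Here is a minimal sketch against a local llama.cpp server (the endpoint and payload follow the OpenAI chat-completions format; llama.cpp serves whichever model it was started with, so no model field is needed here, while other providers may require one):

curl -N http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}], "stream": true}'
# each chunk arrives as a `data: {...}` line carrying a token delta, terminated by `data: [DONE]`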

🚀 Getting Started in 60 Seconds!

💻 Standalone Mode (Zero Installation)

  1. ✨ Open our hosted UI instance
  2. βš™οΈ Click the gear icon β†’ General settings
  3. 🌐 Set "Base URL" to your local llama.cpp server (e.g. http://localhost:8080)
  4. πŸŽ‰ Start chatting with your AI!
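
Not sure the server is actually up? A quick sanity check from the terminal (this assumes a reasonably recent llama.cpp build, which exposes a /health endpoint):

curl http://localhost:8080/health   # should report an "ok" status once the model is loaded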
🔧 Need HTTPS magic for your local instance? Try this mitmproxy hack!

Uh-oh! Browsers block plain-HTTP requests from pages served over HTTPS (mixed content) 😀. Since llama.cpp serves plain HTTP, we need a bridge 🌉. Enter mitmproxy - our traffic wizard! 🧙‍♂️

Local setup:

mitmdump -p 8443 --mode reverse:http://localhost:8080/

Docker quickstart:

docker run -it -p 8443:8443 mitmproxy/mitmproxy mitmdump -p 8443 --mode reverse:http://localhost:8080/

Pro-tip with Docker Compose:

services:
  mitmproxy:
    container_name: mitmproxy
    image: mitmproxy/mitmproxy:latest
    ports:
      - '8443:8443' # Port magic happening here!
    command: mitmdump -p 8443 --mode reverse:http://localhost:8080/
    # ... (other config)

⚠️ Certificate Tango Time!

  1. Visit http://localhost:8443
  2. Click "Trust this certificate" 🤝
  3. Reload the 🦙 llama.ui page 🔄
  4. Profit! 💸

Voilà! You've hacked the HTTPS barrier! 🎩✨ You can confirm the proxy is forwarding with the quick check below.
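
Want proof it works? A quick check (the -k flag skips verification of mitmproxy's self-signed certificate; /health assumes a recent llama.cpp build):

curl -k https://localhost:8443/health   # should return the same "ok" status as the plain HTTP port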

🖥️ Full Local Installation (Power User Edition)

  1. 📦 Grab the latest release from our releases page
  2. 🗜️ Unpack the archive (feel that excitement! 🤩)
  3. ⚡ Fire up your llama.cpp server:

Linux/macOS:

./llama-server --host 0.0.0.0 \
         --port 8080 \
         --path "/path/to/llama.ui" \
         -m models/llama-2-7b.Q4_0.gguf \
         --ctx-size 4096

Windows:

llama-server ^
             --host 0.0.0.0 ^
             --port 8080 ^
             --path "C:\path\to\llama.ui" ^
             -m models\mistral-7b.Q4_K_M.gguf ^
             --ctx-size 4096
  4. 🌐 Visit http://localhost:8080 and meet your new AI buddy! 🤖❤️ (A quick model check is shown below.)
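
To double-check which model the server picked up, you can ask its OpenAI-compatible models endpoint (available in recent llama.cpp builds):

curl http://localhost:8080/v1/models   # lists the model(s) currently being served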

🌟 Join Our Awesome Community!

We're building something special together! 🚀

  • 🎯 PRs are welcome! (Seriously, we high-five every contribution! ✋)
  • 🐛 Bug squashing? Yes please! 🧯
  • 📚 Documentation heroes needed! 🦸
  • ✨ Make magic with your commits! (Follow Conventional Commits)

πŸ› οΈ Developer Wonderland

Prerequisites: Node.js with npm.

Build the future:

npm ci         # 📦 Grab dependencies
npm run build  # 🔨 Craft the magic
npm start      # 🎬 Launch dev server (http://localhost:5173) for live-coding bliss! 🔥

🧰 Preconfiguring Defaults

Planning to redistribute the app with opinionated settings out of the box? Any JSON under src/config is baked into immutable defaults at build time (see src/config/index.ts).

If those baked defaults include a non-empty baseUrl, the inference server will auto-sync on first load so model metadata is fetched without requiring manual input.
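
For example, a redistributor could ship a file such as src/config/defaults.json (the filename is just an illustration; any JSON in that folder is picked up) containing only the fields to pin:

{
  "baseUrl": "http://localhost:8080"
}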

πŸ—οΈ Architecture

Core Technologies

Key Components

  1. App Context: Manages global configuration and settings
  2. Inference Context: Handles API communication with inference providers
  3. Message Context: Manages conversation state and message generation
  4. Storage Utils: IndexedDB operations and localStorage management
  5. Inference API: HTTP client for communicating with inference servers

📜 License - Freedom First!

llama.ui is proudly MIT licensed - go build amazing things! 🚀 See LICENSE for details.


Made with ❤️ and ☕ by humans who believe in private AI