Releases: PurpleDoubleD/locally-uncensored

v1.5.3 — Production .exe Fixed + Download Manager

02 Apr 00:42

What's New

Tauri .exe Production — Fully Working

  • CORS fix: All Ollama/ComfyUI calls now route through Rust proxy to bypass CORS blocking
  • Ollama fix: Added missing /api prefix — models, chat, and pull now work in .exe
  • CSP fix: Added ipc.localhost to allow Tauri IPC
  • ComfyUI auto-discovery: Deep scans home directory (depth 7), finds non-standard paths
  • ComfyUI path UI: Manual path input in Model Manager when auto-discovery fails

Download Manager

  • Pause/Resume: Pause downloads and resume later from where you left off (HTTP Range headers)
  • Cancel: Cancel downloads and clean up temp files
  • Progress tracking: Real-time speed, bytes downloaded, progress bar per file
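
The resume mechanics above boil down to a new request with a `Range` header starting at the bytes already on disk. A minimal sketch, with hypothetical helper names (not the app's actual API):

```typescript
// Illustrative sketch of Range-based pause/resume; names are assumptions.

// Build the Range header for resuming after `received` bytes.
function resumeRange(received: number): string {
  return `bytes=${received}-`;
}

// Progress percentage, guarding against an unknown total size.
function progressPct(received: number, total: number): number {
  return total > 0 ? Math.min(100, (received / total) * 100) : 0;
}

// Resume a download by requesting only the remaining bytes.
// 206 Partial Content means the server honored the range; a plain 200
// means it ignored the header and the transfer must restart from byte 0.
async function resumeDownload(url: string, received: number) {
  const res = await fetch(url, { headers: { Range: resumeRange(received) } });
  return { partial: res.status === 206, body: res.body };
}
```

A server without Range support just returns 200 with the full body, so the caller discards the partial temp file in that case.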

6 Complete Model Bundles (3 Image + 3 Video)

Every bundle is a complete package — "Install All" downloads everything needed to generate.

Image:

  • Juggernaut XL V9 (SDXL, 6.5 GB, 6-8 GB VRAM)
  • FLUX.1 schnell FP8 (16 GB total, 8-10 GB VRAM) — includes T5-XXL + CLIP_L
  • FLUX.1 dev FP8 (16 GB total, 8-10 GB VRAM) — includes T5-XXL + CLIP_L

Video:

  • Wan 2.1 1.3B (7.6 GB, 8-10 GB VRAM)
  • Wan 2.1 14B FP8 (19.1 GB, 12+ GB VRAM)
  • HunyuanVideo 1.5 T2V FP8 (21.5 GB, 12+ GB VRAM) — NEW

Other Fixes

  • TTS auto-speak: Chat responses read aloud when TTS enabled
  • Diffusion models now show in Model Manager (not just checkpoints)
  • AI Agents tab enabled
  • Python discovery improved (AppData, Conda paths)
  • ComfyUI stdout/stderr deadlock fixed
  • All download URLs verified working (HTTP 200)

v1.5.2 — Fix: Desktop App Production Build

01 Apr 22:17

Critical Desktop Fix

The .exe was broken because of multiple cascading failures in the Tauri backend.

What was broken

  • Download progress stuck at 0% — spawned task had no access to shared state
  • ComfyUI process deadlocked — stdout/stderr pipes were never read, so the buffers filled and the process blocked
  • Version mismatch — Cargo.toml (1.2.1) vs tauri.conf.json (1.3.0) vs release (1.5.x)
  • Missing endpoint mapping — fetch_external commands not in dev-mode route table

What's fixed

  • Download progress works — Arc<Mutex<>> shared between spawned task and progress endpoint, updates every 500ms with speed indicator
  • ComfyUI doesn't deadlock — background threads drain stdout/stderr, Python binary validated before spawn
  • Versions synced — all at 1.5.2
  • Routing complete — fetch_external + fetch_external_bytes in both Tauri and dev endpoint maps
  • GitHub Actions — updated to tauri-action@v0.5

Full changelog since v1.4.0

  • Dynamic Workflow Builder (auto-detects 600+ ComfyUI nodes)
  • CivitAI Model Marketplace (search + one-click download)
  • Privacy hardening (local image proxy, no CDN, no analytics)
  • 5 critical bug fixes (CLIP fallback, BM25, agent image gen, whisper, chat history)
  • Dark mode redesign + light mode fixes
  • VRAM unload button
  • Default view = Model Manager

Built with Locally Uncensored + Claude Code

v1.5.1 — Fix: Tauri .exe Production Build

01 Apr 21:25

Critical Fix

The v1.5.0 .exe release was non-functional. This release fixes all production issues.

What was broken

  • CivitAI model search and workflow downloads failed (CORS — no server-side proxy in Tauri)
  • External model downloads failed (redirect handling missing)
  • Image thumbnails from CivitAI didn't load
  • All API calls using relative URLs (/civitai-api/...) only worked through Vite dev proxy

What's fixed

  • New Rust proxy commands: fetch_external (text) and fetch_external_bytes (binary) handle all external HTTP requests server-side in the Tauri backend
  • Unified routing: fetchExternal() / fetchExternalBytes() in frontend route to Tauri invoke() in production or Vite proxy in dev — same code, both work
  • CSP updated: CivitAI, HuggingFace, Ollama domains whitelisted for the .exe
  • Default view: App now opens on Model Manager instead of Chat — new users need models first
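
The unified routing above can be sketched as one async helper that branches on the runtime. Only the fetchExternal / fetch_external names come from these notes; the Tauri detection via a global and the dev proxy path are illustrative assumptions:

```typescript
// Hedged sketch of dev/prod routing; the real helper may detect Tauri
// and address the Vite proxy differently.

function isTauriEnv(): boolean {
  // Assumes the global Tauri API is exposed in the production WebView.
  return Boolean((globalThis as any).__TAURI__);
}

async function fetchExternal(url: string): Promise<string> {
  if (isTauriEnv()) {
    // Production .exe: the Rust command performs the HTTP request
    // server-side, so the WebView's CORS rules never apply.
    return (globalThis as any).__TAURI__.core.invoke("fetch_external", { url });
  }
  // Dev: hit the Vite dev-server proxy with a relative URL instead.
  const res = await fetch(`/proxy?url=${encodeURIComponent(url)}`);
  if (!res.ok) throw new Error(`proxy failed: ${res.status}`);
  return res.text();
}
```

Because both branches share one signature, callers never care which environment they run in.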

Also includes (from v1.5.0)

  • Dynamic Workflow Builder (auto-detects 600+ ComfyUI nodes)
  • CivitAI Model Marketplace
  • Privacy hardening (all thumbnails proxied locally in dev mode)
  • 5 critical bug fixes (CLIP fallback, BM25, agent image gen, whisper check, chat history)

Built with Locally Uncensored + Claude Code

v1.5.0 — Dynamic Workflows + Model Marketplace + Privacy

01 Apr 18:38

What's New in v1.5.0

Dynamic Workflow Builder

  • One builder to rule them all — replaces 4 hardcoded workflow builders with a single dynamic system
  • Queries ComfyUI's 600+ available nodes via /object_info and auto-constructs the optimal pipeline
  • Auto-detects strategy: UNET-based (FLUX/Wan), Checkpoint-based (SDXL/SD1.5), or AnimateDiff
  • Falls back to legacy builders if dynamic fails — zero risk of breakage
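
As a rough sketch of the strategy auto-detection step, assuming the model's filename carries the architecture hint (the real builder also inspects the /object_info response):

```typescript
// Hypothetical strategy picker; patterns and names are illustrative only.
type Strategy = "unet" | "checkpoint" | "animatediff";

function detectStrategy(modelName: string): Strategy {
  const n = modelName.toLowerCase();
  if (/flux|wan|hunyuan/.test(n)) return "unet"; // UNET-based loaders
  if (/animatediff/.test(n)) return "animatediff"; // motion-module pipeline
  return "checkpoint"; // SDXL / SD1.5 default
}
```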

Model Marketplace

  • Search CivitAI directly from the Model Manager for checkpoints, LoRAs, and VAEs
  • One-click download into the correct ComfyUI folder
  • Real-time download progress with speed indicator
  • Model list auto-refreshes after download completes

Privacy Hardening

  • All CivitAI thumbnails proxied through local server — external servers never see your IP
  • PDF.js worker bundled locally instead of Cloudflare CDN
  • Google Fonts removed from all pages (system fonts)
  • Zero analytics, zero telemetry, zero external scripts

UI Polish

  • ComfyUI execution time shown in output info bar (e.g. "85.3s")
  • VRAM unload button to free GPU memory after generation
  • Logo removed (text only until proper logo is designed)
  • RAG, Voice, AI Agents marked as "work in progress" in README
  • Generation timer runs continuously (no more freezing)

Bug Fixes

  • Fixed CreateView crash from useEffect ordering
  • Fixed model type classification — always re-classified from filename at generation time
  • Incompatible workflows automatically skipped

Built with Locally Uncensored + Claude Code

v1.4.0 — Workflow Finder + Dark Mode Redesign

01 Apr 17:47

What's New

Workflow Finder

  • Load Workflow button in the Create view — search CivitAI for ComfyUI workflows, browse built-in templates, or import via URL/JSON paste
  • Auto-detects model type (FLUX, SDXL, SD1.5, Wan, Hunyuan) and assigns compatible workflows
  • Built-in templates for all supported model architectures
  • CivitAI API key support with guided setup flow
  • Incompatible workflows are automatically skipped (e.g. SDXL workflow on a FLUX model)
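
The compatibility filter above can be sketched as a plain predicate over the detected architectures; the Arch type and function names here are hypothetical:

```typescript
// Illustrative compatibility check: a workflow tagged for one
// architecture is skipped when the loaded model is a different one.
type Arch = "flux" | "sdxl" | "sd15" | "wan" | "hunyuan";

function isCompatible(workflowArch: Arch, modelArch: Arch): boolean {
  return workflowArch === modelArch;
}

function usableWorkflows(
  workflows: { name: string; arch: Arch }[],
  modelArch: Arch,
) {
  return workflows.filter((w) => isCompatible(w.arch, modelArch));
}
```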

Image Generation Fixes

  • Fixed model type classification — always re-classifies from filename at generation time
  • Fixed Vite 8 POST proxy blocking (custom middleware for ComfyUI requests)
  • Auto-resolves VAE and CLIP loaders from ComfyUI's installed models
  • Supports both ComfyUI API format and web/UI format workflow JSONs
  • Server-side download proxy for CivitAI workflow ZIPs

New: Unload Models

  • Free VRAM/RAM button in the Create view — unloads models from ComfyUI after generation

UI/UX Improvements

  • Dark mode redesign: deeper blacks, sharper contrasts, no more ChatGPT-grey
  • Light mode: fully working dual-mode for all Create view components
  • Minimal pulse-ring loading animation (replaces emoji + progress bar)
  • Layout: output area on top, prompt input on bottom
  • Compact parameter panel: side-by-side sampler/scheduler and steps/cfg
  • Generation timer now counts continuously (fixed 32s freeze)
  • Shows real ComfyUI execution time on completion
  • "Connected" status bar auto-hides after 10 seconds
  • Onboarding screens match new dark theme
  • AI Agents marked as "Work in Progress"

Built with Locally Uncensored + Claude Code

Locally Uncensored v1.3.0

01 Apr 12:55

What's New in v1.3.0

RAG Document Chat

Upload PDF, DOCX, or TXT files and chat with your documents — all 100% local.

  • Hybrid search (vector + BM25 keyword matching)
  • Confidence scores with color-coded badges
  • Auto-downloads embedding model (nomic-embed-text)
  • Per-conversation toggle and source citations
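
One common way to fuse the two retrievers is a weighted sum of per-retriever normalized scores; the app's actual fusion method isn't documented here, so this is an assumption for illustration:

```typescript
// Minimal hybrid-ranking sketch: max-normalize each retriever's scores,
// then blend with weight alpha. Names and formula are assumptions.
function normalize(scores: number[]): number[] {
  const max = Math.max(...scores, 1e-9);
  return scores.map((s) => s / max);
}

// Blend vector-similarity and BM25 scores for the same document list.
function hybridScores(vector: number[], bm25: number[], alpha = 0.5): number[] {
  const v = normalize(vector);
  const b = normalize(bm25);
  return v.map((s, i) => alpha * s + (1 - alpha) * b[i]);
}
```

A document that ranks mid-high on both signals can then beat one that tops only a single retriever.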

Voice Integration

Talk to your AI and hear responses read aloud.

  • Push-to-talk mic button (Web Speech API)
  • Text-to-speech with sentence-level streaming
  • Voice settings: voice selection, rate, pitch

AI Agents

Give your AI a goal and watch it work autonomously.

  • ReAct-style reasoning with 5 built-in tools
  • Web search, file read/write, Python execution, image generation
  • User approval for destructive actions
  • Task breakdown visualization and color-coded execution log

Web Search

  • Multi-tier: SearXNG > DuckDuckGo API > Wikipedia fallback
  • One-click SearXNG install for enhanced search
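
The multi-tier fallback can be sketched as trying each backend in order and returning the first non-empty result; the backend functions are placeholders, not the app's real implementations:

```typescript
// Hypothetical fallback chain over search backends.
type SearchFn = (q: string) => Promise<string[]>;

async function multiTierSearch(q: string, tiers: SearchFn[]): Promise<string[]> {
  for (const tier of tiers) {
    try {
      const results = await tier(q);
      if (results.length > 0) return results;
    } catch {
      // Tier unavailable (e.g. CAPTCHA or network error): fall through.
    }
  }
  return [];
}
```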

Fixes

  • Cross-platform Python detection (Windows Store alias handling)
  • Fall back to Brave Search when DuckDuckGo returns a CAPTCHA

Download

Windows: Download the .exe installer. Choose "Install for current user only" for a portable experience. Or use .msi for system-wide installation.

Linux: Download the .AppImage, make it executable (chmod +x), and run. Or use .deb / .rpm for your distro.

All versions are portable-friendly — no admin rights required.

v1.2.1 - Portable Mode Linux/Mac

31 Mar 08:41
0339e90

Fix icons: make all icons square (required by Tauri)

- icon.png: 512x491 → 512x512
- 128x128.png: 128x123 → 128x128
- 128x128@2x.png: 128x123 → 256x256
- 32x32.png: 32x31 → 32x32
- icon.ico: regenerated with correct square sizes

v1.2.0 - Portable Mode

30 Mar 18:35

Add src-tauri/target/ to .gitignore

v1.1.0 - Desktop App + ComfyUI Wizard

27 Mar 00:29

Desktop app + ComfyUI wizard + community files

v1.0.0 - Initial Release

25 Mar 15:00

Locally Uncensored v1.0.0

The first public release of Locally Uncensored — an all-in-one local AI app.

Features

  • Uncensored Chat via Ollama (auto-detects installed models)
  • Image Generation via ComfyUI (SDXL, FLUX, Pony, auto-detects checkpoints)
  • Video Generation (Wan 2.1/2.2, AnimateDiff)
  • 25+ Built-in Personas (ChadGPT, Chef Ramsay, No Filter, and more)
  • Model Manager — browse, install, and manage models from the app
  • Dark/Light Mode with particle background effects
  • Zero Cloud — no telemetry, no tracking, no accounts
  • One-Click Windows Setup that installs everything including Ollama

Tech Stack

React 19, TypeScript, Tailwind CSS, Vite

License

MIT