Releases: PurpleDoubleD/locally-uncensored
v2.3.9 — Create empty-state, start-in-Chat, Backend Selector opt-out, header toggle fix
Drop-in patch release on top of v2.3.8. No new headline features — five Discord-reported UX reliability fixes plus a docs refresh.
What's fixed
- Create view no longer crashes when no image or video model is installed. On fresh Windows installs with no ComfyUI models yet, opening Create could go unresponsive and hard-close the app. Reported by @figuringitallout on Discord. Create now detects this state and shows a calm empty-state card with a primary Go to Model Manager button and an Already downloaded? Refresh list secondary link. Mode switcher (Image / Video) stays available. Stale persisted model names from older installs are proactively cleared.
- Header-level Create toggle + model dropdown no longer crash on click. The light-switch toggle + model picker that sit in the app header while you're on Create used to throw `TypeError: activeList is undefined` when opening the dropdown and `setComfyRunning is not a function` when the start-poll tick fired. Reported on Discord by @diimmortalis with a precise console-log dump. Four store fields (`imageModelList`, `videoModelList`, `comfyRunning`, `setComfyRunning`) that the component expected were simply never wired into `createStore` — they are now, and `useCreate.fetchModels` / `checkConnection` populate them.
- Backend Selector modal no longer reopens every 5-10 seconds. Users with multiple local backends (Ollama + LM Studio, Ollama + vLLM, etc.) reported on Discord that the "N local backends detected" modal kept re-appearing regardless of how often they dismissed it. The old `sessionStorage`-only guard wasn't surviving WebView2 reloads. Fixed with a persistent `hideBackendSelector` flag, a pre-checked "Don't show this again" tickbox, and a clickable Settings → Providers link in the modal body so users always know where to manage providers.
- LU always starts in the Chat sidebar tab on boot, not Code. Some users were landing inside the empty Code panel after install or update because `codexStore` persisted the sidebar tab between sessions. Fixed: the default is now always Chat on every fresh boot. If you want Code or Remote, click the tab each session.
- `classifyModel()` is null-safe. Defensive guard against stale or missing model names anywhere else in the Create pipeline.
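For the curious, the persisted dismissal guard roughly amounts to the sketch below. Only the `hideBackendSelector` flag name comes from this release; the storage key, helper names, and the `KeyValueStore` shape are illustrative.

```typescript
// Hypothetical sketch of the persistent "don't show again" guard.
// Unlike the old sessionStorage approach, a localStorage-backed flag
// survives WebView2 reloads, so the dismissal sticks across sessions.
type KeyValueStore = {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
};

const HIDE_KEY = "hideBackendSelector"; // illustrative storage key

// Show the modal only when 2+ backends are detected and the user
// has not opted out.
export function shouldShowBackendSelector(
  detectedBackends: number,
  storage: KeyValueStore,
): boolean {
  if (detectedBackends < 2) return false;
  return storage.getItem(HIDE_KEY) !== "true";
}

// Called on dismiss; the tickbox is pre-checked, so opting out is
// the default path.
export function dismissBackendSelector(
  storage: KeyValueStore,
  dontShowAgain: boolean,
): void {
  if (dontShowAgain) storage.setItem(HIDE_KEY, "true");
}
```

In the app this would back the modal's pre-checked tickbox; passing `window.localStorage` as the store is what makes the flag survive reloads.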
Docs
- CONTRIBUTING.md now spells out the three dev workflows (`npm run tauri:dev`, `npm run dev`, `npm run tauri:build`) and the trade-offs between them — Tauri invokes only resolve under `tauri:dev`. Reported on Discord by @k-wilkinson.
Known work in progress
Same caveat as v2.3.8: Codex (the coding-agent tab) is still actively evolving and is not yet production-finished. Chat, Create, Agent Mode, and Remote remain stable.
Upgrade
Auto-update will prompt on your next launch. If it doesn't, grab the installer below.
Stability
- 2205 / 2205 Vitest tests green
- `cargo check` clean
- `tsc --noEmit` clean
- No breaking changes, no localStorage migration
v2.3.8 — Codex end-to-end overhaul
v2.3.8 — Internal plumbing + UX polish
Drop-in patch release on top of v2.3.7. No new headline features — this is internal stability work so future feature releases have a cleaner foundation.
What's in this build
- Built-in tool executors now thread the active chat-id through to Rust — the `agent-context.ts` plumbing was designed for per-chat workspace isolation but the frontend executors (`fs_read`, `fs_write`, `fs_list`, `fs_search`, `shell_execute`, `execute_code`) never actually passed the id through, so relative paths silently landed in a shared fallback folder. Fixed end-to-end.
- Tool-call JSON extraction is more robust — the old greedy `\{[^}]*\}` regex failed on any nested brace or string value containing `{` (e.g. Python f-strings). Replaced with a locate-header-then-balance scanner that respects string escapes. Fixes models that emit tool calls as JSON inside `content` (qwen2.5-coder, some llama variants) when their output contains f-strings or dict literals.
- Cleaner chat bubbles across models that emit tool calls in content — a new `stripRanges()` helper removes the exact extracted JSON substrings from the displayed content instead of a greedy regex that missed edge cases. Eliminates stacked raw-JSON bubbles.
- Arg-validator error hint is concrete now — when a tool call fails schema validation, the retry hint lists the exact required fields with types + what the model actually sent, so small models self-correct on the next iteration instead of repeating the same malformed call.
- Family grouping in the model dropdown (QWEN / GEMMA / LLAMA / HERMES / PHI / DOLPHIN / MISTRAL / DEEPSEEK / …) with reactive refresh when any provider's enabled state changes.
- Various internal cleanups — development-only diagnostic code removed from the release binary, devtools flag off for production builds.
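The locate-then-balance scanner described above can be sketched in a few lines. This is an illustrative reimplementation, not the shipped code; it assumes the caller has already located the opening `{` of the candidate tool call.

```typescript
// Sketch: find the first balanced {...} object starting at `start`,
// tracking brace depth while skipping over JSON string literals and
// their escape sequences. A greedy \{[^}]*\} regex fails on nested
// braces like {"code": "d = {1: 2}"}; this scanner does not.
export function extractBalancedJson(text: string, start: number): string | null {
  let depth = 0;
  let inString = false;
  let escaped = false;
  for (let i = start; i < text.length; i++) {
    const ch = text[i];
    if (inString) {
      if (escaped) escaped = false;        // the char after a backslash is literal
      else if (ch === "\\") escaped = true;
      else if (ch === '"') inString = false;
      continue;                            // braces inside strings are ignored
    }
    if (ch === '"') inString = true;
    else if (ch === "{") depth++;
    else if (ch === "}" && depth > 0 && --depth === 0) {
      return text.slice(start, i + 1);     // balanced object found
    }
  }
  return null; // ran out of input before the object closed
}
```

The extracted substring (and its exact range in the content) is what a helper like `stripRanges()` would then remove from the displayed message.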
Stability
- 2202 / 2202 Vitest tests green
- `cargo check` clean
- `tsc --noEmit` clean
- Drop-in upgrade from v2.3.7, no breaking changes, no localStorage migration
Known work still in progress
Codex (the coding-agent tab) is under active development. Several improvements landed in this release but the feature set is still evolving — do not yet treat Codex as production-finished. Chat, Create, Agent Mode, and Remote remain stable.
Install
- Windows: download `Locally.Uncensored_2.3.8_x64-setup.exe` below, or auto-update if you're already on v2.3.4+
- Linux: AppImage / .deb / .rpm below
- Existing users get the in-app updater notification on next launch
Full technical changelog: CHANGELOG.md
v2.3.7 — Remote Ollama + OLLAMA_HOST env var support
Fixes Issue #31 by @k-wilkinson — LU now honors OLLAMA_HOST env var and the Settings → Providers → Ollama → Endpoint field across the whole app.
What was broken
Four places hardcoded http://localhost:11434 so any non-default OLLAMA_HOST (0.0.0.0:11434, LAN IPs, custom ports) silently failed:
- backend detector reported "No local backend detected"
- model dropdown stayed empty
- Settings → Providers → Endpoint field had zero effect
- Test button always said Failed — even when curl against the configured endpoint worked
What v2.3.7 does
A single ollama_base field now flows end-to-end.
Rust (state.rs + process.rs + proxy.rs + remote.rs):
- `load_ollama_base()` reads, in priority order: `config.json` → `OLLAMA_HOST` env var (same semantics Ollama itself uses) → default.
- New `set_ollama_host` / `get_ollama_host` commands normalise input (bare `host:port`, scheme-less, or full URL all accepted), persist to `config.json`, update `AppState.ollama_base`.
- `proxy_localhost` SSRF allow-list widened to accept the configured Ollama + ComfyUI hosts (everything else still blocked — no arbitrary intranet SSRF).
- `pull_model_stream` reads from state instead of hardcoding localhost:11434 — model downloads now hit the configured Ollama too.
- Mobile Remote proxy (`proxy_ollama`) routes to the configured base, with localhost rewritten to 127.0.0.1 for reqwest-in-subprocess.
Frontend (backend.ts + ollama-provider.ts + vite.config.ts + AppShell.tsx):
- `_ollamaBase` module state + `setOllamaBase` / `isOllamaLocal` / `normalizeOllamaBase`.
- `ollamaUrl()` in Tauri mode returns `${_ollamaBase}/api${path}` — was hardcoded localhost.
- `OllamaProvider.apiUrl()` delegates to unified `ollamaUrl()` — single source of truth, no more split Tauri/dev branches that ignored `config.baseUrl`.
- Vite `/api` proxy target computed from `process.env.OLLAMA_HOST` at dev-server startup — `OLLAMA_HOST=… npm run dev` now just works.
- AppShell polls for `__TAURI_INTERNALS__` (async in Tauri v2), then waits for `useProviderStore.persist.onFinishHydration` (avoids the race where zustand hydration clobbered our post-mount sync), pulls Rust's authoritative base, mirrors into the store. Subscribe armed before the initial `setProviderConfig` so Rust's `config.json` gets written on startup too. Subsequent GUI edits keep `config.json` authoritative.
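As a rough illustration of the input normalisation, the behaviour described above could look like the sketch below. The shipped `normalizeOllamaBase` may differ in details — in particular, falling back to port 11434 when none is given is an assumption here.

```typescript
// Sketch: collapse bare host:port, scheme-less hosts, and full URLs
// into a canonical "http(s)://host:port" base with no trailing slash.
export function normalizeOllamaBase(input: string): string {
  let s = input.trim();
  if (s === "") return "http://localhost:11434";   // Ollama default
  if (!/^https?:\/\//i.test(s)) s = "http://" + s; // add missing scheme
  const url = new URL(s);
  const port = url.port || "11434";                // assumed default when omitted
  return `${url.protocol}//${url.hostname}:${port}`;
}
```

With a single canonicaliser like this, every consumer (`ollamaUrl()`, the Vite proxy, the Rust side via `config.json`) can agree on one string form of the endpoint.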
Verified end-to-end on the release binary
- `OLLAMA_HOST=127.0.0.1:11435 ollama serve` on a second port.
- Single LU instance launched with `OLLAMA_HOST=127.0.0.1:11435` set in the parent shell.
- `config.json` synced: `ollama_base: http://127.0.0.1:11435`. Settings → Providers → Ollama → Endpoint field: `http://127.0.0.1:11435`. Test: Connected. Model dropdown: populated. Chat streams.
- Edited endpoint in the GUI to `11434`, `11999` (wrong), `11435`, and `localhost:11434` in sequence — each edit wrote the new value to `config.json` and the Test button reflected reality (Connected / Failed / Connected / Connected).
- Definitive routing proof: killed Ollama on :11435 mid-session. Next chat error: `proxy_localhost_stream: error sending request for url (http://127.0.0.1:11435/api/chat)` — the request targets the configured endpoint, not the old hardcoded localhost:11434 (which was still up).
Test suite
2183 → 2202 green. 19 new regression tests in `backend-urls.test.ts` covering `normalizeOllamaBase` / `setOllamaBase` / `isOllamaLocal` / custom-host `ollamaUrl()` across both Tauri and dev modes. `provider-ollama.test.ts` mock updated for the new unified `ollamaUrl`.
Drop-in upgrade
No breaking changes. Default endpoint still http://localhost:11434 — existing users see zero behavior change. If you had OLLAMA_HOST in your environment (Docker, LAN, homelab) it's now honored. If you'd edited Settings → Providers → Ollama → Endpoint that value now actually flows through the app.
Auto-update rolls to any running 2.3.x install. Or download below.
v2.3.6 — Remote ComfyUI + LM Studio CORS fix
v2.3.6 (2026-04-21)
Added
- Configurable ComfyUI host (Settings → ComfyUI → Host). Point LU at a remote ComfyUI running in Docker, on a LAN machine, or on a headless homelab server. Requested in Discussion #1 by @ShoaibSajid. When the host resolves to the local machine (`localhost` / `127.0.0.1` / `::1` / `0.0.0.0`) the Start/Stop/Restart/Install/Path controls stay visible; remote hosts hide those controls since LU can't manage a remote Python process. Mobile Remote proxy honors the new host. +17 regression tests.
Fixed
- ComfyUI port now actually persists across restarts. Pre-existing bug: `set_comfyui_port` wrote to `config.json` but `AppState::new()` never read it back on startup, so a custom port got reverted to 8188 on next launch. New `load_comfy_config_values()` helper runs at startup.
- OpenAI-compat local backends (LM Studio, vLLM, llama.cpp, KoboldCpp, Jan, oobabooga, GPT4All, Aphrodite, SGLang, TGI, LocalAI, TabbyAPI) can actually be reached from LU's Tauri webview. `openai-provider.ts` used plain `fetch()`, which CORS-blocks localhost inside WebView2, so the "Test" button always showed Failed and models never appeared in the dropdown even when the backend was obviously up via curl. Each HTTP call now picks `localFetch` / `localFetchStream` when the provider baseUrl hostname is local; cloud endpoints skip the proxy.
Changed
- Test suite 2166 → 2183 green.
Notes
- Drop-in upgrade from v2.3.5. No breaking changes. Default host is `localhost` — existing users see zero behavior change unless they switch to a remote host.
Install
- Windows: download the `.exe` setup below and run it (NSIS installer).
- Existing 2.3.x users: auto-update prompts on next launch.
🤖 Auto-generated from the v2.3.6 tag; CI builds signed artifacts and appends them to this release over the next ~10 minutes.
Locally Uncensored v2.3.5 — LM Studio Detection + Setup Clarity
TL;DR
Hotfix on top of v2.3.4. Recommended for anyone running multiple local AI backends (Ollama + LM Studio together) or anyone who previously ran setup.bat expecting the desktop app.
Fixed
LU now sees LM Studio models even when Ollama is also running
AppShell's post-onboarding detection only pre-enabled an openai-compat backend when exactly one local backend was detected. With 2+ — the very common Ollama + LM Studio setup — it opened a selector modal without pre-enabling anything. Users who dismissed the modal ended up with the openai provider silently disabled, so LM Studio's /v1/models was never queried. From the outside that looks exactly like "LU doesn't recognize my models".
Fix: the first non-Ollama detected backend is always pre-enabled now, even when multiple are running. The selector modal stays as an educational picker so you can change which one is primary. Ollama is untouched — it has its own provider slot.
Reproduced live with a mock LM Studio endpoint on port 1234 with Ollama also running, and re-verified against the same setup on the release binary. Reported by djoks.exe on Discord.
setup.bat / setup.ps1 / setup.sh no longer mislead end-users into dev mode
Previously advertised as "Windows One-Click Setup" in the README, but actually launched npm run dev — Vite at localhost:5173 in your default browser, with fewer features than the installed Tauri app and noisy [vite] http proxy error: /system_stats ECONNREFUSED if ComfyUI wasn't installed.
Fix: each script now opens with a prominent dev-mode banner, links to the installer on Releases, and asks Y/N before continuing. README's setup section reframed for contributors only. Reported via issue #30 by @EnotikSergo.
No more terminal flashes on Windows
Two Windows-branch Command::new spawns were missing CREATE_NO_WINDOW, so killing ComfyUI + Claude Code at LU shutdown (AppState::Drop) and installing SearXNG briefly flashed a console window. Both now carry the flag. 100% of Windows-branch subprocess spawns are covered. LU itself never spawns LM Studio (it only talks HTTP to a user-run instance), so the "no terminal when using LM Studio" guarantee was already true on that path; this tightens the peripheral surface.
Test & build
- 2166 / 2166 vitest green (+5 regression tests for the backend-autoenable fix)
- `tsc --noEmit` clean
- `cargo check` clean
- Auto-update via signed NSIS channel — existing v2.3.x users get the prompt automatically
Upgrading
Existing users (v2.3.x): the in-app updater will pick this up on next launch. Click Restart Now when prompted. Your chat history, settings, onboarding state, and model list all survive — drop-in upgrade, no migration.
New users: grab the Windows .exe installer below, or the .AppImage / .deb / .rpm for Linux.
Download
Windows: .exe NSIS installer or .msi. Portable-friendly — no admin rights required.
Linux: .AppImage — chmod +x and run.
Locally Uncensored v2.3.4 — Reliability Hotfix
TL;DR
Hotfix release on top of v2.3.3. Recommended for everyone — fixes the big "lost my chat history after the update" bug plus the Ollama 0.21 compatibility break.
Fixed
Chat history now survives updates
`isTauri()` was checking the Tauri v1 global (`window.__TAURI__`), but Tauri 2 renamed it to `window.__TAURI_INTERNALS__`. Inside the packaged .exe every Tauri-only backend command (`backup_stores`, `restore_stores`, `set_onboarding_done`, ComfyUI manager, whisper, process control) silently fell through to the dev-mode fetch path and no-op'd.
- Dual-global detection for v1 + v2 compat
- 100 ms × 50-tick polling for async `withGlobalTauri` init
- Backup cadence tightened: 30 s → 5 s interval + 1 s event-driven debounce + `beforeunload` sync flush
- `__ts` marker so the snapshot is never empty
- Full destructive wipe+restore roundtrip live-verified on the release binary
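The dual-global check plus polling roughly looks like this sketch — only the two global names and the 100 ms × 50-tick cadence come from the notes above; the helper names and shapes are illustrative:

```typescript
// Sketch: accept either the Tauri v1 or v2 global.
export function isTauri(g: Record<string, unknown>): boolean {
  return "__TAURI_INTERNALS__" in g || "__TAURI__" in g; // v2 first, v1 fallback
}

// withGlobalTauri injects the global asynchronously, so a one-shot
// check at module load can race it. Poll instead: 100 ms × 50 ticks.
export async function waitForTauri(
  g: Record<string, unknown>,
  ticks = 50,
  intervalMs = 100,
): Promise<boolean> {
  for (let i = 0; i < ticks; i++) {
    if (isTauri(g)) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false; // dev mode / browser — fall back to fetch paths
}
```

The key point is the fallback order: checking only `window.__TAURI__` is exactly the v1-only bug this release fixes.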
Ollama 0.21 / 0.20.7 compatibility
The auto-upgraded Ollama now returns HTTP 404 model not found on /api/show for pre-existing models whose on-disk manifest lacks the new capabilities field.
- New top-of-app `StaleModelsBanner` + header light-switch chip
- One-click Refresh all re-pulls each stale model and verifies via a second probe before clearing the warning
- Error parser tolerates 400 / 404 / Rust-proxy-wrapped-500 forms
Codex infinite-loop guard
Small 3 B coder models (`qwen2.5-coder:3b`, `llama3.2:1b`) could loop forever repeating the same `file_write` + `shell_execute` batch when a test failed. Codex now halts with a clear "same tool sequence repeated — try a larger model" message after two consecutive identical batches.
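A minimal version of that guard just serialises each tool batch and compares it to the previous one. Names here are hypothetical, and the JSON-based key assumes stable property order — a simplification the real guard may handle differently:

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };

// Serialise a batch so two consecutive iterations can be compared
// structurally. JSON.stringify keys on insertion order, which is
// stable for tool calls parsed the same way each turn.
export function batchKey(batch: ToolCall[]): string {
  return JSON.stringify(batch.map((call) => [call.name, call.args]));
}

// Halt when the model emits the exact same batch it just ran —
// the second identical occurrence triggers the "try a larger model"
// message instead of another futile iteration.
export function shouldHalt(prevKey: string | null, batch: ToolCall[]): boolean {
  return prevKey !== null && prevKey === batchKey(batch);
}
```

The agent loop would store `batchKey(batch)` after each iteration and call `shouldHalt` before executing the next one.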
Stop button is now actually instant
`abort.signal.aborted` is checked at the top of the `for await` chat stream and the NDJSON reader loop; `reader.cancel()` fires on abort. No more 30–60 s of thinking tokens leaking after you click Stop on a Gemma-4 response.
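In sketch form, the abort-aware reader loop looks like this — the reader and stream shapes are simplified assumptions, not the app's actual types:

```typescript
// Sketch: NDJSON read loop that honours an AbortSignal. The aborted
// flag is checked before every read (not just on error), and the
// reader is cancelled so the HTTP stream tears down promptly instead
// of draining 30-60 s of remaining tokens.
export async function readNdjson(
  reader: { read(): Promise<{ done: boolean; value?: string }>; cancel(): Promise<void> },
  signal: AbortSignal,
  onLine: (line: string) => void,
): Promise<void> {
  for (;;) {
    if (signal.aborted) {
      await reader.cancel(); // stop the backend generating, not just the UI
      return;
    }
    const { done, value } = await reader.read();
    if (done) return;
    if (value) value.split("\n").filter(Boolean).forEach(onLine);
  }
}
```

Checking the flag at the loop top is what makes Stop feel instant: the very next tick exits, rather than waiting for the stream to end naturally.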
Other
- Stale-chip state leak on model switch fixed
- `isHtmlSnippet` export restored (19 failing CodeBlock tests → green)
- Create view `getKnownFileSizes` CommonJS `require()` → dynamic `import()` (was silently broken in Vite/browser bundle)
- flux2 CFG scale test regression corrected
Test & build
- 2161/2161 vitest green (+56 regression tests over 2.3.3)
- `tsc --noEmit` clean
- `cargo check` clean
- Auto-update via signed NSIS/MSI channel — existing users get prompted automatically
Upgrading
Existing users: the in-app updater will pick this up automatically on next launch. Click Restart Now when prompted. Your chat history, settings, and model list all survive the upgrade — that's literally what this release fixes.
New users: Download the Windows .exe installer below.
Download
Windows: .exe NSIS installer or .msi. Portable-friendly — no admin rights required.
Linux: .AppImage — chmod +x and run.
v2.3.3 — Remote Access, Codex Streaming, Qwen 3.6, ERNIE-Image, 2105 Tests
What's New in v2.3.3
The biggest release yet — Remote Access, Codex overhaul, 6 new image/video models, and Qwen 3.6 day-0 support.
Remote Access + Mobile Web App
- Access your AI from your phone — Dispatch via LAN or Cloudflare Tunnel (Internet)
- 6-digit passcodes with rate limiting, JWT auth, and auto-regenerating tokens
- Full mobile web app with hamburger drawer, chat list, Codex mode, file attach, thinking toggle, plugins (Caveman + Personas)
- Mobile Agent Mode with 13 tools — Thought/Action/Observation cards, collapsible steps
- Mobile-Desktop sync — messages mirror in real-time, memory extraction works across both
- Security hardened — permissions enforced on proxy, CSP headers, content validation, no static file leaks
Codex Coding Agent — Major Upgrade
- Live streaming between tool calls — see tokens as they generate (was: blank screen for 2+ minutes)
- Continue capability — tool-call history persisted as hidden messages, model remembers what it did
- AUTONOMY CONTRACT — explicit prompt prevents "Now I will..." premature stopping
- Fallback answer — never shows empty bubble after tool calls
- Streaming arg repair — fixes Ollama JSON-string argument issue
Agent Mode — 13-Phase Rewrite
- Parallel tool execution with side-effect grouping (file-write serial, reads parallel)
- Budget system — max 50 tool calls / 25 iterations per task
- Sub-agent delegation — `delegate_task` spawns isolated sub-agents (depth 2)
- In-turn cache — deduplicates identical tool calls within one turn
- MCP integration — external tools via `ToolRegistry.registerExternal()`
- Embedding-based routing — reduces tool definitions by ~80% for large registries
- Filesystem awareness — agent now uses `file_list` / `system_info` before acting
New Models
- Qwen 3.6 (day-0) — 35B MoE, 3B active, vision + agentic coding + thinking preservation, 256K context. One-click Ollama pull
- ERNIE-Image (Baidu) — Turbo (8 steps) + Base (50 steps), 28.9 GB each. ConditioningZeroOut workflow, no custom nodes needed
- Z-Image — Own ModelType with correct CLIP matching (was misclassified as flux2)
- 75+ downloadable models — all URLs verified, file sizes corrected
Image + Video
- Image-to-Image — upload source, adjust denoise, transform with any model
- FramePack fixes — correct node names, DualCLIPLoader, CLIPVision
- 6 ComfyUI E2E fixes — real error messages, direct fetch fallback, stale model reset
UI/UX
- AE-style text header — clean typography replaces icon pills for better discoverability
- Plugins dropdown — Caveman Mode (Off/Lite/Full/Ultra) + Personas in one menu
- Thinking mode — tri-state (true/false/undefined), auto-retry on 400, universal tag stripper
- Gemma 3/4 planner bypass — no more "Plan: / Constraint Checklist:" preamble
Developer
- 2105 tests (83 files) — comprehensive smoke tests covering entire app surface
- Auto-update — signed NSIS installers, in-app download with progress bar
- NSIS persistence — localStorage backup/restore survives updates
- Process cleanup — Windows Job Object kills ComfyUI on app close
Bug Fixes
- Thinking tags leaked past toggle (QwQ, DeepSeek-R1, Gemma)
- I2V image upload FormData corruption
- Chat homepage null crash on fresh install
- Light theme contrast issues
- Caveman mode missing in Codex/Claude Code
- Download polling race condition
- 13 file sizes corrected (up to 95% off)
- Terminal window popup on Windows (cloudflared)
Full changelog: See CLAUDE.md entries 1-95
v2.3.2 — GLM-4.7-Flash, Model Loading Fix, Agent Badge Audit
What's New
GLM-4.7-Flash (11 variants)
ZhipuAI's strongest 30B class model with native tool calling and 198K context window.
- 4 Uncensored (Heretic): IQ2_M (10 GB), Q4_K_M (19 GB), Q6_K (25 GB), Q8_0 (32 GB)
- 7 Mainstream: IQ2_M through Q8_0 — fits 12 GB VRAM at IQ2_M
- All variants marked as AGENT (tool calling compatible)
GLM 5.1 754B MoE
Listed as cloud-available via Ollama. 754B MoE (40B active), frontier agentic engineering model.
Model Loading Fix (Discussion #22)
Fixed 3 bugs causing "0 models loaded" in ComfyUI Create View:
- Race condition — ComfyUI responds to health check before scanning model directories. Now calls /api/refresh before querying models.
- Broken auto-retry — 0-models case set both modelsLoaded and modelLoadError, preventing retries. Fixed: retries every 3s, max 12x (~36s).
- Stale cache — After downloading models, ComfyUI's directory cache was never refreshed. Now calls /api/refresh after every download.
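The retry schedule above can be sketched as a bounded polling loop. Helper names are illustrative; only the cadence (every 3 s, max 12 attempts, ~36 s) comes from the fix description:

```typescript
// Sketch: query ComfyUI for models, retrying every 3 s up to 12 times.
// Returning early on the first non-empty list is what replaces the old
// broken behaviour where modelsLoaded + modelLoadError were both set
// on the 0-models case, blocking any further retries.
export async function retryModels(
  fetchModels: () => Promise<string[]>,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
  maxAttempts = 12,
  intervalMs = 3000,
): Promise<string[]> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const models = await fetchModels();
    if (models.length > 0) return models; // success clears the error state
    if (attempt < maxAttempts) await sleep(intervalMs);
  }
  return []; // still empty after ~36 s — surface the error to the user
}
```

Injecting `sleep` keeps the schedule testable; in the app it would just be the default `setTimeout` wrapper.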
Agent Badge Audit
Audited all 75+ models for correct tool calling flags:
- Added agent flag to: Qwen 3.5 (all sizes), Qwen3 8B/14B, GLM-4 9B, GPT-OSS 20B, Llama 3.1 8B, Llama 3.3 70B, Phi-4 14B, Qwen 2.5 7B
- Removed all HOT badges — cleaner UI, only AGENT badges shown
Other Fixes
- Think-mode guard for non-thinking models (amber hint instead of crash)
- Chat homepage null crash fix
- Light theme contrast improvements
Downloads
CI builds the installers automatically. Check back in ~5 minutes for:
- Locally.Uncensored_2.3.2_x64-setup.exe — NSIS installer (recommended)
- Locally.Uncensored_2.3.2_x64_en-US.msi — Windows Installer
Test Results
- 607 tests passing
- E2E tested: GLM downloads (start/cancel/re-download), model loading fix, all existing models
v2.3.1 — In-App Ollama Install, Configurable ComfyUI Port
What's New
In-App Ollama Download & Install
No more hunting for external links — click Install Ollama in the onboarding wizard and watch it download with a real-time progress bar (speed, bytes, elapsed timer). Silent install, auto-start, auto-detect. Zero manual steps.
Configurable ComfyUI Port & Path
The ComfyUI port was hardcoded to 8188 in 20+ places — now fully configurable in Settings > ComfyUI (Image & Video). Users with the new ComfyUI Desktop App (which uses a different port) can now connect by simply changing the port.
Path is also editable in Settings with a Connect button — no need to go through onboarding again.
ComfyUI Install Progress
The one-click ComfyUI install now shows step-by-step progress (Step 1/3: Clone, Step 2/3: PyTorch, Step 3/3: Dependencies) with an elapsed timer. Previously the install got stuck at "Starting..." forever because the Rust thread never reported completion.
Provider Status Fix
Provider connection dots in Settings now show actual status (green = connected, red = failed, gray = unknown) instead of always showing green. Auto-checks connection on page load.
Full Changelog: v2.3.0...v2.3.1
v2.3.0 — ComfyUI Plug & Play, 20 Model Bundles, I2I, I2V
v2.3.0 — ComfyUI Plug & Play, 20 Model Bundles, Image-to-Image, Image-to-Video
Highlights
- ComfyUI Plug & Play — Auto-detect, one-click install, auto-start. Zero config image and video generation.
- 20 Model Bundles — 8 image + 12 video bundles with one-click download. Verified models marked, untested show "Coming Soon".
- Z-Image Turbo/Base — Uncensored image model. 8-15 seconds per image. No safety filters.
- FLUX 2 Klein — Next-gen FLUX architecture with Qwen 3 text encoder.
- Image-to-Image (I2I) — Upload a source image, adjust denoise, transform with any image model.
- Image-to-Video (I2V) — FramePack F1 (6 GB VRAM!), CogVideoX, SVD with drag & drop.
- Dynamic Workflow Builder — 14 strategies auto-detect installed nodes and build correct pipelines.
New Features
- VRAM-aware model filtering (Lightweight / Mid-Range / High-End tabs)
- Unified download manager with progress, speed, retry for failed files
- Think Mode moved to chat input (always accessible)
- Hardware-aware onboarding recommends Gemma 4, Qwen 3.5 based on GPU VRAM
- Verified/Coming Soon badges on model bundles
- ComfyUI process auto-cleanup on app close (Windows Job Object)
- GLM 5.1, Qwen 3.5, Gemma 4 added to Discover models
Bug Fixes
- SSRF protection added to proxy_localhost (localhost-only validation)
- npm vulnerabilities fixed (vite updated)
- FramePack workflow: DualCLIPLoader fix, VAEEncode fix, preflight custom node check
- Z-Image: own ModelType + strategy (was misclassified as flux2)
- Think-Mode guard for non-thinking models
- All 105 download URLs verified HTTP 200/302
Downloads
- Windows (.exe) — Recommended. NSIS installer.
- Windows (.msi) — Windows Installer alternative.
- Other platforms: build from source.