
Commit cb32b65

PurpleDoubleD and claude committed
release: v2.3.5 — LM Studio auto-detection fix + Windows terminal cleanup
Reported via Discord #help-chat on 2026-04-21 by djoks.exe: "LU does not recognize the models".

Root cause: AppShell's post-onboarding local-backend detection only pre-enabled an openai-compat provider when exactly one backend was detected. With two or more (a very common setup: Ollama + LM Studio together) it opened the BackendSelector modal without pre-enabling anything. If the user dismissed the modal, LM Studio stayed disabled and its models never showed up in the chat dropdown — exactly the "LU doesn't see my models" symptom.

Fix: always pre-enable the first non-Ollama detected backend, even when multiple are running. The selector modal stays as an educational picker so the user can change which one is primary. Live-verified against a mock LM Studio endpoint with Ollama also running, on the rebuilt release binary (hash differs from v2.3.4).

Also rolled into this release:
- CREATE_NO_WINDOW on the remaining Windows subprocess spawns that flashed a console window (the state.rs taskkill at LU shutdown, the search.rs docker pull/run at SearXNG install). Audit shows 100% coverage on the Windows code path now.

Suite 2161 -> 2166 green (+5 regression tests for the backend-autoenable fix). tsc clean. cargo clean.

Drop-in upgrade from v2.3.4, no breaking changes, no localStorage migration.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
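The CREATE_NO_WINDOW change follows the standard Win32 pattern for suppressing the console of a spawned child process. A minimal Rust sketch, assuming a `taskkill`-style helper; the `kill_tree` function and its arguments are illustrative, not the actual state.rs code:

```rust
use std::process::Command;
#[cfg(windows)]
use std::os::windows::process::CommandExt;

// Win32 process-creation flag: the child gets no console window.
const CREATE_NO_WINDOW: u32 = 0x0800_0000;

/// Illustrative helper: kill a process tree on Windows without flashing
/// a console window at the user.
fn kill_tree(pid: u32) -> std::io::Result<()> {
    let pid_s = pid.to_string();
    let mut cmd = Command::new("taskkill");
    cmd.args(["/PID", pid_s.as_str(), "/T", "/F"]);
    // Only the Windows branch needs (or has) creation_flags.
    #[cfg(windows)]
    cmd.creation_flags(CREATE_NO_WINDOW);
    cmd.status().map(|_| ())
}
```

The same `creation_flags` call applies to any `Command::new` on the Windows code path — which is why the fix was an audit of every spawn site rather than a single-line change.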
1 parent 67d6dd8 commit cb32b65

7 files changed

Lines changed: 29 additions & 20 deletions


CHANGELOG.md

Lines changed: 12 additions & 0 deletions
@@ -2,6 +2,18 @@
 
 All notable changes to Locally Uncensored are documented here.
 
+## [2.3.5] - 2026-04-21
+
+### Fixed
+- **LM Studio (and other openai-compat backends) now show up when Ollama is also running** — `AppShell`'s post-onboarding detection only auto-enabled a backend when exactly one was detected. With two or more (the very common Ollama + LM Studio setup) it showed the `BackendSelector` modal but pre-enabled nothing. Users who dismissed the modal saw zero LM Studio models in the chat dropdown even though LM Studio was clearly running — from the outside it looked like "LU doesn't recognize my models". Reported via Discord `#help-chat` on 2026-04-21. Fix: the first non-Ollama detected backend is always pre-enabled (Ollama is left untouched since it has its own provider slot); the selector stays as an educational picker so you can change which openai-compat backend is primary. Reproduced live with a mock LM Studio endpoint on port 1234 while Ollama was also running, then verified against the same setup on the release binary. Five regression tests in `AppShell-backend-autoenable.test.ts`.
+- **No more terminal flashes on Windows when LU kills subprocesses** — two Windows-branch `Command::new` spawns were missing `CREATE_NO_WINDOW`: the `taskkill` calls in `AppState::Drop` that tear down ComfyUI + Claude Code process trees on LU shutdown, and the `docker pull` / `docker run` in `search.rs` that installs SearXNG. Both briefly flashed a console window at the user. Now 100% of Windows-branch subprocess spawns carry the flag. LU itself never spawns LM Studio (it only talks HTTP to a user-run instance), so the "no terminal when using LM Studio" guarantee was already true on that path; this tightens the peripheral surface.
+
+### Changed
+- Test suite 2161 → 2166 green (+5 regression tests for the backend-autoenable fix).
+
+### Notes
+- Drop-in upgrade from v2.3.4. No breaking changes. No localStorage migration. Everything from v2.3.4 (chat-history persistence, Ollama 0.21 compat, Codex loop guard, stop-button fast-path, stale-chip fix, 12-backend auto-detect, Mobile Remote, Codex streaming, Agent Mode rewrite, ERNIE-Image, Qwen 3.6, 75+ one-click model downloads) still applies.
+
 ## [2.3.4] - 2026-04-20
 
 ### Fixed
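The corrected pre-enable rule can be sketched in a few lines of TypeScript. This is a hypothetical reduction of the logic, not LU's actual `AppShell` code — the `DetectedBackend` shape and `pickBackendToEnable` name are illustrative:

```typescript
// Illustrative model of a detected local backend (not LU's real types).
interface DetectedBackend {
  id: string;      // e.g. "ollama", "lmstudio", "vllm"
  baseUrl: string; // e.g. "http://localhost:1234/v1"
}

// Old (buggy) behaviour: a backend was only pre-enabled when detection found
// exactly one. New behaviour: the first non-Ollama backend is always
// pre-enabled, however many are running; the selector modal stays a purely
// educational picker for changing which one is primary.
function pickBackendToEnable(detected: DetectedBackend[]): DetectedBackend | null {
  // Ollama is skipped here because it has its own dedicated provider slot.
  const candidates = detected.filter((b) => b.id !== "ollama");
  return candidates[0] ?? null;
}
```

With both Ollama and LM Studio detected, the sketch pre-enables LM Studio instead of leaving everything gated behind a dismissible modal.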

README.md

Lines changed: 7 additions & 10 deletions
@@ -35,19 +35,16 @@ No cloud. No data collection. No API keys. Auto-detects 12 local backends. Your
 
 ---
 
-## v2.3.4 — Current Release
+## v2.3.5 — Current Release
 
-**Chat-history persistence fix, Ollama 0.21 compatibility, Codex loop guard, 2161 Tests**
+**LM Studio auto-detection fix + Windows terminal-popup cleanup, 2166 Tests**
 
 ### Critical Fixes (why you want this update)
-- **Chat history now survives updates** — NSIS auto-update, crashes, and abrupt process kills no longer wipe your conversations. `isTauri()` detection was broken in the v1→v2 migration, causing every backup/restore call to silently no-op. Fixed with dual-global detection + async-init polling + 5 s backup interval + event-driven + graceful-quit flush. Fully live-verified: destructive wipe + restore roundtrip confirmed on the release binary.
-- **Ollama 0.21 / 0.20.7 compatibility** — the auto-upgraded Ollama now rejects pre-existing models with `HTTP 404 model not found` on `/api/show` when their on-disk manifest lacks the new `capabilities` field. New top-of-app banner + Header Lichtschalter chip detect stale models and offer a one-click re-pull that verifies the fix before clearing the warning.
-- **Codex infinite-loop guard** — small 3 B coder models could get stuck repeating the same `file_write + shell_execute` batch forever when a test failed. Codex now halts with a clear "same tool sequence repeated — try a larger model" message.
-- **Stop button now instant** — abort signal checked between chunks in the for-await and NDJSON-reader loops. Thinking tokens no longer leak for 30–60 s after you click Stop on a Gemma-4 thinking response.
-- **Stale-chip state-leak** — switching from a stale model to a fresh one now clears the red toggle and chip immediately.
+- **LU now recognizes your LM Studio models when Ollama is also running** — if the first-launch detection found 2+ local backends (the very common "Ollama + LM Studio" setup), the backend selector modal opened but no provider got auto-enabled. Users who dismissed the modal saw zero LM Studio models in the dropdown even though LM Studio was clearly running. Fixed: the first non-Ollama backend is always pre-enabled; the selector stays as an educational picker so you can still switch primaries. Reproduced live and verified against a real LM Studio-like endpoint.
+- **No more terminal flashes on Windows** — a couple of subprocess spawns on the Windows code path were missing `CREATE_NO_WINDOW`, so killing ComfyUI/Claude Code during LU shutdown or installing SearXNG briefly flashed a console window. 100% of Windows-branch spawns are now flagged.
 
-### What's still in v2.3.4 from v2.3.3
-This is a hotfix release — v2.3.3's feature surface is unchanged.
+### What's still in v2.3.5 from v2.3.4
+This is a hotfix release — v2.3.4's feature surface (chat-history persistence, Ollama 0.21 compat, Codex loop guard, stop-button instant, stale-chip fix, 12-backend auto-detection, Mobile Remote, Codex streaming, Agent Mode, ERNIE-Image, Qwen 3.6, 75+ downloadable models) is unchanged. Every fix from v2.3.4 and earlier still applies.
 
 ### Remote Access + Mobile Web App
 - **Access your AI from your phone** — Dispatch via LAN or Cloudflare Tunnel (Internet)
@@ -78,7 +75,7 @@ This is a hotfix release — v2.3.3's feature surface is unchanged.
 - **AE-style text header** — clean typography for better discoverability
 - **Plugins dropdown** — Caveman Mode + Personas in one menu
 - **Thinking mode** — tri-state, auto-retry, universal tag stripper
-- **2161 tests** — comprehensive smoke tests covering the entire app
+- **2166 tests** — comprehensive smoke tests covering the entire app
 
 ---

docs/index.html

Lines changed: 6 additions & 6 deletions
@@ -33,7 +33,7 @@
 <link rel="icon" href="https://locallyuncensored.com/favicon.png">
 <link rel="apple-touch-icon" href="https://locallyuncensored.com/favicon.png">
 <script type="application/ld+json">
-{"@context":"https://schema.org","@type":"SoftwareApplication","name":"Locally Uncensored","alternateName":["LU","Locally Uncensored Desktop App"],"applicationCategory":"DeveloperApplication","applicationSubCategory":"Artificial Intelligence","operatingSystem":"Windows 10, Windows 11, Linux","description":"Open-source desktop app for running AI locally. Chat with 20+ providers, generate images and video via ComfyUI, and code with a built-in AI agent — all on your own hardware. Auto-detects 12 local backends. No cloud, no telemetry.","url":"https://locallyuncensored.com/","downloadUrl":"https://github.com/PurpleDoubleD/locally-uncensored/releases/latest","softwareVersion":"2.3.4","license":"https://www.gnu.org/licenses/agpl-3.0.html","author":{"@type":"Person","name":"PurpleDoubleD","url":"https://github.com/PurpleDoubleD"},"offers":{"@type":"Offer","price":"0","priceCurrency":"USD","availability":"https://schema.org/InStock"},"featureList":["Plug & Play Setup with 12 Local Backend Auto-Detection (Ollama, LM Studio, vLLM, KoboldCpp, Jan, llama.cpp, LocalAI, GPT4All, TabbyAPI, Aphrodite, SGLang, TGI)","20+ Provider Presets (local + cloud)","Qwen 3.6 Day-0 Support (35B MoE, Vision, Agentic Coding, 256K context)","GPT-OSS Compatible (120B, 20B via Ollama)","GLM-4.7 Flash Integration","DeepSeek R1 and DeepSeek V3 Support","Llama 4 and Llama 3.3 Ready","Gemma 4 E4B and 27B (Native Vision)","Mistral Small 3 and Phi 4 Support","Qwen 3 Coder and Qwen 2.5 Coder","Abliterated Model Variants","Codex Coding Agent with Live Streaming and Apply-Patch","Claude Code CLI Integration","Agent Mode with 14 Tools, Parallel Execution, MCP, Sub-Agent Delegation","Remote Access from Phone via LAN or Cloudflare Tunnel (6-digit Passcode, QR Setup)","Mobile Web App (Chat, Codex, Tools, Plugins)","Image Generation via ComfyUI (FLUX 2 Klein, FLUX.1, Juggernaut XL, Z-Image Turbo, ERNIE-Image, SDXL, SD 3.5)","Video Generation (Wan 2.1, Wan 2.2, HunyuanVideo 1.5, LTX 2.3, FramePack F1, AnimateDiff, CogVideoX)","Image-to-Image with Denoise Control","Image-to-Video (FramePack F1, CogVideoX, SVD)","Granular Permission System (7 Tool Categories)","File Upload with Vision Support","Thinking Mode with Universal Tag Stripping","A/B Model Compare","Local Benchmark","Memory System and Document RAG","Voice (STT / TTS)","Auto-Update over Signed NSIS Channel","75+ One-Click Model Downloads","Hardware-Aware Model Recommendations","100% Offline and Private (No Telemetry)","AGPL-3.0 Open Source"],"screenshot":"https://raw.githubusercontent.com/PurpleDoubleD/locally-uncensored/master/docs/social-preview.png","softwareRequirements":"Any OpenAI-compatible local backend (Ollama, LM Studio, vLLM, KoboldCpp, etc.) or cloud API key","memoryRequirements":"8 GB RAM minimum; 8-16 GB VRAM for image/video generation","storageRequirements":"6 GB for default model"}
+{"@context":"https://schema.org","@type":"SoftwareApplication","name":"Locally Uncensored","alternateName":["LU","Locally Uncensored Desktop App"],"applicationCategory":"DeveloperApplication","applicationSubCategory":"Artificial Intelligence","operatingSystem":"Windows 10, Windows 11, Linux","description":"Open-source desktop app for running AI locally. Chat with 20+ providers, generate images and video via ComfyUI, and code with a built-in AI agent — all on your own hardware. Auto-detects 12 local backends. No cloud, no telemetry.","url":"https://locallyuncensored.com/","downloadUrl":"https://github.com/PurpleDoubleD/locally-uncensored/releases/latest","softwareVersion":"2.3.5","license":"https://www.gnu.org/licenses/agpl-3.0.html","author":{"@type":"Person","name":"PurpleDoubleD","url":"https://github.com/PurpleDoubleD"},"offers":{"@type":"Offer","price":"0","priceCurrency":"USD","availability":"https://schema.org/InStock"},"featureList":["Plug & Play Setup with 12 Local Backend Auto-Detection (Ollama, LM Studio, vLLM, KoboldCpp, Jan, llama.cpp, LocalAI, GPT4All, TabbyAPI, Aphrodite, SGLang, TGI)","20+ Provider Presets (local + cloud)","Qwen 3.6 Day-0 Support (35B MoE, Vision, Agentic Coding, 256K context)","GPT-OSS Compatible (120B, 20B via Ollama)","GLM-4.7 Flash Integration","DeepSeek R1 and DeepSeek V3 Support","Llama 4 and Llama 3.3 Ready","Gemma 4 E4B and 27B (Native Vision)","Mistral Small 3 and Phi 4 Support","Qwen 3 Coder and Qwen 2.5 Coder","Abliterated Model Variants","Codex Coding Agent with Live Streaming and Apply-Patch","Claude Code CLI Integration","Agent Mode with 14 Tools, Parallel Execution, MCP, Sub-Agent Delegation","Remote Access from Phone via LAN or Cloudflare Tunnel (6-digit Passcode, QR Setup)","Mobile Web App (Chat, Codex, Tools, Plugins)","Image Generation via ComfyUI (FLUX 2 Klein, FLUX.1, Juggernaut XL, Z-Image Turbo, ERNIE-Image, SDXL, SD 3.5)","Video Generation (Wan 2.1, Wan 2.2, HunyuanVideo 1.5, LTX 2.3, FramePack F1, AnimateDiff, CogVideoX)","Image-to-Image with Denoise Control","Image-to-Video (FramePack F1, CogVideoX, SVD)","Granular Permission System (7 Tool Categories)","File Upload with Vision Support","Thinking Mode with Universal Tag Stripping","A/B Model Compare","Local Benchmark","Memory System and Document RAG","Voice (STT / TTS)","Auto-Update over Signed NSIS Channel","75+ One-Click Model Downloads","Hardware-Aware Model Recommendations","100% Offline and Private (No Telemetry)","AGPL-3.0 Open Source"],"screenshot":"https://raw.githubusercontent.com/PurpleDoubleD/locally-uncensored/master/docs/social-preview.png","softwareRequirements":"Any OpenAI-compatible local backend (Ollama, LM Studio, vLLM, KoboldCpp, etc.) or cloud API key","memoryRequirements":"8 GB RAM minimum; 8-16 GB VRAM for image/video generation","storageRequirements":"6 GB for default model"}
 </script>
 <script type="application/ld+json">
 {"@context":"https://schema.org","@type":"Organization","name":"PurpleDoubleD","url":"https://github.com/PurpleDoubleD","logo":"https://locallyuncensored.com/logos/LU-monogram-bw-transparent.png","sameAs":["https://github.com/PurpleDoubleD","https://github.com/PurpleDoubleD/locally-uncensored","https://reddit.com/user/GroundbreakingMall54","https://locallyuncensored.com"]}
@@ -827,7 +827,7 @@
 
 <!-- Top meta strip -->
 <div class="mast-strip">
-<span class="numero">v 2.3.4</span>
+<span class="numero">v 2.3.5</span>
 <span class="sep">·</span>
 <span>AGPL-3.0</span>
 <span class="sep">·</span>
@@ -868,10 +868,10 @@ <h1 class="mast-title">Locally&nbsp;<span class="accent">Uncensored</span></h1>
 <div class="container">
 <div class="dl-card">
 <div class="dl-info">
-<h3>Locally Uncensored <em>&mdash; 2.3.4</em></h3>
+<h3>Locally Uncensored <em>&mdash; 2.3.5</em></h3>
 <p>WINDOWS &middot; ~152 MB &middot; AGPL-3.0</p>
 </div>
-<a class="dl-btn" href="https://github.com/PurpleDoubleD/locally-uncensored/releases/download/v2.3.4/Locally.Uncensored_2.3.4_x64-setup.exe" download>
+<a class="dl-btn" href="https://github.com/PurpleDoubleD/locally-uncensored/releases/download/v2.3.5/Locally.Uncensored_2.3.5_x64-setup.exe" download>
 <svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2.4" stroke-linecap="round" stroke-linejoin="round" aria-hidden="true"><path d="M21 15v4a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2v-4"/><polyline points="7 10 12 15 17 10"/><line x1="12" y1="15" x2="12" y2="3"/></svg>
 Download .exe
 </a>
@@ -1153,10 +1153,10 @@ <h3>Can I use this on macOS?</h3>
 <div class="narrow">
 <div class="dl-card">
 <div class="dl-info">
-<h3>Locally Uncensored <em>&mdash; 2.3.4</em></h3>
+<h3>Locally Uncensored <em>&mdash; 2.3.5</em></h3>
 <p>WINDOWS &middot; ~152 MB &middot; AGPL-3.0</p>
 </div>
-<a class="dl-btn" href="https://github.com/PurpleDoubleD/locally-uncensored/releases/download/v2.3.4/Locally.Uncensored_2.3.4_x64-setup.exe" download>
+<a class="dl-btn" href="https://github.com/PurpleDoubleD/locally-uncensored/releases/download/v2.3.5/Locally.Uncensored_2.3.5_x64-setup.exe" download>
 <svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2.4" stroke-linecap="round" stroke-linejoin="round" aria-hidden="true"><path d="M21 15v4a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2v-4"/><polyline points="7 10 12 15 17 10"/><line x1="12" y1="15" x2="12" y2="3"/></svg>
 Download .exe
 </a>

package.json

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 {
   "name": "locally-uncensored",
-  "version": "2.3.4",
+  "version": "2.3.5",
   "private": false,
   "description": "Generate anything — text, images, video. Locally. Uncensored.",
   "license": "AGPL-3.0",

src-tauri/Cargo.lock

Lines changed: 1 addition & 1 deletion
(Generated file; diff not rendered.)

src-tauri/Cargo.toml

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [package]
 name = "locally-uncensored"
-version = "2.3.4"
+version = "2.3.5"
 description = "Private, local AI chat & image/video generation"
 authors = ["purpledoubled"]
 edition = "2021"

src-tauri/tauri.conf.json

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 {
   "$schema": "https://raw.githubusercontent.com/tauri-apps/tauri/dev/crates/tauri-cli/schema.json",
   "productName": "Locally Uncensored",
-  "version": "2.3.4",
+  "version": "2.3.5",
   "identifier": "com.purpledoubled.locally-uncensored",
   "build": {
     "beforeBuildCommand": "npm run build",
