v2.3.7 — Remote Ollama + OLLAMA_HOST env var support
Fixes Issue #31 by @k-wilkinson — LU now honors the `OLLAMA_HOST` env var and the Settings → Providers → Ollama → Endpoint field across the whole app.
## What was broken

Four places hardcoded `http://localhost:11434`, so any non-default `OLLAMA_HOST` (`0.0.0.0:11434`, LAN IPs, custom ports) silently failed:
- backend detector reported "No local backend detected"
- model dropdown stayed empty
- Settings → Providers → Endpoint field had zero effect
- Test button always said "Failed" — even when `curl` against the configured endpoint worked
## What v2.3.7 does

A single `ollama_base` field now flows end-to-end.

Rust (`state.rs` + `process.rs` + `proxy.rs` + `remote.rs`):
- `load_ollama_base()` reads, in priority order: `config.json` → `OLLAMA_HOST` env var (same semantics Ollama itself uses) → default.
- New `set_ollama_host`/`get_ollama_host` commands normalise input (bare `host:port`, scheme-less, or full URL all accepted), persist to `config.json`, and update `AppState.ollama_base`.
- `proxy_localhost` SSRF allow-list widened to accept the configured Ollama + ComfyUI hosts (everything else still blocked — no arbitrary intranet SSRF).
- `pull_model_stream` reads from state instead of hardcoding `localhost:11434` — model downloads now hit the configured Ollama too.
- Mobile Remote proxy (`proxy_ollama`) routes to the configured base, with localhost rewritten to `127.0.0.1` for reqwest-in-subprocess.
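The resolution and normalisation described above lives in Rust; a minimal TypeScript sketch of the same logic (function names, the default constant, and the config shape are illustrative, not the actual code):

```typescript
// Illustrative mirror of the Rust-side endpoint resolution. Assumed default
// matches Ollama's own: http://localhost:11434.
const DEFAULT_OLLAMA_BASE = "http://localhost:11434";

// Accept bare host:port, scheme-less hosts, or full URLs; emit a clean base URL.
function normalizeOllamaBase(input: string): string {
  let s = input.trim().replace(/\/+$/, "");        // drop trailing slashes
  if (s === "") return DEFAULT_OLLAMA_BASE;
  if (!/^https?:\/\//i.test(s)) s = `http://${s}`; // add a missing scheme
  const url = new URL(s);
  const port = url.port ? `:${url.port}` : "";
  return `${url.protocol}//${url.hostname}${port}`;
}

// Priority order from the notes: config.json value → OLLAMA_HOST env var → default.
function loadOllamaBase(config: { ollama_base?: string }): string {
  if (config.ollama_base) return normalizeOllamaBase(config.ollama_base);
  const fromEnv = process.env.OLLAMA_HOST;
  if (fromEnv) return normalizeOllamaBase(fromEnv);
  return DEFAULT_OLLAMA_BASE;
}
```

Normalising through `URL` is what makes `0.0.0.0:11434`, `localhost:11434`, and `http://127.0.0.1:11435/` all land on the same canonical `http://host:port` form.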
Frontend (`backend.ts` + `ollama-provider.ts` + `vite.config.ts` + `AppShell.tsx`):
- `_ollamaBase` module state + `setOllamaBase`/`isOllamaLocal`/`normalizeOllamaBase`.
- `ollamaUrl()` in Tauri mode returns `${_ollamaBase}/api${path}` — was hardcoded localhost.
- `OllamaProvider.apiUrl()` delegates to the unified `ollamaUrl()` — single source of truth, no more split Tauri/dev branches that ignored `config.baseUrl`.
- Vite `/api` proxy target computed from `process.env.OLLAMA_HOST` at dev-server startup — `OLLAMA_HOST=… npm run dev` now just works.
- AppShell polls for `__TAURI_INTERNALS__` (async in Tauri v2), then waits for `useProviderStore.persist.onFinishHydration` (avoids the race where zustand hydration clobbered our post-mount sync), pulls Rust's authoritative base, and mirrors it into the store. The subscribe is armed before the initial `setProviderConfig` so Rust's `config.json` gets written on startup too. Subsequent GUI edits keep `config.json` authoritative.
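The module-state pattern from the first three bullets can be sketched like this (a simplified illustration, not the real `backend.ts` — the localhost host list and the slash-trimming are assumptions):

```typescript
// Module-level state: one mutable base URL every caller routes through.
let _ollamaBase = "http://localhost:11434"; // default endpoint (assumed)

function setOllamaBase(base: string): void {
  _ollamaBase = base.replace(/\/+$/, "");   // keep a clean, slash-free base
}

// Is the configured endpoint a loopback/local host?
function isOllamaLocal(): boolean {
  const host = new URL(_ollamaBase).hostname;
  return host === "localhost" || host === "127.0.0.1" || host === "0.0.0.0";
}

// Single source of truth: all API URLs are built here, so changing
// _ollamaBase retargets the whole app at once.
function ollamaUrl(path: string): string {
  return `${_ollamaBase}/api${path}`;       // e.g. "/chat" → http://host:port/api/chat
}
```

With `OllamaProvider.apiUrl()` delegating here, a single `setOllamaBase()` call (from the Settings field or from Rust's startup sync) retargets chat, model listing, and pulls alike.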
## Verified end-to-end on the release binary

- `OLLAMA_HOST=127.0.0.1:11435 ollama serve` on a second port.
- Single LU instance launched with `OLLAMA_HOST=127.0.0.1:11435` set in the parent shell. `config.json` synced: `ollama_base: http://127.0.0.1:11435`. Settings → Providers → Ollama → Endpoint field: `http://127.0.0.1:11435`. Test: Connected. Model dropdown: populated. Chat streams.
- Edited the endpoint in the GUI to `11434`, `11999` (wrong), `11435`, and `localhost:11434` in sequence — each edit wrote the new value to `config.json` and the Test button reflected reality (Connected / Failed / Connected / Connected).
- Definitive routing proof: killed Ollama on :11435 mid-session. Next chat error: `proxy_localhost_stream: error sending request for url (http://127.0.0.1:11435/api/chat)` — the request targets the configured endpoint, not the old hardcoded `localhost:11434` (which was still up).
## Test suite

2183 → 2202 green. 19 new regression tests in `backend-urls.test.ts` covering `normalizeOllamaBase` / `setOllamaBase` / `isOllamaLocal` / custom-host `ollamaUrl()` across both Tauri and dev modes. The `provider-ollama.test.ts` mock was updated for the new unified `ollamaUrl()`.
## Drop-in upgrade

No breaking changes. The default endpoint is still `http://localhost:11434` — existing users see zero behavior change. If you had `OLLAMA_HOST` in your environment (Docker, LAN, homelab), it's now honored. If you'd edited Settings → Providers → Ollama → Endpoint, that value now actually flows through the app.
Auto-update rolls to any running 2.3.x install. Or download below.