
v2.3.7 — Remote Ollama + OLLAMA_HOST env var support


@PurpleDoubleD PurpleDoubleD released this 22 Apr 07:24

Fixes Issue #31 by @k-wilkinson — LU now honors the OLLAMA_HOST env var and the Settings → Providers → Ollama → Endpoint field across the whole app.

What was broken

Four places hardcoded http://localhost:11434, so any non-default OLLAMA_HOST (0.0.0.0:11434, LAN IPs, custom ports) was silently ignored:

  • backend detector reported "No local backend detected"
  • model dropdown stayed empty
  • Settings → Providers → Endpoint field had zero effect
  • Test button always said Failed — even when curl against the configured endpoint worked

What v2.3.7 does

A single ollama_base field now flows end-to-end.

Rust (state.rs + process.rs + proxy.rs + remote.rs):

  • load_ollama_base() reads, in priority order: config.json → OLLAMA_HOST env var (same semantics Ollama itself uses) → default.
  • New set_ollama_host / get_ollama_host commands normalise input (bare host:port, scheme-less, or full URL all accepted), persist to config.json, update AppState.ollama_base.
  • proxy_localhost SSRF allow-list widened to accept the configured Ollama + ComfyUI hosts (everything else still blocked — no arbitrary intranet SSRF).
  • pull_model_stream reads from state instead of hardcoding localhost:11434 — model downloads now hit the configured Ollama too.
  • Mobile Remote proxy (proxy_ollama) routes to the configured base, with localhost rewritten to 127.0.0.1 for reqwest-in-subprocess.
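
The resolution order load_ollama_base() follows can be sketched in TypeScript like this (an illustrative sketch, not the actual Rust code — resolveOllamaBase and ConfigFile are hypothetical names):

```typescript
// Illustrative sketch of the config.json → OLLAMA_HOST → default priority.
// Names here (resolveOllamaBase, ConfigFile) are assumptions for clarity.
const DEFAULT_OLLAMA_BASE = "http://localhost:11434";

interface ConfigFile {
  ollama_base?: string;
}

function resolveOllamaBase(
  config: ConfigFile,
  env: Record<string, string | undefined>
): string {
  // 1. A value persisted in config.json wins.
  if (config.ollama_base) return config.ollama_base;
  // 2. Fall back to OLLAMA_HOST, mirroring Ollama's own semantics:
  //    a bare host:port gets an http:// scheme prepended.
  const host = env["OLLAMA_HOST"];
  if (host) return host.includes("://") ? host : `http://${host}`;
  // 3. Otherwise the stock default.
  return DEFAULT_OLLAMA_BASE;
}
```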

Frontend (backend.ts + ollama-provider.ts + vite.config.ts + AppShell.tsx):

  • _ollamaBase module state + setOllamaBase / isOllamaLocal / normalizeOllamaBase. ollamaUrl() in Tauri mode returns ${_ollamaBase}/api${path} — was hardcoded localhost.
  • OllamaProvider.apiUrl() delegates to unified ollamaUrl() — single source of truth, no more split Tauri/dev branches that ignored config.baseUrl.
  • Vite /api proxy target computed from process.env.OLLAMA_HOST at dev-server startup — OLLAMA_HOST=… npm run dev now just works.
  • AppShell polls for __TAURI_INTERNALS__ (async in Tauri v2), then waits for useProviderStore.persist.onFinishHydration (avoids the race where zustand hydration clobbered our post-mount sync), pulls Rust's authoritative base, mirrors into the store. Subscribe armed before the initial setProviderConfig so Rust's config.json gets written on startup too. Subsequent GUI edits keep config.json authoritative.
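
The normalization and URL building described above can be sketched as follows (a minimal sketch, assuming the real backend.ts exports behave roughly like this — the actual edge-case handling may differ):

```typescript
// Illustrative sketch of normalizeOllamaBase / setOllamaBase / ollamaUrl.
// Accepts bare host:port, scheme-less input, or a full URL.
function normalizeOllamaBase(input: string): string {
  let base = input.trim();
  // Assume http:// when no scheme is given (e.g. "127.0.0.1:11435").
  if (!/^https?:\/\//i.test(base)) base = `http://${base}`;
  // Drop trailing slashes so path joining stays predictable.
  return base.replace(/\/+$/, "");
}

let _ollamaBase = "http://localhost:11434";

function setOllamaBase(input: string): void {
  _ollamaBase = normalizeOllamaBase(input);
}

function ollamaUrl(path: string): string {
  // In Tauri mode requests go straight to the configured base;
  // the dev server instead relies on the Vite /api proxy.
  return `${_ollamaBase}/api${path}`;
}
```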

Verified end-to-end on the release binary

  • OLLAMA_HOST=127.0.0.1:11435 ollama serve on a second port.
  • Single LU instance launched with OLLAMA_HOST=127.0.0.1:11435 set in the parent shell.
  • config.json synced: ollama_base: http://127.0.0.1:11435. Settings → Providers → Ollama → Endpoint field: http://127.0.0.1:11435. Test: Connected. Model dropdown: populated. Chat streams.
  • Edited the endpoint in the GUI to 11434, 11999 (wrong), 11435, and localhost:11434 in sequence — each edit wrote the new value to config.json and the Test button reflected reality (Connected / Failed / Connected / Connected).
  • Definitive routing proof: killed Ollama on :11435 mid-session. Next chat error: proxy_localhost_stream: error sending request for url (http://127.0.0.1:11435/api/chat) — the request targets the configured endpoint, not the old hardcoded localhost:11434 (which was still up).

Test suite

2183 → 2202 green. 19 new regression tests in backend-urls.test.ts covering normalizeOllamaBase / setOllamaBase / isOllamaLocal / custom-host ollamaUrl() across both Tauri and dev modes. provider-ollama.test.ts mock updated for the new unified ollamaUrl.
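
One behaviour those tests pin down — classifying an endpoint as local or remote — can be sketched like this (an illustrative reimplementation, not the actual export from backend.ts):

```typescript
// Sketch of an isOllamaLocal-style check: treat localhost and the
// loopback IP as "local", everything else (LAN IPs, remote hosts,
// unparseable input) as not local.
function isOllamaLocal(base: string): boolean {
  try {
    const host = new URL(base).hostname;
    return host === "localhost" || host === "127.0.0.1";
  } catch {
    // Not a valid URL — can't claim it's local.
    return false;
  }
}
```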

Drop-in upgrade

No breaking changes. Default endpoint is still http://localhost:11434 — existing users see zero behavior change. If you had OLLAMA_HOST in your environment (Docker, LAN, homelab), it's now honored. If you'd edited Settings → Providers → Ollama → Endpoint, that value now actually flows through the app.


Auto-update rolls to any running 2.3.x install. Or download below.