
Commit 863ba56

fix: Ollama 403 on Windows — explicit Origin override (v0.3.12-rc1)
A user packet capture on Windows confirmed plugin-http v2.5.8 was auto-injecting `origin: http://tauri.localhost` into every outbound request. Ollama's default OLLAMA_ORIGINS allowlist accepts `tauri://*` (which matches the macOS/Linux webview origin) but NOT `http://tauri.localhost`, so Windows users hit 403 even with a correctly installed Ollama.

Fix: in `src/lib/llm-providers.ts`, set `Origin` explicitly on both the `ollama` and `custom` (OpenAI-compat) provider branches, using the request's own host. A same-origin request is always trusted by Ollama, regardless of the contents of OLLAMA_ORIGINS or the Ollama version.

Why this works at all:

- The plugin-http v2.5.x JS shim only adds browser-default headers when the user did NOT already set them (verified against `node_modules/@tauri-apps/plugin-http/dist-js/index.js`, lines 71-75: the loop after `new Request(input, init)` skips `headers.set()` if the key is already present).
- The `unsafe-headers` Cargo feature is already enabled, so Rust-side reqwest forwards `Origin` to the wire instead of stripping it.

Verification layers:

- Layer 1 (source review of plugin-http): documented in a code comment; high confidence the fix works.
- Layer 2 (mocked unit tests, +6 in `__tests__/llm-providers.test.ts`): pin Origin = same-origin for ollama (localhost / LAN / trailing /v1), pin that the custom OpenAI-compat branch does the same, pin that commercial providers DON'T get an Origin override, and pin that a malformed URL falls back gracefully to no Origin. See the sketch after this message.
- Layer 3 (real-TCP fake-server test, new file `src/lib/llm-client.real-llm.test.ts`): stands up a Node HTTP server mimicking Ollama's CORS check, drives `streamChat`, and asserts Origin reaches the wire correctly. Includes a "trivially-green" sanity test proving the server's check actually fires when Origin doesn't match.
- Layer 4 (manual Windows packet recapture against this rc1 build): owner is doing this; will tag v0.3.12 if confirmed.

Test count: 651 mocked + 27 real-LLM, all green.
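The diff for `src/lib/llm-providers.ts` itself is not rendered in this view. As a rough sketch of the behavior the Layer-2 tests pin down (the helper names `deriveOrigin` and `localProviderHeaders` are assumptions for illustration, not the committed code):

```typescript
// Hypothetical sketch only: the committed src/lib/llm-providers.ts diff is
// not shown here. Helper names below are assumed, not taken from the commit.
function deriveOrigin(baseUrl: string): string | undefined {
  try {
    // URL.origin is scheme://host[:port] and never includes a path, so a
    // pasted trailing "/v1" drops out automatically.
    return new URL(baseUrl).origin
  } catch {
    // A stale or malformed config ("not a url") must not crash the
    // provider builder; omitting Origin simply restores pre-fix behavior.
    return undefined
  }
}

// Applied only in the `ollama` and `custom` (OpenAI-compat) branches;
// commercial providers are left untouched.
function localProviderHeaders(baseUrl: string): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" }
  const origin = deriveOrigin(baseUrl) // ollamaUrl or customEndpoint
  if (origin !== undefined) {
    // plugin-http v2.5.x only injects its own `origin: http://tauri.localhost`
    // when the key is absent, so an explicit value here wins on every platform.
    headers["Origin"] = origin
  }
  return headers
}
```

Because the derived value equals the target server's own origin, Ollama treats the request as same-origin and accepts it no matter what `OLLAMA_ORIGINS` contains.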
1 parent 9f62d36 commit 863ba56

7 files changed

Lines changed: 444 additions & 4 deletions

File tree

package.json
src-tauri/Cargo.lock
src-tauri/Cargo.toml
src-tauri/tauri.conf.json
src/lib/__tests__/llm-providers.test.ts
src/lib/llm-client.real-llm.test.ts
src/lib/llm-providers.ts

package.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,7 +1,7 @@
 {
   "name": "llm-wiki",
   "private": true,
-  "version": "0.3.11",
+  "version": "0.3.12-rc1",
   "type": "module",
   "scripts": {
     "dev": "vite",
```

src-tauri/Cargo.lock

Lines changed: 1 addition & 1 deletion
Some generated files are not rendered by default.

src-tauri/Cargo.toml

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 [package]
 name = "llm-wiki"
-version = "0.3.11"
+version = "0.3.12-rc1"
 description = "LLM Wiki - A personal knowledge base for LLM concepts"
 authors = []
 edition = "2021"
```

src-tauri/tauri.conf.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,7 +1,7 @@
 {
   "$schema": "https://schema.tauri.app/config/2",
   "productName": "LLM Wiki",
-  "version": "0.3.11",
+  "version": "0.3.12-rc1",
   "identifier": "com.llmwiki.app",
   "build": {
     "beforeDevCommand": "npm run dev",
```

src/lib/__tests__/llm-providers.test.ts

Lines changed: 102 additions & 0 deletions
```diff
@@ -303,3 +303,105 @@ describe("Sampling override translation across wires", () => {
     expect(body.max_tokens).toBe(8192)
   })
 })
+
+// ── Origin header for local LLM providers (Ollama + custom OpenAI) ──
+//
+// Pinned by a real packet capture from a Windows user (v0.3.11) where
+// plugin-http auto-injected `origin: http://tauri.localhost`,
+// triggering Ollama 403 because that origin isn't in the default
+// `OLLAMA_ORIGINS` allowlist. Setting Origin = same-origin makes
+// Ollama always trust the request.
+//
+// Plugin-http v2.5.x respects user-set headers (override behavior
+// confirmed by reading dist-js/index.js lines 71-75), and the
+// `unsafe-headers` Cargo feature ensures Rust-side reqwest forwards
+// it. So this header surfacing here = bytes on the wire.
+
+describe("Origin header — local LLM CORS workaround", () => {
+  it("Ollama provider sets Origin to the request's own host (localhost)", () => {
+    const cfg = getProviderConfig({
+      provider: "ollama",
+      apiKey: "",
+      model: "llama3",
+      ollamaUrl: "http://localhost:11434",
+      customEndpoint: "",
+      maxContextSize: 8192,
+    })
+    expect(cfg.headers["Origin"]).toBe("http://localhost:11434")
+  })
+
+  it("Ollama provider Origin reflects remote LAN deployment (not localhost)", () => {
+    const cfg = getProviderConfig({
+      provider: "ollama",
+      apiKey: "",
+      model: "llama3",
+      ollamaUrl: "http://192.168.1.50:11434",
+      customEndpoint: "",
+      maxContextSize: 8192,
+    })
+    expect(cfg.headers["Origin"]).toBe("http://192.168.1.50:11434")
+  })
+
+  it("Ollama provider strips trailing /v1 before deriving Origin", () => {
+    // User pasted "http://localhost:11434/v1" as their Ollama URL. The
+    // URL the provider builds will be "http://localhost:11434/v1/chat/completions",
+    // so Origin must still be "http://localhost:11434" (URL.origin
+    // never includes a path anyway, but pinning this catches a
+    // regression where someone derives Origin from the full URL string
+    // by hand).
+    const cfg = getProviderConfig({
+      provider: "ollama",
+      apiKey: "",
+      model: "llama3",
+      ollamaUrl: "http://localhost:11434/v1",
+      customEndpoint: "",
+      maxContextSize: 8192,
+    })
+    expect(cfg.headers["Origin"]).toBe("http://localhost:11434")
+  })
+
+  it("custom OpenAI-compat endpoint also gets a same-origin Origin (LM Studio / llama.cpp / vLLM)", () => {
+    const cfg = getProviderConfig({
+      provider: "custom",
+      apiKey: "",
+      model: "qwen3",
+      ollamaUrl: "",
+      customEndpoint: "http://127.0.0.1:1234",
+      maxContextSize: 8192,
+      apiMode: "chat_completions",
+    } as RealLlmConfig)
+    expect(cfg.headers["Origin"]).toBe("http://127.0.0.1:1234")
+  })
+
+  it("commercial provider (OpenAI) does NOT get an explicit Origin override", () => {
+    // OpenAI's CORS doesn't care about Origin (auth is via API key).
+    // Setting Origin would just be noise. Pin that we DON'T touch
+    // commercial endpoints.
+    const cfg = getProviderConfig({
+      provider: "openai",
+      apiKey: "k",
+      model: "gpt-4o",
+      ollamaUrl: "",
+      customEndpoint: "",
+      maxContextSize: 128000,
+    })
+    expect(cfg.headers["Origin"]).toBeUndefined()
+  })
+
+  it("malformed ollamaUrl falls back gracefully (no Origin header) instead of crashing", () => {
+    // Settings UI normalizes URLs but a stale config from an older
+    // version could carry rubbish. Origin parsing must not throw.
+    const cfg = getProviderConfig({
+      provider: "ollama",
+      apiKey: "",
+      model: "llama3",
+      ollamaUrl: "not a url",
+      customEndpoint: "",
+      maxContextSize: 8192,
+    })
+    expect(cfg.headers["Origin"]).toBeUndefined()
+    // The URL itself will obviously be broken — but the provider
+    // builder shouldn't have thrown.
+    expect(typeof cfg.url).toBe("string")
+  })
+})
```
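The Layer-3 file `src/lib/llm-client.real-llm.test.ts` is likewise not rendered in this view. A minimal sketch of the idea the commit message describes, a real-TCP Node server that imitates Ollama's Origin check plus a sanity test proving the check fires, might look like the following (server shape and test names are assumptions; plain `fetch` stands in for the app's `streamChat`, and only the 403-on-bad-Origin behavior comes from the commit message):

```typescript
// Sketch only: the committed llm-client.real-llm.test.ts is not shown in this
// diff. It illustrates the Layer-3 idea (a fake Ollama enforcing an Origin
// check over real TCP); names below are assumed for illustration.
import { createServer, type Server } from "node:http"
import { afterAll, beforeAll, describe, expect, it } from "vitest"

let server: Server
let baseUrl = ""

beforeAll(async () => {
  server = createServer((req, res) => {
    // Mimic Ollama's gate: a same-origin request passes, anything else is 403.
    if (req.headers.origin !== baseUrl) {
      res.writeHead(403)
      res.end()
      return
    }
    res.writeHead(200, { "Content-Type": "application/json" })
    res.end(JSON.stringify({ ok: true }))
  })
  await new Promise<void>((resolve) => server.listen(0, "127.0.0.1", resolve))
  const addr = server.address()
  if (addr === null || typeof addr === "string") throw new Error("no port")
  baseUrl = `http://127.0.0.1:${addr.port}`
})

afterAll(() => {
  server.close()
})

describe("fake-Ollama Origin gate (sketch)", () => {
  it("accepts a request whose Origin matches the server's own origin", async () => {
    // Node's fetch (undici) lets tests set Origin directly; a browser would not.
    const res = await fetch(`${baseUrl}/api/chat`, {
      method: "POST",
      headers: { Origin: baseUrl, "Content-Type": "application/json" },
      body: "{}",
    })
    expect(res.status).toBe(200)
  })

  it("sanity: the gate actually fires when Origin does not match", async () => {
    const res = await fetch(`${baseUrl}/api/chat`, {
      method: "POST",
      headers: { Origin: "http://tauri.localhost", "Content-Type": "application/json" },
      body: "{}",
    })
    expect(res.status).toBe(403)
  })
})
```

The second test plays the role of the "trivially-green" guard from the commit message: a passing first test only means something if the server demonstrably rejects a mismatched Origin.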
