Commit eab6b10
release: v0.3.8 — LLM provider polish + CORS bypass + Gemini fixes
Transport
- Route every LLM fetch through tauri-plugin-http so the request leaves
the app from Rust instead of the webview. Eliminates an entire class
of "browser-hostile" third-party API bugs: Volcengine Ark's
/api/coding/v3 (its CORS handling drops the Authorization header),
MiniMax quirks, and any domestic cloud that doesn't configure CORS
for browser origins.
Cost: one IPC hop, negligible vs LLM latency.
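As a sketch of the dispatch (the plugin import is the real @tauri-apps/plugin-http entry point; the non-Tauri fallback and the helper name `llmFetch` are assumptions for illustration, not the app's actual code):

```typescript
type FetchLike = (input: string, init?: RequestInit) => Promise<Response>;

// Prefer the Rust-side fetch from tauri-plugin-http inside the app, so the
// request leaves from Rust and never hits webview CORS; fall back to the
// platform fetch elsewhere (unit tests, plain Node). Helper name is
// hypothetical.
async function llmFetch(): Promise<FetchLike> {
  const pluginName = "@tauri-apps/plugin-http";
  try {
    // Resolves only when the Tauri plugin is installed and registered.
    const mod: any = await import(pluginName);
    return mod.fetch as FetchLike;
  } catch {
    // Outside Tauri the import rejects; use the platform fetch instead.
    return globalThis.fetch.bind(globalThis) as FetchLike;
  }
}
```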
New presets
- Alibaba Bailian Coding Plan (single preset; the OpenAI ↔ Anthropic wire
toggle auto-swaps /v1 ↔ /apps/anthropic)
- Volcengine Ark (/api/coding/v3, with the model catalog the Coding product
line actually ships)
- Xiaomi MiMo (api.xiaomimimo.com/v1)
Preset consolidation
- New LlmPreset.baseUrlByMode field: one preset per vendor; the API
mode toggle swaps both the wire format and the URL for vendors that
split their two compat surfaces across paths.
- custom-openai + custom-anthropic merged into a single `custom`
preset; the API mode toggle is the only difference.
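A minimal sketch of the consolidated shape, assuming the field names from these notes (`baseUrlByMode` keyed by API mode); the host below is a placeholder — only the /v1 ↔ /apps/anthropic path swap comes from the release notes.

```typescript
type ApiMode = "openai" | "anthropic";

// Assumed shape: one preset per vendor, with per-mode base URLs for vendors
// that split their OpenAI- and Anthropic-compatible surfaces across paths.
interface LlmPreset {
  id: string;
  baseUrlByMode: Record<ApiMode, string>;
}

// Placeholder host; only the /v1 <-> /apps/anthropic swap is from the notes.
const bailianCoding: LlmPreset = {
  id: "bailian-coding-plan",
  baseUrlByMode: {
    openai: "https://vendor.example/v1",
    anthropic: "https://vendor.example/apps/anthropic",
  },
};

// The API mode toggle swaps wire format and URL in one lookup.
const baseUrlFor = (p: LlmPreset, mode: ApiMode) => p.baseUrlByMode[mode];
```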
Gemini fixes
- parseGoogleLine concatenates every non-thought text part instead of
reading parts[0] only — fixes silent truncation on 2.5/3.x reasoning
models that split output across multiple parts in one SSE event.
Also skips thought:true parts so chain-of-thought doesn't leak into
the user-visible stream.
- encodeURIComponent the model segment in streamGenerateContent URLs
so pasted OpenRouter-style ids (google/gemini-3-pro-preview) don't
break the path.
- Fix HTTP 400 "Unknown name 'temperature'" — translate the new
RequestOverrides type into generationConfig with Gemini-specific
naming (top_p → topP, max_tokens → maxOutputTokens,
stop → stopSequences). OpenAI / Anthropic wires keep the flat shape.
Verified end-to-end against generativelanguage.googleapis.com with a
real key: minimal
body, full generationConfig, systemInstruction + user turn, and
thinkingConfig all return HTTP 200 with expected text.
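The three fixes above can be sketched together. `GeminiPart`, `RequestOverrides`, and the helper names here are simplified stand-ins for the real types, and the URL follows the public v1beta streamGenerateContent path; only the part-concatenation, thought-filter, encoding, and key renames are taken from the notes.

```typescript
// Simplified stand-ins for the real types (names assumed, not actual source).
interface GeminiPart { text?: string; thought?: boolean }
interface RequestOverrides {
  temperature?: number;
  top_p?: number;
  max_tokens?: number;
  stop?: string[];
}

// Fix 1: concatenate every non-thought text part instead of reading parts[0],
// and skip thought:true parts so chain-of-thought stays out of the stream.
function textFromParts(parts: GeminiPart[]): string {
  return parts
    .filter((p) => typeof p.text === "string" && p.thought !== true)
    .map((p) => p.text ?? "")
    .join("");
}

// Fix 2: percent-encode the model segment so slash-bearing ids survive.
function streamUrl(base: string, model: string): string {
  return `${base}/v1beta/models/${encodeURIComponent(model)}:streamGenerateContent?alt=sse`;
}

// Fix 3: translate flat overrides into Gemini's generationConfig naming
// (top_p -> topP, max_tokens -> maxOutputTokens, stop -> stopSequences).
function toGenerationConfig(o: RequestOverrides): Record<string, unknown> {
  const gc: Record<string, unknown> = {};
  if (o.temperature !== undefined) gc.temperature = o.temperature;
  if (o.top_p !== undefined) gc.topP = o.top_p;
  if (o.max_tokens !== undefined) gc.maxOutputTokens = o.max_tokens;
  if (o.stop !== undefined) gc.stopSequences = o.stop;
  return gc;
}
```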
Model catalog refresh
- Kimi: k2.6 default, k2.5 / k2-thinking / for-coding kept; dropped
moonshot-v1-* and preview-tag legacy that EOLs 2026-05-25.
- Zhipu: glm-4.6 default; added glm-4.5-{air,airx}, glm-zero-preview,
glm-4v-plus; dropped glm-4.7 (that was Z.AI international).
- MiniMax Global + CN: M2.7 (default), M2.5; dropped legacy M2 / M2.1.
- Volcengine Ark: exact list the Coding endpoint accepts —
Doubao-Seed-2.0-Code (default) / 2.0-pro / 2.0-lite / Seed-Code,
MiniMax-M2.5, Kimi-K2.5, GLM-4.7, DeepSeek-V3.
Test suite: 404 → 416 passes (new Gemini parser + URL encoding cases,
new RequestOverrides translation across wires, generationConfig
mapping, multi-part / thought-filter behavior).

Parent commit: 19af20c
5 files changed: 6 additions & 6 deletions