feat: add GLM (Zhipu AI) provider support #1059

Aneaire wants to merge 2 commits into pingdotgg:main
Conversation
Add GLM as a second provider alongside Codex, enabling users to use GLM models (glm-4.7, glm-4.7-flash, glm-5) via the OpenAI-compatible chat completions API at open.bigmodel.cn. Changes span contracts, shared utilities, server adapter/registry/health, and the web UI (settings, model picker, provider selector, store). Authentication is via GLM_API_KEY or ZAI_API_KEY environment variables. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
```typescript
const getSession = (threadId: ThreadId): GlmSession => {
  const session = sessions.get(threadId);
  if (!session) {
    throw new ProviderAdapterSessionNotFoundError({
      provider: PROVIDER,
      threadId,
    });
  }
  return session;
};
```
🟡 Medium Layers/GlmAdapter.ts:361
getSession throws ProviderAdapterSessionNotFoundError synchronously inside Effect.sync() blocks, so the exception becomes a Cause.Die defect instead of a typed Cause.Fail. Methods like sendTurn, interruptTurn, respondToRequest, readThread, and rollbackThread therefore die unrecoverably rather than returning a typed ProviderAdapterError that callers can handle. Consider using Effect.fail(new ProviderAdapterSessionNotFoundError(...)) or Effect.try with catch to convert the throw into a typed failure.
```diff
- const getSession = (threadId: ThreadId): GlmSession => {
-   const session = sessions.get(threadId);
-   if (!session) {
-     throw new ProviderAdapterSessionNotFoundError({
-       provider: PROVIDER,
-       threadId,
-     });
-   }
-   return session;
- };
```
Evidence trail:
1. GlmAdapter.ts lines 361-370: `getSession` function throws `ProviderAdapterSessionNotFoundError` synchronously
2. GlmAdapter.ts line 740-741: `sendTurn` calls `getSession` inside `Effect.sync()`
3. GlmAdapter.ts line 788-789: `interruptTurn` calls `getSession` inside `Effect.sync()`
4. GlmAdapter.ts line 806-807: `respondToRequest` calls `getSession` inside `Effect.sync()`
5. GlmAdapter.ts line 858-859: `readThread` calls `getSession` inside `Effect.sync()`
6. GlmAdapter.ts line 867-868: `rollbackThread` calls `getSession` inside `Effect.sync()`
7. Effect-TS docs (https://effect-ts.github.io/effect/effect/Effect.ts.html): "The provided function (thunk) must not throw errors; if it does, the error will be treated as a 'defect'."
```typescript
async function* parseSseStream(
```
🟢 Low Layers/GlmAdapter.ts:307
In parseSseStream, when reader.read() returns done: true, the function breaks immediately without processing any remaining data in buffer. If the server sends a final data: line without a trailing newline before closing the connection, that chunk is lost and never yielded.
Evidence trail:
apps/server/src/provider/Layers/GlmAdapter.ts lines 306-337 at REVIEWED_COMMIT - specifically line 319 (`if (done) break;`), lines 323-324 (`const lines = buffer.split("\n"); buffer = lines.pop() ?? "";`), and the absence of any buffer processing after the while loop or in the finally block (line 336-337)
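A minimal sketch of the flush the comment asks for. The parser shape and names are illustrative rather than the adapter's actual code; the key change is processing any leftover `buffer` after the read loop ends, so a final `data:` line without a trailing newline is still yielded:

```typescript
// Illustrative SSE line parser: splits the byte stream on newlines,
// yields the payload of each "data:" line, and flushes the trailing
// buffer when the stream closes without a final newline.
async function* parseSseStream(
  stream: ReadableStream<Uint8Array>,
): AsyncGenerator<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      buffer += decoder.decode(value, { stream: true });
      const lines = buffer.split("\n");
      buffer = lines.pop() ?? ""; // keep the incomplete tail for next read
      for (const line of lines) {
        if (line.startsWith("data:")) yield line.slice(5).trim();
      }
    }
    // Flush: without this, a final "data:" chunk sent with no trailing
    // newline before the connection closes would be silently dropped.
    if (buffer.startsWith("data:")) yield buffer.slice(5).trim();
  } finally {
    reader.releaseLock();
  }
}
```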
getSession was throwing ProviderAdapterSessionNotFoundError inside Effect.sync blocks, which converts to a Cause.Die defect. Changed to return Effect.fail for a typed Cause.Fail that callers can handle. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Julius will add Claude soon, and I don’t think we should integrate this change until he releases his own Claude adapter, so you can build on top of it.
Summary
- Adds GLM models (`glm-4.7`, `glm-4.7-flash`, `glm-5`) via the OpenAI-compatible chat completions API
- Validates `GLM_API_KEY`/`ZAI_API_KEY` at startup, with a status banner in the UI

Files changed

- Contracts (`packages/contracts/`): `ProviderKind` extended from `"codex"` to `["codex", "glm"]`
- Shared (`packages/shared/`): `inferProviderFromModel()` utility and GLM model slug set
- Server (`apps/server/`): `GlmAdapter` service + layer (HTTP adapter with SSE streaming and 6 tools), wired into `ProviderAdapterRegistry` and `serverLayers`; `ProviderHealth`, `ProviderSessionDirectory`, and `ProviderCommandReactor` updated to recognize `"glm"`
- Web (`apps/web/`): settings, model picker, provider selector, and store updates

Setup
Set `GLM_API_KEY` (or `ZAI_API_KEY`) as an environment variable before starting the server:

```shell
export GLM_API_KEY=your-api-key
```

The API key can be obtained from z.ai.
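The `inferProviderFromModel()` utility mentioned in the changed files presumably maps the known GLM slugs to the `"glm"` provider kind. A hypothetical sketch of that mapping, assuming the slugs listed in this PR; falling back to `"codex"` for unknown models is an assumption, not something confirmed by the diff:

```typescript
type ProviderKind = "codex" | "glm";

// GLM model slug set, per the PR description.
const GLM_MODELS: ReadonlySet<string> = new Set([
  "glm-4.7",
  "glm-4.7-flash",
  "glm-5",
]);

// Route known GLM slugs to the "glm" provider; everything else is
// assumed to belong to the existing Codex provider.
function inferProviderFromModel(model: string): ProviderKind {
  return GLM_MODELS.has(model) ? "glm" : "codex";
}
```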
Test plan

- Set `GLM_API_KEY` and confirm the GLM health check shows "ready" in the UI
- Without `GLM_API_KEY`, confirm the health banner shows the missing key message

🤖 Generated with Claude Code
Note

Add GLM (Zhipu AI) as a provider alongside Codex

- … (`GLM_API_KEY`/`ZAI_API_KEY`), session directory, and orchestration layers, so GLM sessions are treated as first-class alongside Codex.
- `GlmIcon`, settings page for custom models, and health banner.

Macroscope summarized e6899af.