fix: prevent TransferEncodingError in Gemini SSE stream#2322

Open
Stranmor wants to merge 1 commit into lbjlaq:main from Stranmor:fix/gemini-sse-transfer-encoding-error

Conversation

@Stranmor
Contributor

Problem

When upstream returns an error during Gemini SSE streaming, the handler yields Err(String) into Body::from_stream(). Axum/hyper interprets this as a transport-level failure and aborts the HTTP response with a TransferEncoding error instead of delivering the error as a valid SSE data: frame.

Impact: HTTP clients (aiohttp, httpx, curl) see a broken chunked transfer instead of a parseable error event. This is the root cause of TransferEncodingError / ClientPayloadError reports.

Fix

Convert yield Err(...) to yield Ok(Bytes::from(...)) with a properly formatted SSE error event (data: {"error": ...}\n\n). This ensures error information reaches the client as valid SSE data rather than corrupting the HTTP transport layer.

Files changed:

  • src-tauri/src/proxy/handlers/gemini.rs — primary fix (stream error yield)
  • src-tauri/src/proxy/mappers/claude/mod.rs — defense-in-depth (same pattern)
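The pattern can be sketched as a small helper (a stdlib-only sketch, not the PR's actual code: the real handler yields these bytes from an async stream and uses serde_json::json!() for escaping, and the exact error-object shape here is an assumption):

```rust
// Sketch of the fix: instead of yielding Err(String) into the body stream
// (which hyper treats as a transport failure and aborts chunked encoding),
// format the error as a valid SSE `data:` frame and yield it as Ok bytes.
// Escaping is hand-rolled here to keep the sketch dependency-free; the
// actual PR uses serde_json::json!() for this.
fn sse_error_event(message: &str) -> String {
    // Minimal JSON string escaping: backslashes first, then quotes/newlines.
    let escaped = message
        .replace('\\', "\\\\")
        .replace('"', "\\\"")
        .replace('\n', "\\n");
    // A well-formed SSE event is terminated by a blank line (double newline).
    format!("data: {{\"error\": {{\"message\": \"{}\"}}}}\n\n", escaped)
}

fn main() {
    let frame = sse_error_event("upstream timeout");
    // The client receives a parseable SSE data frame instead of a broken
    // chunked response.
    println!("{frame}");
}
```

Yielding this string as Ok(Bytes::from(...)) lets hyper finish the chunked body normally, including the final 0\r\n\r\n terminator.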

Verification

  • OpenAI handler already uses the correct Ok(Bytes) pattern — this PR aligns Gemini and Claude mappers to the same contract
  • Minimal diff: 2 files, 16 insertions, 2 deletions — no formatting or unrelated changes

When SSE streams encounter errors (timeout, upstream connection drop),
the Gemini handler uses yield Err(...) which causes hyper to abort
HTTP/1.1 chunked transfer encoding without sending the final 0\r\n\r\n
terminator. HTTP clients that validate chunk boundaries (aiohttp,
httpx) then throw TransferEncodingError.

Fix: convert stream errors to Ok(Bytes) SSE content (matching the
pattern already used by the OpenAI handler), so the stream terminates
cleanly and hyper sends the proper chunked encoding terminator.

Affected path: Gemini streaming handler
Already fixed: OpenAI streaming handler (errors wrapped as Ok)
Partially mitigated: Claude handler (.map wrapper converts Err→Ok)
Stranmor force-pushed the fix/gemini-sse-transfer-encoding-error branch from 2759a95 to bca2492 on March 13, 2026 at 16:59
@Stranmor
Contributor Author

Closing as duplicate of #2321 which covers both Gemini and Claude SSE streams.

Stranmor closed this Mar 14, 2026
@Stranmor
Contributor Author

Reopening — this PR uses serde_json::json!() for proper JSON escaping, which is safer than the manual format!() + replace() approach in #2321.
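A stdlib-only illustration of why manual escaping is fragile (the exact replace() calls in #2321 are not shown here, so this is an assumed failure mode): escaping quotes without first escaping backslashes lets a trailing backslash in the error message escape the closing quote of the JSON string.

```rust
// Hypothetical comparison: quote-only escaping vs. escaping backslashes
// first. With quote-only escaping, an input ending in a backslash yields
// a JSON string whose closing quote is escaped away, i.e. invalid JSON.
fn naive_escape(s: &str) -> String {
    s.replace('"', "\\\"") // quotes only; backslashes pass through untouched
}

fn full_escape(s: &str) -> String {
    s.replace('\\', "\\\\").replace('"', "\\\"") // backslashes first
}

fn main() {
    let msg = "connect failed: C:\\"; // message ends in a backslash
    // naive: {"error": "connect failed: C:\"}   <- backslash eats the quote
    println!("{{\"error\": \"{}\"}}", naive_escape(msg));
    // full:  {"error": "connect failed: C:\\"}  <- valid JSON
    println!("{{\"error\": \"{}\"}}", full_escape(msg));
}
```

serde_json::json!() handles all of this (plus control characters and non-ASCII) automatically, which is the argument for preferring it over hand-rolled replace() chains.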

@adam2626

When trying to change accounts in the accounts section, I get a "storage json not found" error.

@adam2626

adam2626 commented Mar 25, 2026 via email

@Stranmor
Contributor Author

Stranmor commented Mar 25, 2026

Hey @adam2626!

I have opened a dedicated issue to track and fix the storage JSON bug we discussed: #2371

And yes, suggestions for fixes and improvements are always welcome! Feel free to open issues with anything you notice or would like to improve. GitHub is the preferred channel for this, as it keeps everything public and searchable.

@Stranmor
Contributor Author

Hi @adam2626, I owe you an apology — my previous analysis was wrong. You were absolutely right, this PR IS the indirect cause of the "storage json not found" error.

Here's what actually happens:

  1. This PR changes the error format to a clean JSON object instead of a broken stream.
  2. The IDE parser expects a standard OpenAI chunk (choices[0].delta). When it receives our custom error JSON, it throws a JavaScript TypeError and crashes/freezes the chat UI.
  3. Because the UI is frozen, you close the IDE.
  4. When you then click 'Switch Account' in Antigravity-Manager, it tries to find storage.json by scanning the running IDE process. Since the IDE is now closed, the process discovery fails, resulting in the "storage json not found" error.

Thank you for catching this! The fix is to wrap the error message inside a structurally valid OpenAI SSE chunk so the IDE doesn't crash. I'll work on a patch for this.
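The planned follow-up could look roughly like this (a sketch under assumptions: the chunk layout mirrors the common OpenAI streaming shape the IDE parser reads via choices[0].delta, but the id/object values and the "[proxy error]" prefix are placeholders, and escaping is hand-rolled to keep the sketch stdlib-only):

```rust
// Sketch: wrap the error text in a chunk shaped like a standard OpenAI
// streaming delta, so a parser that indexes choices[0].delta.content gets
// a string it can render instead of an unfamiliar error object that makes
// it throw a TypeError.
fn openai_style_error_chunk(message: &str) -> String {
    let escaped = message
        .replace('\\', "\\\\")
        .replace('"', "\\\"")
        .replace('\n', "\\n");
    let mut out =
        String::from("data: {\"id\":\"error\",\"object\":\"chat.completion.chunk\",");
    out.push_str("\"choices\":[{\"index\":0,\"delta\":{\"content\":\"[proxy error] ");
    out.push_str(&escaped);
    // finish_reason "stop" signals the stream is over, so the UI settles
    // instead of waiting for more deltas.
    out.push_str("\"},\"finish_reason\":\"stop\"}]}\n\n");
    out
}

fn main() {
    println!("{}", openai_style_error_chunk("upstream timeout"));
}
```

The key property is that the error still travels as a valid SSE data frame (the fix in this PR) while also being a shape the IDE's chunk parser already understands.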

@adam2626

adam2626 commented Mar 25, 2026 via email

@adam2626

adam2626 commented Mar 25, 2026 via email

@Stranmor
Contributor Author

Hey @adam2626, great question — totally fair concern.

Here's how it works:

Your actual AI provider API keys (Gemini, Claude, OpenAI, etc.) are never sent to the Antigravity server. The server doesn't want them, doesn't store them, and has no use for them.

Instead, the local proxy authenticates with the Antigravity API using its own Antigravity Proxy Token (the sk-ag-... key you see in the config). This is a separate credential that has nothing to do with any AI provider's API key.

When the proxy forwards your requests to AI providers, it uses the server's own internal keys — not yours. If you accidentally put your own provider API key in the IDE config, the proxy will actually strip it out before forwarding the request.

The whole point of this architecture is that you don't need your own API keys for any AI provider. Access is managed through a quota system tied to your Antigravity account instead.

So in short: your provider keys stay local (and aren't even needed), and the proxy token is the only credential that talks to the Antigravity server.

Hope that clears things up! Let me know if you have any other questions.
