Two related problems occur when using the Cursor provider (e.g. with gpt-5.3-codex-fast); the same model via the OpenAI provider works correctly in both cases.
1. Stream never ends after tool execution (timeout)
What happens: After the agent runs a tool (e.g. ls, run_terminal_cmd), the Cursor API stream is not closed by the server. The client keeps waiting for the end event on the HTTP/2 stream and the UI shows "Working…" indefinitely (minutes).
Flow: One HTTP/2 stream is used for the whole turn: we send runRequest (user message), server streams interactionUpdate (model reply, tool call), then execServerMessage (e.g. shellStreamArgs). We run the tool locally and send the result back on the same stream (e.g. shellStream events: start, stdout, stderr, exit). We then wait for the server to send more data and eventually close the stream (end). Bug: The server never sends the continuation (or never closes the stream) after receiving our tool result.
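The turn flow above can be sketched as a small event reducer. The event kinds mirror the names observed on the wire (interactionUpdate, execServerMessage, shellStream, end), but the TypeScript shapes below are illustrative assumptions, not OMP's actual types:

```typescript
// Hypothetical event shapes for illustration only.
type ServerEvent =
  | { kind: "interactionUpdate"; text: string }
  | { kind: "execServerMessage"; shellStreamArgs: { cmd: string } }
  | { kind: "end" };

type TurnState = "streaming" | "awaitingContinuation" | "done";

// Reduce the server's events into the client's per-turn state.
// The bug: after the client runs the tool and replies with shellStream
// events (start/stdout/stderr/exit) on the same HTTP/2 stream, the
// server never emits "end", so the state sticks at "awaitingContinuation".
function reduceTurn(events: ServerEvent[]): TurnState {
  let state: TurnState = "streaming";
  for (const ev of events) {
    if (ev.kind === "execServerMessage") {
      // Client executes the tool locally and sends the result back here.
      state = "awaitingContinuation";
    } else if (ev.kind === "end") {
      state = "done";
    }
  }
  return state;
}
```

With the OpenAI provider the event list ends with `end` and the turn reaches `done`; with the Cursor provider it never does.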
Workaround (local): Add a client-side timeout (e.g. 90s) and, if no end event is received, close the stream and treat the turn as an error.
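The workaround can be sketched as a generic deadline wrapper that races the turn's completion against a timer; the function name and signature below are hypothetical, not OMP's actual API:

```typescript
// Minimal sketch of the client-side workaround: race the promise that
// resolves on the stream's "end" event against a deadline (~90s in
// practice; shorter values shown in tests). On timeout, invoke a
// callback so the caller can close the hung HTTP/2 stream.
function withStreamTimeout<T>(
  turn: Promise<T>,
  ms: number,
  onTimeout: () => void, // e.g. close the HTTP/2 stream
): Promise<T> {
  let timer: ReturnType<typeof setTimeout>;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => {
      onTimeout();
      reject(new Error(`no end event within ${ms}ms; treating turn as failed`));
    }, ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([turn, deadline]).finally(() => clearTimeout(timer!));
}
```

A healthy turn resolves before the deadline and the timer is cleared; a hung Cursor turn rejects after `ms` milliseconds instead of spinning on "Working…" forever.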
Expected: Server should send the model's follow-up response after tool result and then close the stream.
2. Agent loses conversation context (single-message memory)
What happens: When using the Cursor provider, the agent effectively only has context for the current message. It does not retain conversation history across turns: it "forgets" previous messages and tool results.
Contrast: With the same model (e.g. GPT-5.3 Codex) via the OpenAI provider, the agent correctly keeps full context and uses past messages and tool results.
Possible causes: The Cursor API may not be returning or accepting conversation state in a way that preserves full history; each turn might be sent as a fresh run with incomplete state; or this could be related to (1), if conversation state is only committed when the stream ends.
Steps to Reproduce
- Launch OMP (oh-my-pi).
- Authorize via Cursor (log in / connect Cursor provider).
- Authorize via OpenAI (log in / connect OpenAI Codex provider).
- Select the same model (e.g. gpt-5.3-codex-fast) with the Cursor provider.
- Send any message that triggers a tool (e.g. “run ls” or “list files in the current directory”).
- Observe: The request hangs — the stream never completes, UI shows “Working…” indefinitely.
- Switch to the OpenAI provider, keeping the same model.
- Send the same or a similar message that triggers a tool.
- Observe: The command runs normally, the stream completes, and conversation context is preserved across turns.
Expected Behavior
When using the Cursor provider with the same model, behavior should match the OpenAI provider:
- Stream completion: After the client sends tool results (e.g. shell stdout/stderr, exit code) back to the Cursor API, the server should send the model’s follow-up response (if any) and then close the HTTP/2 stream. The client should receive the end event and finish the turn without hanging.
- Conversation context: The agent should retain full conversation history across turns: previous user messages, assistant replies, and tool inputs/outputs. Later messages (e.g. “what was the output of the previous command?” or “run the same command in the parent directory”) should be answered using that history, as with the OpenAI provider.
Error Output
Platform
macOS
omp version
12.17.0
Bun version
1.3.8
Provider
Cursor
Area
Tool execution
Additional context
No response