Environment
OpenClaude: 0.9.2
Provider: Codex
Model: GPT-5.5
Summary
OpenClaude should automatically attempt recovery when a GPT-5.5 request fails because the context window is exceeded by a tool result.
Instead of stopping the current task, OpenClaude should either compact the context before failure or retry once after failure with reduced context.
Problem
When using OpenClaude with GPT-5.5, the current task can stop unexpectedly after a tool search/read operation.
The UI shows exactly:
Searched for 1 pattern, read 1 file (ctrl+o to expand)
⎿ API Error: 500 Your input exceeds the context window of this model. Please adjust your input and try again.
After this happens, the session itself is still open, but the current task flow stops. Auto-compact does not appear to recover automatically, so the user has to manually recover by entering another prompt or trying /compact.
Proposed Direction
There are two possible recovery paths.
1. Automatic compaction before failure
OpenClaude should try to prevent the context-window error before it happens.
Expected behavior:
- Detect when the next model request is likely to exceed the effective context window.
- Run auto-compact before sending that request.
- Continue the current task with the compacted context.
This would prevent failures in cases where the accumulated context is large but can still be reduced before the request is sent.
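The pre-flight check described above could be sketched roughly as follows. All names here (estimate_tokens, compact, the limit constants) are illustrative assumptions, not OpenClaude's actual internals:

```python
CONTEXT_LIMIT = 200_000  # assumed effective window for the model
SAFETY_MARGIN = 0.9      # compact before hitting the hard limit

def estimate_tokens(messages):
    # Rough heuristic: ~4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def compact(messages):
    # Placeholder for the real auto-compact: keep the system prompt
    # and the most recent messages, dropping older history.
    return [messages[0]] + messages[-4:]

def prepare_request(messages):
    # Run auto-compact before sending, if the request would likely
    # exceed the effective context window.
    if estimate_tokens(messages) > CONTEXT_LIMIT * SAFETY_MARGIN:
        messages = compact(messages)
    return messages
```

The key point is only the ordering: the size check and compaction happen before the request is sent, so the provider never sees an oversized payload.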
2. Retry after context-window failure
Even if the existing auto-compact algorithm is sufficient for normal accumulated context, it may not be enough when a single tool command suddenly produces too much output.
In that case, the important missing behavior is not necessarily a better compaction algorithm, but an automatic retry path after the failure.
Expected behavior:
- When a request fails with context window exceeded, record the failure in the working context.
- Preserve enough information for the model to understand what happened:
  - failure reason: context window exceeded
  - previous action
  - related tool result
  - searched pattern
  - matched file path
  - relevant line numbers, if available
- Reduce the context by removing, shrinking, or summarizing the oversized entry.
- Retry once.
- Let the model continue from the failure information instead of stopping the task.
This would allow a capable model to adapt after the failed attempt, for example by continuing with less information, using a smaller scope, or avoiding the same large-output path.
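A minimal sketch of that retry-once path, assuming a hypothetical exception type, message shape, and truncation helper (none of this is OpenClaude's real API):

```python
class ContextWindowExceeded(Exception):
    """Assumed error raised when the provider rejects an oversized request."""

def shrink_oversized(messages, max_len=2_000):
    # Truncate any tool result larger than max_len characters,
    # leaving a note about what was removed.
    out = []
    for m in messages:
        if m.get("role") == "tool" and len(m["content"]) > max_len:
            out.append({**m, "content": m["content"][:max_len]
                        + "\n[tool output truncated after context-window error]"})
        else:
            out.append(m)
    return out

def send_with_recovery(send, messages):
    try:
        return send(messages)
    except ContextWindowExceeded:
        # Record the failure so the model can adapt on the retry.
        failure_note = {
            "role": "system",
            "content": ("Previous request failed: context window exceeded. "
                        "An oversized tool result above was truncated."),
        }
        reduced = shrink_oversized(messages) + [failure_note]
        # Retry exactly once; a second failure propagates to the user.
        return send(reduced)
```

Because the failure note stays in the context, the retried model call can choose a narrower search or a different approach rather than repeating the same large-output operation.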
Alternatives Considered
Today, the user has to manually recover after the task stops.
Common workarounds include:
- entering another prompt to continue the task
- manually summarizing the previous state and failure reason
- manually trying /compact
- clearing the context and starting again if /compact fails or is not enough
These workarounds are possible, but they are inconvenient and interrupt the coding workflow.
In particular, when the context window has already been exceeded, /compact itself may fail or may not recover enough context. If OpenClaude automatically retried after compacting or reducing context, the user would not need to manually re-enter the same recovery instructions.
Additional Context
It is not yet clear whether this is specifically an auto-compact problem.
After the error occurred, manually running /compact immediately seemed to recover the workflow without further issues. Because of that, the exact failure mode is somewhat ambiguous.