
fix(openai): tolerate prompt_cache_retention drift in streaming#36925

Merged
ccurme merged 1 commit into master from mdrxy/openai-patch on Apr 21, 2026
Conversation

Mason Daugherty (mdrxy), Member, commented on Apr 21, 2026:

OpenAI's Responses API is emitting prompt_cache_retention: "in_memory" on streaming events, but the openai SDK's Response pydantic model declares the field's Literal as only "in-memory" / "24h" (openai-python#2883). As a result, Response.model_validate() raises inside _coerce_chunk_response, aborting the stream and forcing downstream fallback chains. This PR normalizes the known mismatch and adds an escape hatch for future drift.

Possibly related to #36899
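A minimal sketch of the approach described above. The names (KNOWN_DRIFTS, normalize_payload, coerce) and the two-step retry are assumptions for illustration, not the PR's actual implementation; only the "in_memory" to "in-memory" mapping comes from the description:

```python
from typing import Any, Callable

# Assumed table of known value drifts: API-emitted value -> SDK Literal value.
KNOWN_DRIFTS: dict[str, dict[str, str]] = {
    # Responses API emits "in_memory"; the SDK Literal expects "in-memory".
    "prompt_cache_retention": {"in_memory": "in-memory"},
}


def normalize_payload(payload: dict[str, Any]) -> dict[str, Any]:
    """Return a copy of payload with known drifted values rewritten."""
    out = dict(payload)
    for field, mapping in KNOWN_DRIFTS.items():
        value = out.get(field)
        if isinstance(value, str) and value in mapping:
            out[field] = mapping[value]
    return out


def coerce(payload: dict[str, Any], validate: Callable[[dict[str, Any]], Any]) -> Any:
    """Escape hatch: validate as-is first, and only on failure retry with
    the normalized payload, so future drifts surface the original error."""
    try:
        return validate(payload)
    except Exception:  # e.g. pydantic ValidationError on an unexpected Literal
        return validate(normalize_payload(payload))


chunk = {"id": "resp_abc", "prompt_cache_retention": "in_memory"}
print(normalize_payload(chunk)["prompt_cache_retention"])  # in-memory
```

Rewriting a copy (rather than mutating the incoming event dict) keeps the raw payload intact for logging, and validating the untouched payload first means spec-conformant responses never pay the normalization cost.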

github-actions (bot) added labels on Apr 21, 2026: fix (for PRs that implement a fix), integration (PR related to a provider partner package integration), internal, openai (langchain-openai package issues & PRs), size: S (50-199 LOC)
ccurme merged commit 488c6a7 into master on Apr 21, 2026 (93 checks passed), then deleted the mdrxy/openai-patch branch on April 21, 2026 at 18:54.


2 participants