38 changes: 38 additions & 0 deletions src/oss/deepagents/context-engineering.mdx

@@ -348,6 +348,44 @@ This dual approach ensures the agent maintains awareness of its goals and progress
- If any model call raises a standard @[ContextOverflowError], Deep Agents immediately falls back to summarization and retries with the summary plus the most recent preserved messages
- Older messages are summarized by the model
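
The fallback described above can be sketched roughly as follows. This is a simplified illustration, not the actual Deep Agents internals: the helper name `call_with_fallback` is hypothetical, and `ContextOverflowError` is stubbed here as a stand-in for the real exception:

```python
class ContextOverflowError(Exception):
    """Stand-in for the real exception raised on context overflow."""

def call_with_fallback(model_call, messages, summarize, keep_recent=4):
    # First attempt: send the full message history to the model.
    try:
        return model_call(messages)
    except ContextOverflowError:
        # Fallback: summarize older messages, keep the most recent
        # ones verbatim, then retry with summary + preserved messages.
        older, recent = messages[:-keep_recent], messages[-keep_recent:]
        summary = summarize(older)
        return model_call([summary, *recent])
```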

:::python
<Tip>
[Streaming tokens](/oss/deepagents/streaming#llm-tokens) from the agent will generally include tokens generated by the summarization step. You can filter out these tokens using their associated metadata:
```python
for token, metadata in agent.stream(
    {"messages": [...]},
    stream_mode="messages",
):
    if metadata.get("lc_source") == "summarization": # [!code highlight]
        continue
    ...  # process tokens from the main agent
```
</Tip>
:::

:::js
<Tip>
[Streaming tokens](/oss/deepagents/streaming#llm-tokens) from the agent will generally include tokens generated by the summarization step. You can filter out these tokens using their associated metadata:
```typescript
for await (const [message, metadata] of await agent.stream(
  { messages: [...] },
  { streamMode: "messages" },
)) {
  if (metadata?.lcSource === "summarization") { // [!code highlight]
    continue;
  }
  // ... process tokens from the main agent
}
```
</Tip>
:::

:::python

##### Summarization Tool