Checked other resources
Package (Required)
Feature Description
TodoListMiddleware should re-inject the current todo list into the system prompt on every model call for immunity against summarization.
Currently wrap_model_call only appends static instructions; it never reads state["todos"]. This means the agent's live plan is invisible to the model unless it happens to still be in the message history.
Proposed addition: an inject_current_todos: bool = True parameter on TodoListMiddleware.__init__. When enabled, _build_system_content() reads state["todos"] and appends a <current_todos> block with status markers ([ ] pending, [~] in_progress, [x] completed) to the system prompt on every model call.
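As a concrete illustration of the proposed markers, a minimal rendering helper might look like the sketch below. The helper name and the todo dict shape ("content" / "status" keys) are assumptions for illustration, not the actual middleware state schema:

```python
# Hypothetical rendering helper for the proposed <current_todos> block.
# Assumes each todo is a dict with "content" and "status" keys.
STATUS_MARKERS = {"pending": "[ ]", "in_progress": "[~]", "completed": "[x]"}

def render_todos_block(todos: list[dict]) -> str:
    """Render a todo list as a <current_todos> block, or "" if empty."""
    if not todos:
        return ""
    lines = [f"{STATUS_MARKERS[t['status']]} {t['content']}" for t in todos]
    return "<current_todos>\n" + "\n".join(lines) + "\n</current_todos>"
```

Because the block is rebuilt from state on every call, it survives any amount of message-history compaction.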
Use Case
This breaks silently when TodoListMiddleware is composed with SummarizationMiddleware in long-running / deep agent runs.
SummarizationMiddleware compacts message history when the context window fills up. The ToolMessage echo of the last write_todos call is eligible for summarization; once it is trimmed, the model loses visibility of its current plan entirely, even though state["todos"] is still fully intact in checkpointed state.
The agent must then infer its plan from a lossy LLM-generated summary instead of reading structured state directly. This causes plan drift and repeated re-planning on long tasks: exactly the failure mode the todo list was designed to prevent.
This is directly relevant to LangChain's own deep agents pattern, where the TodoList tool is described as a cognitive no-op for context engineering. Without this fix, that guarantee breaks under summarization.
Proposed Solution
Add to TodoListMiddleware.__init__:
inject_current_todos: bool = True
Extract a _build_system_content(request) helper that:
- Takes the existing system message content blocks as base
- Appends the static system_prompt instructions (existing behavior)
- If inject_current_todos=True and state["todos"] is non-empty, appends:
<current_todos>
[x] Completed task
[~] In-progress task
[ ] Pending task
</current_todos>
Both wrap_model_call and awrap_model_call delegate to this shared helper, removing the duplicated if/else logic that currently exists in both methods.
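A rough sketch of what the shared helper could look like. The name, signature, and flat-string content here are simplifications (the real middleware works with system message content blocks, and the request object shape differs), so treat this as a shape, not the implementation:

```python
# Simplified sketch of the proposed _build_system_content refactor.
# Both sync and async wrappers would call this with state-derived args.
MARKERS = {"pending": "[ ]", "in_progress": "[~]", "completed": "[x]"}

def build_system_content(base: str, instructions: str, todos: list[dict],
                         inject_current_todos: bool = True) -> str:
    """Combine base system content, static instructions, and an optional
    <current_todos> block rendered fresh from state on every call."""
    parts = [base, instructions]
    if inject_current_todos and todos:
        body = "\n".join(f"{MARKERS[t['status']]} {t['content']}" for t in todos)
        parts.append(f"<current_todos>\n{body}\n</current_todos>")
    return "\n\n".join(p for p in parts if p)
```

With inject_current_todos=False, or an empty todo list, the output is identical to the current static-instructions behavior.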
The change is fully non-breaking: inject_current_todos=False restores exactly the current behavior.
Alternatives Considered
Summarization-aware trimming: configure SummarizationMiddleware to never trim ToolMessages from write_todos. Rejected as brittle; it requires users to know about the interaction, and it doesn't help if history is cleared for other reasons.
Additional Context
I have already implemented this locally:
- _render_todos_block() helper function
- inject_current_todos flag with opt-out default
- _build_system_content() refactor eliminating duplicated logic in sync/async paths
- 3 new unit tests: injection present, empty-list no-op, flag disabled
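The three tests could take roughly this shape (pytest style; the render stand-in below is a self-contained stub standing in for the middleware's prompt-building path, not the actual implementation):

```python
# Stub standing in for the middleware's todo-injection path.
def render(todos: list[dict], inject: bool = True) -> str:
    if not inject or not todos:
        return ""
    m = {"pending": "[ ]", "in_progress": "[~]", "completed": "[x]"}
    body = "\n".join(f"{m[t['status']]} {t['content']}" for t in todos)
    return f"<current_todos>\n{body}\n</current_todos>"

def test_injection_present():
    # Todos in state show up in the system content with status markers.
    out = render([{"content": "write docs", "status": "pending"}])
    assert "<current_todos>" in out and "[ ] write docs" in out

def test_empty_list_noop():
    # An empty todo list must add nothing to the prompt.
    assert render([]) == ""

def test_flag_disabled():
    # inject_current_todos=False restores the current behavior exactly.
    assert render([{"content": "x", "status": "pending"}], inject=False) == ""
```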
Branch is committed and tests pass. Happy to open the PR immediately if this is approved and I am assigned.