
feat(middleware): TodoListMiddleware should re-inject current todos into system prompt for deep/long-running agent compatibility #36624

@chetanreddyv

Description

Checked other resources

  • This is a feature request, not a bug report or usage question.
  • I added a clear and descriptive title that summarizes the feature request.
  • I used the GitHub search to find a similar feature request and didn't find it.
  • I checked the LangChain documentation and API reference to see if this feature already exists.
  • This is not related to the langchain-community package.

Package (Required)

  • langchain
  • langchain-openai
  • langchain-anthropic
  • langchain-classic
  • langchain-core
  • langchain-model-profiles
  • langchain-tests
  • langchain-text-splitters
  • langchain-chroma
  • langchain-deepseek
  • langchain-exa
  • langchain-fireworks
  • langchain-groq
  • langchain-huggingface
  • langchain-mistralai
  • langchain-nomic
  • langchain-ollama
  • langchain-openrouter
  • langchain-perplexity
  • langchain-qdrant
  • langchain-xai
  • Other / not sure / general

Feature Description

TodoListMiddleware should re-inject the current todo list into the system prompt on every model call, so the agent's plan survives message-history summarization.

Currently, wrap_model_call only appends static instructions; it never reads state["todos"]. This means the agent's live plan is invisible to the model unless it happens to still be in the message history.

Proposed addition: an inject_current_todos: bool = True parameter on TodoListMiddleware.__init__. When enabled, _build_system_content() reads state["todos"] and appends a <current_todos> block with status markers ([ ] pending, [~] in_progress, [x] completed) to the system prompt on every model call.

Use Case

This breaks silently when TodoListMiddleware is composed with SummarizationMiddleware in long-running / deep agent runs.

SummarizationMiddleware compacts message history when the context window fills up. Once the ToolMessage echo of the last write_todos call is trimmed into the summary, the model loses visibility of its current plan entirely, even though state["todos"] is still fully intact in checkpointed state.

The agent must then infer its plan from a lossy LLM-generated summary instead of reading structured state directly. This causes plan drift and repeated re-planning on long tasks, which is exactly the failure mode the todo list was designed to prevent.

This is directly relevant to LangChain's own deep agents pattern, where the TodoList tool is described as a cognitive no-op used for context engineering. That guarantee breaks under summarization without this fix.
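The interaction above can be shown with a toy, stdlib-only simulation. The dict-shaped messages and the trivial summarize() are stand-ins for illustration, not the real SummarizationMiddleware:

```python
# Toy illustration of the failure mode: summarization compacts the message
# history, but the checkpointed state still holds the structured plan.
state = {
    "messages": [
        {"role": "tool", "name": "write_todos", "content": "Updated todo list"},
        {"role": "ai", "content": "Working on the in-progress step..."},
    ],
    "todos": [{"content": "Step 2", "status": "in_progress"}],
}

def summarize(messages):
    # Stand-in for SummarizationMiddleware collapsing old messages.
    return [{"role": "ai", "content": "Summary: the agent did some planning."}]

state["messages"] = summarize(state["messages"])

# The write_todos echo is gone from history...
assert not any(m.get("name") == "write_todos" for m in state["messages"])
# ...but the plan survives in state["todos"], which is exactly where the
# proposed injection would read it from on the next model call.
assert state["todos"] == [{"content": "Step 2", "status": "in_progress"}]
```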

Proposed Solution

Add to TodoListMiddleware.__init__:
inject_current_todos: bool = True

Extract a _build_system_content(request) helper that:

  1. Takes the existing system message content blocks as base
  2. Appends the static system_prompt instructions (existing behavior)
  3. If inject_current_todos=True and state["todos"] is non-empty, appends:

<current_todos>
[x] Completed task
[~] In-progress task
[ ] Pending task
</current_todos>

Both wrap_model_call and awrap_model_call delegate to this shared helper, removing the duplicated if/else logic that currently exists in both methods.
The change is fully non-breaking: inject_current_todos=False restores exactly the current behavior.

Alternatives Considered

Summarization-aware trimming: configure SummarizationMiddleware to never trim ToolMessages produced by write_todos. Rejected as brittle: it requires users to know about the interaction, and it doesn't help if history is cleared for other reasons.

Additional Context

I have already implemented this locally:

  • _render_todos_block() helper function
  • inject_current_todos flag with opt-out default
  • _build_system_content() refactor eliminating duplicated logic in sync/async paths
  • 3 new unit tests: injection present, empty-list no-op, flag disabled

Branch is committed and tests pass. Happy to open the PR immediately if this is approved and I am assigned.

Metadata

Labels

external, feature request, langchain
