Conversation

@ccurme (Contributor) commented on Dec 16, 2025

No description provided.

@ccurme requested a review from lnhsingh as a code owner (December 16, 2025 21:20)
@github-actions bot added the `langchain` (For docs changes to LangChain), `oss`, and `internal` labels (Dec 16, 2025)
@github-actions bot commented:
Mintlify preview ID generated: preview-ccstre-1765920086-1fd0a8a

1. Partial JSON as [tool calls](/oss/langchain/models#tool-calling) are generated
2. The completed, parsed tool calls that are executed

To do this, apply both [`"messages"`](#llm-tokens) and [`"updates"`](#agent-progress) streaming modes. The `"messages"` streaming mode will include [message chunks](/oss/langchain/messages#streaming-and-chunks) from all LLM calls in the agent. The `"updates"` mode will include completed messages with tool calls before they are routed to tools for execution.
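For illustration, a minimal sketch of what combining the two modes could look like (the `agent`, tool, and input here are assumptions for this thread, not part of the PR):

```python
# Minimal sketch, assuming an `agent` built with create_agent and a weather tool.
for stream_mode, data in agent.stream(
    {"messages": [{"role": "user", "content": "What's the weather in Boston?"}]},
    stream_mode=["messages", "updates"],
):
    if stream_mode == "messages":
        token, metadata = data  # an AIMessageChunk plus metadata for each streamed token
        if getattr(token, "tool_call_chunks", None):
            print(token.tool_call_chunks)  # partial JSON as tool calls are generated
    elif stream_mode == "updates":
        for source, update in data.items():  # keyed by the node that produced the update
            for message in (update or {}).get("messages", []):
                if getattr(message, "tool_calls", None):
                    print(message.tool_calls)  # completed, parsed tool calls
```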
Collaborator commented:
Could we avoid relying on `updates` for this information and instead show how to aggregate the tool-calling message from the streamed chunks? There's no guarantee that the completed message comes from the same source.
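For illustration, a sketch of that suggestion (aggregating chunks with `+` is the standard message-chunk pattern; the `agent` and input are the same assumptions as above):

```python
from langchain_core.messages import AIMessageChunk

# Sketch: accumulate chunks from the "messages" stream and read parsed tool calls
# off the aggregated message, instead of waiting for an "updates" event.
# For agents that make several model calls, you would group chunks per run
# (e.g. using the streamed metadata) rather than into a single aggregate.
aggregate = None
for token, metadata in agent.stream(
    {"messages": [{"role": "user", "content": "What's the weather in Boston?"}]},
    stream_mode="messages",
):
    if isinstance(token, AIMessageChunk):
        aggregate = token if aggregate is None else aggregate + token

if aggregate is not None and aggregate.tool_calls:
    print(aggregate.tool_calls)  # completed, parsed tool calls
```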

```python
def _process_completed_message(source: str, message: AnyMessage) -> None:
    if source == "model" and isinstance(message, AIMessage) and message.tool_calls:
```
Collaborator commented:
I don't think readers will know what `source="model"` is; this probably needs a bit more context.

To handle human-in-the-loop [interrupts](/oss/langchain/human-in-the-loop), we build on the [above example](#streaming-tool-calls):
1. We configure the agent with [human-in-the-loop middleware and a checkpointer](/oss/langchain/human-in-the-loop#configuring-interrupts)
2. We collect interrupts generated during the `"updates"` stream mode (see the sketch after this list)
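A sketch of that collection step under assumed shapes (the `"__interrupt__"` key, `thread_id`, and input are assumptions for illustration):

```python
# Sketch: collect interrupts surfaced in the "updates" stream.
config = {"configurable": {"thread_id": "1"}}  # required once a checkpointer is configured

interrupts = []
for stream_mode, data in agent.stream(
    {"messages": [{"role": "user", "content": "What's the weather in Boston?"}]},
    config,
    stream_mode=["messages", "updates"],
):
    if stream_mode == "updates" and "__interrupt__" in data:
        interrupts.extend(data["__interrupt__"])
```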
Collaborator commented:
Do interrupts surface if we use only the `"messages"` stream mode?

```python
def get_weather(city: str) -> str:
    """Get weather for a given city."""
```
Collaborator commented:
Empty line

```python
)
def _process_message_chunk(token: AIMessage) -> None:
```
Collaborator commented:
Are we able to remove the `process*` helpers? They act like sinks into `print`, but usually users won't want a sink; they'll want to rewrite this into a different generator.
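Something like this, perhaps (an illustrative sketch with assumed names, not the PR's code):

```python
from typing import Any, Iterator

def stream_agent_events(agent, inputs: dict) -> Iterator[tuple[str, Any]]:
    """Re-yield streamed tokens and completed tool calls so callers decide what to do with them."""
    for stream_mode, data in agent.stream(inputs, stream_mode=["messages", "updates"]):
        if stream_mode == "messages":
            token, _metadata = data
            yield ("token", token)  # partial AIMessageChunk
        elif stream_mode == "updates":
            for source, update in data.items():
                for message in (update or {}).get("messages", []):
                    if getattr(message, "tool_calls", None):
                        yield ("tool_calls", message.tool_calls)  # completed tool calls

# Callers then consume the generator however they like:
# for kind, payload in stream_agent_events(agent, {"messages": [...]}):
#     ...
```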

```python
interrupts = []
for stream_mode, data in agent.stream(
    Command(resume={"decisions": decisions}),
```
Collaborator commented:
We need to show how to resume using interrupt IDs, i.e. so user code supports responding to multiple interrupts in parallel (e.g. two agents running in parallel each using HIL)
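A sketch of what that could look like (the interrupt ID attribute and resume payload shape are assumptions here; the point is keying `Command(resume=...)` by interrupt ID):

```python
from langgraph.types import Command

# Sketch (assumed attribute names): key the resume payload by interrupt ID so that
# several pending interrupts (e.g. two agents running in parallel) can be answered at once.
resume_map = {
    intr.id: {"decisions": decisions}  # the interrupt ID attribute may differ by version
    for intr in interrupts
}

for stream_mode, data in agent.stream(
    Command(resume=resume_map),
    config,
    stream_mode=["messages", "updates"],
):
    ...
```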
