Replies: 3 comments 3 replies
-
A follow-up question to this: how does the new functional API play into this? Could it already surface multiple interrupts at once today?
-
Following this - I'm curious how your UI would actually handle this. Is your proposal that all interrupts are surfaced at once and the user sends back a single response addressing all of them? Or that all interrupts at a single step are captured at once, surfaced to the user individually, the user responds to each, and then all responses are passed back in a single Command input to resume the graph?
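To make the second flow concrete, here's a minimal plain-Python toy model (not LangGraph's actual API - the `Interrupt` and `Step` classes are hypothetical stand-ins) of "collect all interrupts at one step, surface them individually, resume with one combined payload":

```python
from dataclasses import dataclass, field


@dataclass
class Interrupt:
    """A pending question raised by one node (hypothetical stand-in)."""
    node: str
    prompt: str


@dataclass
class Step:
    """One superstep: runs all ready nodes, collecting every interrupt raised."""
    pending: list = field(default_factory=list)

    def run(self, nodes):
        # Each node either completes or raises a question; none blocks the others.
        for name, needs_input in nodes:
            if needs_input:
                self.pending.append(Interrupt(name, f"{name} needs human input"))
        return self.pending

    def resume(self, answers):
        # One resume payload maps every interrupted node to its answer,
        # mirroring the "single combined response" option described above.
        assert set(answers) == {i.node for i in self.pending}, "answer every interrupt"
        return {i.node: answers[i.node] for i in self.pending}


step = Step()
interrupts = step.run([("branch_1_a", True), ("branch_2_a", True), ("branch_3_a", False)])
# Surface each interrupt to the user individually, then send one combined response:
resumed = step.resume({"branch_1_a": "yes", "branch_2_a": "no"})
print(resumed)
```

This sketches the bookkeeping only; in LangGraph the pause/resume itself is handled by the runtime.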
-
I realized after more testing that my code example makes it seem that only one interrupt is surfaced at a time, but that isn't actually true. It's just that the original … Here's an updated version:
Now, if we run the code, we get:
Let's walk through these steps:

This is great! It's very close to the behaviour I need. The only thing that I'm missing at the moment is the ability to send a …
-
FYI, at my company we are thinking about agentic architectures with many concurrent agents, each of which needs to be able to ask for and receive human input. In this kind of setup, we need to be able to surface interrupts to the user without blocking the other agents from continuing their work.
Let's illustrate that setup with a simple example, e.g. a graph like this:
So there are 4 branches, each with 2 sequential nodes.
My understanding is that the current LangGraph implementation will surface interrupts sequentially. When 2 parallel nodes both call interrupt, whichever one triggers first will be the one surfaced first.
This behaviour is not ideal (for my use case at least), because we would prefer to surface all interrupts concurrently, rather than sequentially. There are 2 separate improvements that can be made imo:

1. All interrupts within a parallel computation are surfaced together. (After more testing, I realised that this is already supported.)
2. Interrupts are surfaced as soon as they are triggered, rather than at the end of the superstep, so that other branches can keep running.

For step 2, I think this would imply going beyond the superstep approach that underlies the current graph execution, so I'm not sure how feasible this is. It could offer quite a boost in performance in scenarios where, for example, Branch 1 A and Branch 3 B are slow, but Branch 3 A is fast.
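The performance point in step 2 can be illustrated with a toy scheduler (plain Python with simulated durations, not LangGraph internals): under superstep execution, a fast node's successor waits for the whole layer to finish, while fully asynchronous execution would let it proceed immediately.

```python
# Simulated durations (seconds) for the scenario above:
# Branch 1 A and Branch 3 B are slow, Branch 3 A is fast.
durations = {"b1_a": 5, "b1_b": 1, "b3_a": 1, "b3_b": 5}


def superstep_finish(d):
    # Superstep execution: the second layer starts only after ALL of
    # the first layer finishes, so b3_b must wait for slow b1_a.
    layer1_done = max(d["b1_a"], d["b3_a"])
    return layer1_done + max(d["b1_b"], d["b3_b"])


def async_finish(d):
    # Fully asynchronous execution: each branch proceeds independently.
    return max(d["b1_a"] + d["b1_b"], d["b3_a"] + d["b3_b"])


print(superstep_finish(durations))  # 10: b3_b waited 4s for b1_a to finish
print(async_finish(durations))      # 6: b3_b started as soon as b3_a finished
```

The gap grows with the imbalance between branches, which is exactly the scenario described.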
If step 1 and step 2 are combined, we could even surface all 3 interrupts to the user immediately when they are triggered. That would be really nice!
Could you comment on whether any of these steps are on the roadmap?
Here's example code for the example graph:
Edit 06/04/2025: Edited the message for clarity; old phrasing was struck through and new phrasing bolded.