langgraph/concepts/human_in_the_loop/ #2290
Replies: 15 comments 11 replies
-
I have a FastAPI back end and a Streamlit front end, and I'm struggling to get this working well. I can edit and resume the graph fine from the back end and FastAPI using checkpointing. What I can't do well (or cleanly) is trigger a graph update from the Streamlit front end so the events stream nicely into my existing graph-invoke code and event/UI handling. Any insights or code examples for this?
-
I have tried this and I am still getting the same issue:

```python
from langgraph.graph import StateGraph, MessagesState
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import MemorySaver

def human_node(state: MessagesState):
    value = interrupt(f"What should I say in response to {state['messages']}")
    return {"messages": [{"role": "user", "content": value}]}

checkpointer = MemorySaver()
graph_builder = StateGraph(MessagesState)
graph_builder.add_node(human_node)
graph_builder.set_entry_point("human_node")
graph = graph_builder.compile(checkpointer=checkpointer)
```

First call:

```python
thread_config = {"configurable": {"thread_id": "some_id"}}
graph.invoke({"messages": [{"role": "user", "content": "Hello!"}]}, config=thread_config)
```

I got the following output. Now resume the graph:

```python
graph.invoke(Command(resume="Hello, how can I help you?"), config=thread_config)
```

The output is 2.
-
Hey, I am using interrupt inside a subgraph, and when I looked at the writes variable of the interrupt...
-
Hey, my interrupt is going into an infinite loop. Any solutions?
-
It is said that "Resuming from an interrupt is different from Python's input() function, where execution resumes from the exact point where the input() function was called." Is there a way to let execution resume from the exact point where the interrupt() function was called, just like input()?
-
Let's say there is a graph with multiple human-in-the-loop nodes. In step 1, I am streaming the graph with the following:
When 'human input node 1' hits the interrupt function, it stops the graph, and I resume the graph execution with the 'human_input' value. The documentation says the interrupt function resumes from the node where it is applied, but I expected it to finish executing all the nodes in the graph.
-
When multiple nodes raise an interrupt, it looks like only one interrupt can be pending at a time. Is there a way to handle multiple interrupts independently within the same graph?
-
I have a situation where I have a chat interface and a ticketing tool that needs to confirm with the user. I want to know who caused the interrupt so I can prompt the user outside the graph with the appropriate questions. My ticketing tool has this code,
and my user_node code is
and this is how I am trying to loop.
My problem is that I am not able to differentiate who caused the interrupt. Since the tool node is called again when I resume, I see it print the confirmation question twice. I was hoping this would be a simple use case, but it's surprisingly convoluted. Can someone guide me to some code that handles a similar case?
-
@phfifofum @DuncanRiv @mnuryar Can any of you provide a minimal example of how you've implemented FastAPI with HITL using interrupt? Or just point me to any resource. It'd be of great help!
-
You can also have a look at the example Sales Agent here:

```python
await graph.ainvoke(
    Command(resume={
        "response": approval,
        "reviewed_email": email_text,
        "comment": retry_comment,
    }),
    config=thread_config,
)
```

Might also help you @udaylunawat, @mnuryar, @azmathmoosa (see how I pass values from and to the interrupt).
-
I have a problem. I'm using interrupt, defined in a separate node in my graph, to use it with the Agent Inbox UI, and I hosted my graph locally using `langgraph dev`. The interrupt actions only work when the interrupt is on the first message of the thread; when I try to accept, edit, or take any other action on an interrupt that is not on the first message of the thread, it does nothing. I did this exactly the way it is defined in the Agent Inbox repo, and I tried it with the examples that create interrupts and confirm the jokes using the UI. I am also facing this exact problem with the example graph in this repository: https://github.com/langchain-ai/agent-inbox-langgraph-example. I don't know if the problem is in the graph itself or in how the Agent Inbox UI processes and submits the interrupts. I tried looking at the source code of Agent Inbox but I couldn't find the issue.
-
Having a similar issue with interrupt exposed via an API: the graph crashes without an opportunity to get a state update from a second endpoint to resume. The intended behavior is: get to the interrupt node in the graph
-
@phfifofum Use a WebSocket approach: the front end (.js) calls a FastAPI WebSocket, and internally the FastAPI WebSocket calls the LangGraph human-in-the-loop flow.
-
I am using human-in-the-loop and want to implement a breakpoint for user input. Here is my test code:

```python
def node_1(state: States):
    print('Passing through: node_1')
    return {"status": 'node_1'}

def node_2(state: States):
    print('Passing through: node_2')
    return {"status": 'node_2'}

def human_approval(state: States):
    print('Before the breakpoint')
    value = interrupt(
        {
            "feedback": "To verify you are human, please enter the answer to 1+1"
        }
    )
    print('After the breakpoint')
    return {"some_text": value}

def node_3(state: States):
    print('Passing through: node_3')
    return {"status": 'node_3'}

graph_builder = StateGraph(States)
graph_builder.add_node("node_1", node_1)
graph_builder.add_node("node_2", node_2)
graph_builder.add_node("human_approval", human_approval)
graph_builder.add_node("node_3", node_3)
graph_builder.set_entry_point("node_1")
graph_builder.add_edge("node_1", "node_2")
graph_builder.add_edge("node_2", "human_approval")
graph_builder.add_edge("human_approval", "node_3")
graph_builder.add_edge("node_3", END)
graph = graph_builder.compile(checkpointer=memory)

thread_config = {"configurable": {"thread_id": "some_id"}}
if user_id == '1151':
    for chunk in graph.stream({"issue": "How do I register an account?"}, config=thread_config):
        print(chunk)
else:
    for chunk in graph.stream(Command(resume="2"), config=thread_config):
        print(chunk)
```

On the first request I call the endpoint with user_id=1151, and it prints the breakpoint info as follows:
On the second request I pass user_id=1152 (user_id != 1151) to simulate the second call (i.e. the user's answer). In my actual scenario, during the first request, how do I surface the feedback content "To verify you are human, please enter the answer to 1+1" to the user, so they clearly know what to enter in the input box?
-
Please make the implementation clearly show the meaning of "Human-in-the-loop." |
-
langgraph/concepts/human_in_the_loop/
Build language agents as graphs
https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/