# feat: implement parallel tool execution (Gap 2) with backward compatibility #1401
```diff
@@ -15,6 +15,8 @@
 import time
 import json
 import xml.etree.ElementTree as ET
+# Gap 2: Tool call execution imports
+from ..tools.call_executor import ToolCall, create_tool_call_executor
 # Display functions - lazy loaded to avoid importing rich at startup
 # These are only needed when output=verbose
 _display_module = None
```
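The `call_executor` module itself is not shown in this PR. For orientation, here is a minimal sketch of the interface the import implies: the names `ToolCall`, `ToolResult` (with `.result`/`.error`), `ToolCallExecutor`, and `create_tool_call_executor` all appear in the diff and review comments, but every field and signature below is an assumption.

```python
# Hypothetical sketch of praisonaiagents/tools/call_executor.py -- the real
# module is not part of this diff, so the field lists and signatures here
# are assumptions inferred from how the diff and review comments use them.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Optional


@dataclass
class ToolCall:
    """One tool invocation requested by the model."""
    function_name: str
    arguments: Dict[str, Any]
    tool_call_id: str


@dataclass
class ToolResult:
    """Outcome of one ToolCall; failures are captured in .error, not raised."""
    result: Any = None
    error: Optional[BaseException] = None


class ToolCallExecutor:
    """Runs a batch of tool calls sequentially (a parallel variant is sketched later)."""

    def __init__(self, execute_fn: Callable[[str, Dict[str, Any]], Any],
                 parallel: bool = False):
        self.execute_fn = execute_fn
        self.parallel = parallel

    def execute_batch(self, calls: List[ToolCall]) -> List[ToolResult]:
        results = []
        for call in calls:
            try:
                results.append(
                    ToolResult(result=self.execute_fn(call.function_name, call.arguments)))
            except Exception as exc:  # capture so every call yields a result slot
                results.append(ToolResult(error=exc))
        return results


def create_tool_call_executor(execute_fn: Callable[[str, Dict[str, Any]], Any],
                              parallel: bool = False) -> ToolCallExecutor:
    return ToolCallExecutor(execute_fn, parallel)
```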
```diff
@@ -1649,6 +1651,7 @@ def get_response(
     task_description: Optional[str] = None,
     task_id: Optional[str] = None,
     execute_tool_fn: Optional[Callable] = None,
+    parallel_tool_calls: bool = False,  # Gap 2: Enable parallel tool execution
     stream: bool = True,
     stream_callback: Optional[Callable] = None,
     emit_events: bool = False,
```

**Contributor** commented on lines +1654 to 1655:

> This new flag is only consumed in the Responses API / streaming branches. The later tool loop in …
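If that finding holds, the fix is mechanical: forward the flag wherever tools are actually executed. A hedged sketch, assuming the later loop builds a batch the way the diff below does (the loop structure itself is not shown in this excerpt):

```python
# Assumed shape of the later (non-streaming) tool loop; the point is only
# that parallel_tool_calls must reach the executor factory here as well,
# instead of being ignored outside the streaming branches.
executor = create_tool_call_executor(execute_tool_fn, parallel=parallel_tool_calls)
tool_results_batch = executor.execute_batch(tool_calls_batch)
```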
```diff
@@ -1893,26 +1896,47 @@ def _prepare_return_value(text: str) -> Union[str, tuple]:
                 "tool_calls": serializable_tool_calls,
             })

             tool_results = []
+            # Execute tool calls using ToolCallExecutor (Gap 2: parallel or sequential)
+            is_ollama = self._is_ollama_provider()
+            tool_calls_batch = []

+            # Prepare batch of ToolCall objects
             for tool_call in tool_calls:
                 function_name, arguments, tool_call_id = self._extract_tool_call_info(tool_call)
```
**Suggested change:**

```diff
-                function_name, arguments, tool_call_id = self._extract_tool_call_info(tool_call)
+                function_name, arguments, tool_call_id = self._extract_tool_call_info(
+                    tool_call, is_ollama=is_ollama
+                )
```
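To make "parallel or sequential" concrete, here is a thread-pool variant of `execute_batch` under the same assumed interfaces sketched earlier. This is an illustration of the technique, not the repository's actual executor; threads are a reasonable choice because tool calls are typically I/O-bound.

```python
# Parallel variant of execute_batch for the sketched ToolCallExecutor above.
from concurrent.futures import ThreadPoolExecutor
from typing import List


def execute_batch(self, calls: List[ToolCall]) -> List[ToolResult]:
    def run_one(call: ToolCall) -> ToolResult:
        try:
            return ToolResult(result=self.execute_fn(call.function_name, call.arguments))
        except Exception as exc:  # capture so every call yields a result slot
            return ToolResult(error=exc)

    if self.parallel and len(calls) > 1:
        with ThreadPoolExecutor(max_workers=len(calls)) as pool:
            return list(pool.map(run_one, calls))  # map preserves input order
    return [run_one(call) for call in calls]
```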
**Don't re-raise per-tool execution errors in batched mode.**

Lines 1921-1922 raise immediately on the first tool failure, which aborts the remaining tool results and prevents full tool-message emission for the turn. The executor already returns structured error results.

Proposed fix:

```diff
-            for tool_call_obj, tool_result_obj in zip(tool_calls_batch, tool_results_batch):
-                if tool_result_obj.error is not None:
-                    raise tool_result_obj.error
-                tool_result = tool_result_obj.result
+            for tool_call_obj, tool_result_obj in zip(tool_calls_batch, tool_results_batch):
+                tool_result = tool_result_obj.result
```

🧰 Tools: 🪛 Ruff (0.15.10) — [warning] 1920: `zip()` without an explicit `strict=` parameter; add an explicit value for `strict=` (B905).
🤖 Prompt for AI Agents:

```
Verify each finding against the current code and only fix it if needed.
In src/praisonai-agents/praisonaiagents/llm/llm.py around lines 1920-1923: in
the batch-processing loop over tool_calls_batch and tool_results_batch, stop
re-raising per-tool exceptions (do not raise tool_result_obj.error); instead,
when tool_result_obj.error is not None, convert or attach that error to the
emitted/returned structured result (e.g., set tool_result to an error wrapper
or include error info on tool_result_obj) and continue processing the
remaining pairs so all tool results are emitted; update the loop handling
around tool_calls_batch, tool_results_batch, tool_result_obj, and tool_result
to propagate structured errors rather than raising.
```
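Concretely, the loop after the proposed fix could surface failures as tool messages instead of exceptions. A sketch assuming the standard OpenAI-style tool-message shape and the surrounding `messages` history list; the exact formatting in `llm.py` may differ:

```python
import json

# Per-tool errors become tool-message content, so every call in the batch
# still produces a message and the turn completes.
for tool_call_obj, tool_result_obj in zip(tool_calls_batch, tool_results_batch):
    if tool_result_obj.error is not None:
        tool_result = f"Error: {tool_result_obj.error}"  # structured, not re-raised
    else:
        tool_result = tool_result_obj.result
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call_obj.tool_call_id,
        "content": tool_result if isinstance(tool_result, str) else json.dumps(tool_result),
    })
```

Passing `strict=True` to the `zip()` would also resolve the Ruff B905 warning flagged above.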
**Copilot AI** (Apr 16, 2026):

> Within this refactor, the per-call `tool_call_id` extracted earlier is no longer carried alongside each `tool_result_obj` in this loop. Downstream code that appends tool messages needs the matching `tool_call_id` for each result; ensure you use `tool_result_obj.tool_call_id` (or otherwise preserve the mapping) rather than relying on a stale outer-scope variable.
**Copilot AI** (Apr 16, 2026):

> `ToolResult` doesn't define an `arguments` field, so the verbose display will always show N/A here. If the UI/verbose output should show tool inputs, either add `arguments` to `ToolResult` (populated from the original `ToolCall`) or carry a mapping from `tool_call_id` to `arguments` when rendering.
**Copilot AI** (Apr 16, 2026):

> The `tool_call` callback is now passed `{}` for `tool_input` because `ToolResult` has no `arguments`. This drops tool input data from callbacks/telemetry. Preserve and pass the original tool arguments (e.g., store them on `ToolResult` or look them up from the original `ToolCall`).
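These three findings share one root cause: `ToolResult` drops per-call metadata. One way to address them together is to carry the originating call's id and arguments on the result object. The field additions below are an assumption, since `ToolResult`'s real definition is not shown in this PR:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional


@dataclass
class ToolResult:
    """Assumed extension of the sketched ToolResult: keep the originating
    call's id and arguments so message building, verbose display, and
    callbacks never fall back to a stale outer-scope variable."""
    result: Any = None
    error: Optional[BaseException] = None
    tool_call_id: str = ""  # restores the result -> call mapping
    arguments: Dict[str, Any] = field(default_factory=dict)  # restores tool_input
```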
**`_create_tool_message()` is undefined on `LLM`.**

This branch will raise `AttributeError` on the first streamed tool call. The class defines `_format_ollama_tool_result_message(...)`, but there is no `_create_tool_message(...)` implementation in this file.
🤖 Prompt for AI Agents:

```
Verify each finding against the current code and only fix it if needed.
In src/praisonai-agents/praisonaiagents/llm/llm.py around lines 3348-3365:
the code calls a missing method _create_tool_message, which causes an
AttributeError; implement _create_tool_message on the LLM class (or replace
its calls) so tool results are formatted correctly: define
_create_tool_message(function_name, result, tool_call_id, is_ollama) and have
it delegate to the existing _format_ollama_tool_result_message(...) when
is_ollama is True, otherwise produce the non-Ollama formatted message (or
call an existing non-Ollama formatter if one exists), then update the loop
that appends messages to use this helper.
```
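Following that description, the missing helper might look like the sketch below. The delegation target `_format_ollama_tool_result_message` exists per the review, but its exact signature and the non-Ollama message shape are assumptions:

```python
import json


def _create_tool_message(self, function_name, result, tool_call_id, is_ollama):
    """Format one tool result as a chat message (sketch, not the repo's code)."""
    if is_ollama:
        # The existing formatter's signature is assumed here.
        return self._format_ollama_tool_result_message(function_name, result)
    # Assumed non-Ollama shape: a standard OpenAI-style tool message.
    return {
        "role": "tool",
        "tool_call_id": tool_call_id,
        "content": result if isinstance(result, str) else json.dumps(result),
    }
```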