[BUG] Streaming Output Missing Tool Call Events in Provider-Specific LLM Classes #3982

@SumJest

Description

The current streaming implementation successfully emits chunks for LLM text responses but lacks tool call information in the streaming output. While the base LLM class has event emission for tool calls, provider-specific implementations (like OpenAICompletion) don't emit these events.

Current Behavior

  • Text responses from LLMs are properly emitted and available in CrewStreamingOutput
  • Tool call events are missing from the streaming output
  • Only the final tool execution results are visible, not the intermediate tool call chunks

Files Likely Affected

  • openai/completion.py (or similar provider-specific files)
  • llm.py
  • utilities/streaming.py

Steps to Reproduce

  1. Create a Crew with one or more agents and a task, backed by an OpenAI model (other providers are affected as well), with stream=True
  2. Kick off the crew
  3. Iterate over the streamed chunks

Expected behavior

Streaming output should include both:

  • Text response chunks from LLM
  • Tool call events with tool names and arguments as they occur
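The expected stream can be illustrated with a minimal, self-contained sketch. The StreamChunk shape below is hypothetical; its field names (chunk, call_type, tool_call) mirror the LLMStreamChunkEvent fields quoted in the Evidence section, not a confirmed crewAI API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical chunk shape; field names follow the LLMStreamChunkEvent
# snippet quoted below (chunk, tool_call, call_type).
@dataclass
class StreamChunk:
    chunk: str
    call_type: str                    # "llm_call" or "tool_call"
    tool_call: Optional[dict] = None  # {"name": ..., "arguments": ...}

def render(chunks):
    """Split a mixed stream into accumulated text and observed tool calls."""
    text_parts, tool_calls = [], []
    for c in chunks:
        if c.call_type == "tool_call" and c.tool_call is not None:
            tool_calls.append(c.tool_call["name"])
        else:
            text_parts.append(c.chunk)
    return "".join(text_parts), tool_calls

stream = [
    StreamChunk("Looking up ", "llm_call"),
    StreamChunk('{"query": "categories"}', "tool_call",
                {"name": "list_categories",
                 "arguments": '{"query": "categories"}'}),
    StreamChunk("categories...", "llm_call"),
]
text, tools = render(stream)
```

With the bug present, only the two text chunks arrive; the tool_call chunk in the middle is silently dropped.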

Screenshots/Code snippets

agent = create_sp_navigator_agent()
task = create_general_task()

crew = Crew(
    agents=[agent],
    tasks=[task],
    stream=True,
)

streaming = await crew.kickoff_async(inputs={
    "user_task": "Show list of categories"
})

# Async iteration over chunks
async for chunk in streaming:
    print(chunk.model_dump_json(indent=2))

# Access final result
result = streaming.result
print(f"\n\nFinal output: \n{result.model_dump_json(indent=2)}")

Operating System

Windows 11

Python Version

3.12

crewAI Version

1.6.0

crewAI Tools Version

Unknown

Virtual Environment

Venv

Evidence

  1. Base Implementation Exists: The LLM class (inheriting from BaseLLM) in llm.py has proper tool call event emission in _handle_streaming_tool_calls():
crewai_event_bus.emit(
    self,
    event=LLMStreamChunkEvent(
        tool_call=tool_call.to_dict(),
        chunk=tool_call.function.arguments,
        from_task=from_task,
        from_agent=from_agent,
        call_type=LLMCallType.TOOL_CALL,
    ),
)
  2. Provider Classes Lack Implementation: When CrewAI maps configured LLMs to provider-specific classes (e.g., OpenAICompletion for GPT-4o), these classes don't implement the tool call event emission.
  3. Method Flow:
    • _handle_streaming_tool_calls() is called from _handle_streaming_response()
    • This flow works in the base LLM class but not in provider-specific implementations

Possible Solution

Implement tool call event emission in all provider-specific LLM classes by:

  1. Ensuring _handle_streaming_tool_calls() is properly called in provider streaming implementations
  2. Adding the missing event emission logic to classes like OpenAICompletion, AnthropicCompletion, etc.
  3. Maintaining consistency with the existing event structure used in the base LLM class
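The proposed fix can be sketched with stand-in classes rather than the real crewAI internals: the provider's streaming loop defers to the tool-call handler inherited from the base class, so every provider emits the same event shape. FakeEventBus and the chunk dicts below are assumptions for illustration only.

```python
# Minimal stand-in for crewai_event_bus, used only to observe emissions.
class FakeEventBus:
    def __init__(self):
        self.events = []
    def emit(self, source, event):
        self.events.append(event)

bus = FakeEventBus()

class BaseLLM:
    def _handle_streaming_tool_calls(self, tool_call):
        # Shared emission logic (mirrors the snippet in the Evidence section).
        bus.emit(self, {"tool_call": tool_call, "call_type": "tool_call"})

class OpenAICompletion(BaseLLM):
    def stream(self, raw_chunks):
        for chunk in raw_chunks:
            if "tool_call" in chunk:
                # The fix: reuse the inherited handler instead of
                # silently dropping tool-call fragments.
                self._handle_streaming_tool_calls(chunk["tool_call"])
            else:
                yield chunk["text"]

llm = OpenAICompletion()
text = list(llm.stream([
    {"text": "Hello "},
    {"tool_call": {"name": "list_categories", "arguments": "{}"}},
    {"text": "world"},
]))
```

Keeping the emission in the base class means the provider subclasses only need to route their raw deltas through it, which preserves consistency with the existing event structure.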

Additional context

This feature is crucial for:

  • Real-time monitoring of agent tool usage
  • Building responsive UIs that show tool calls as they happen
  • Debugging and observability in agent workflows
  • Maintaining parity between streaming and non-streaming behaviors
