Conversation

@gitlost-murali (Contributor) commented Dec 14, 2025

Description

This pull request adds support for parsing and extracting structured tool calls from model outputs in the generator, making it possible to handle tool-augmented chat completions. It introduces a configurable tool parser, updates the Completion data model to include tool call information, and adds comprehensive unit tests for both tool parsing and non-tool parsing scenarios.

What this does

When users send a request built with tool definitions, like this:

# `as_chat` is the list of chat messages and `tools` the list of tool
# schemas (JSON tool definitions) passed to the chat template.
formatted_request = tokenizer.apply_chat_template(
    as_chat,
    tools=tools,
    tokenize=False,
    add_generation_prompt=True,
)

response = await policy.generate.route(formatted_request)
completion = response[0]

The completion object will now have:

  • completion.tool_calls - List of parsed tool calls from the model output

  • completion.has_tool_calls - Boolean property to check if any tool calls exist

  • completion.content - The text content excluding tool call tags

To enable this, set tool_call_parser="hermes" when creating the policy.
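
As a rough sketch of how the new fields might be consumed downstream (assuming the parsed tool calls follow vLLM's OpenAI-style shape, where each call exposes function.name and a JSON-string function.arguments; that shape is an assumption here, not something confirmed by the diff):

import json

# Continuing the snippet above: `completion` is the first Completion
# returned by policy.generate.route(...) on a policy created with
# tool_call_parser="hermes".
if completion.has_tool_calls:
    for tool_call in completion.tool_calls:
        # Assumed OpenAI-style tool call object.
        name = tool_call.function.name
        args = json.loads(tool_call.function.arguments)
        print(f"model requested tool {name} with args {args}")
else:
    # No tool calls were parsed; `content` holds the plain text reply
    # with any tool-call tags stripped.
    print(completion.content)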

Changes

  • Generator (src/forge/actors/generator.py): Added tool parsing logic that extracts structured tool calls from model outputs
  • Completion model (src/forge/data_models/completion.py): Added tool_calls and content fields, plus has_tool_calls property
  • Tests (tests/unit_tests/test_generator.py): Added tests covering scenarios both with and without tool parsing

Tool Parsing Integration:

  • Added a configurable tool_call_parser option to the Generator class, plus logic to initialize and use a tool parser for extracting tool calls from model output. (src/forge/actors/generator.py)
  • Implemented an _extract_tool_calls method that extracts tool calls and content using the configured parser, handling exceptions gracefully (see the sketch after this list). (src/forge/actors/generator.py)
  • Updated _to_completions to populate the new tool_calls and content fields in Completion when tool parsing is enabled. (src/forge/actors/generator.py)
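
For reviewers skimming without the diff, here is a minimal sketch of what the extraction step could look like, written as a free function rather than the actual Generator method. It assumes vLLM's ToolParser.extract_tool_calls(model_output, request) API and its ExtractedToolCallInformation return type; the module path and the empty ChatCompletionRequest are assumptions:

import logging

from vllm.entrypoints.openai.protocol import ChatCompletionRequest

logger = logging.getLogger(__name__)


def extract_tool_calls(tool_parser, text: str):
    """Parse tool calls out of raw model output.

    Returns (tool_calls, content). On any parser failure, fall back to
    treating the whole output as plain content.
    """
    if tool_parser is None:
        return [], text
    try:
        # vLLM parsers take the originating request for context; an empty
        # ChatCompletionRequest is assumed to be sufficient for extraction.
        request = ChatCompletionRequest(messages=[], model="")
        info = tool_parser.extract_tool_calls(text, request=request)
        if info.tools_called:
            return info.tool_calls, info.content
        return [], text
    except Exception:
        logger.warning("Tool call parsing failed; returning raw output")
        return [], text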

Data Model Enhancements:

  • Extended the Completion class to include tool_calls, content, and a has_tool_calls property for easier downstream usage (rough sketch below). (src/forge/data_models/completion.py)
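
Roughly, the extended data model could look like the following; the field names match the description above, but the defaults, types, and pre-existing fields are assumptions rather than the exact diff:

from dataclasses import dataclass, field
from typing import Any


@dataclass
class Completion:
    # Existing fields elided; only the full generated text is shown here.
    text: str = ""

    # Structured tool calls parsed from the model output (empty when tool
    # parsing is disabled or nothing was parsed).
    tool_calls: list[Any] = field(default_factory=list)

    # Text content with any tool-call tags stripped out.
    content: str | None = None

    @property
    def has_tool_calls(self) -> bool:
        return bool(self.tool_calls)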

Dependency Updates:

  • Imported the additional vLLM OpenAI protocol and tool-parser classes required for tool call extraction. (src/forge/actors/generator.py)
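
The parser lookup presumably goes through vLLM's tool-parser registry, something along these lines (ToolParserManager.get_tool_parser and the tokenizer-taking constructor are vLLM's public API in recent versions, but the exact usage in this PR is not confirmed):

from vllm.entrypoints.openai.tool_parsers import ToolParserManager

# `tokenizer` is the model's Hugging Face tokenizer, loaded elsewhere by
# the generator. Resolve the parser class registered under the configured
# name (e.g. "hermes") and instantiate it with that tokenizer.
tool_parser_cls = ToolParserManager.get_tool_parser("hermes")
tool_parser = tool_parser_cls(tokenizer)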

Test plan

  • Added unit tests covering both scenarios, with and without tool parsing, verifying correct extraction and population of tool call information (a simplified example is sketched below). (tests/unit_tests/test_generator.py)
  • All pre-commit hooks pass
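
For reference, a simplified version of the data-model test might look like this; it is written against the Completion sketch above rather than the real class, and the real tests also exercise the generator end to end, which needs more setup than shown:

def test_has_tool_calls_property():
    # No tool calls parsed: content mirrors the raw text.
    empty = Completion(text="hello", tool_calls=[], content="hello")
    assert not empty.has_tool_calls

    # A hermes-style output with one parsed tool call.
    with_call = Completion(
        text='<tool_call>{"name": "get_weather", "arguments": {}}</tool_call>',
        tool_calls=[{"name": "get_weather", "arguments": {}}],
        content="",
    )
    assert with_call.has_tool_calls
    assert with_call.content == ""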

@meta-cla bot added the CLA Signed label Dec 14, 2025
@daniellepintz (Contributor) commented:

Hi @gitlost-murali! Thanks for the PR! I am wondering, do you have a real-world example you could test this on (and add to the Test Plan)?

@daniellepintz (Contributor) left a comment:

Looks like unit tests are failing as well
