fix tool handling in OpenAIServingChat #18996

Open

JustinTong0323 wants to merge 1 commit into sgl-project:main from JustinTong0323:fix-tool-call-schema

Conversation

JustinTong0323 (Collaborator) commented Feb 19, 2026

Summary

  • Pass full OpenAI-format tool schemas ({"type": "function", "function": {...}}) to Jinja chat templates instead of flat function-only dicts ({"name": ..., "parameters": ...})
  • Fix the fallback path to correctly unwrap to flat format when a template rejects the OpenAI wrapper
  • Add unit tests for both the primary and fallback code paths

Motivation

This PR addresses #18478.

The previous code called item.function.model_dump(), stripping the outer OpenAI wrapper before passing tools to apply_chat_template. This caused templates that rely on the standard {"type": "function", "function": {...}} structure to fall back to incorrect rendering.
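
For illustration, here is a minimal sketch of the shape difference between the two calls, using stand-in Pydantic models rather than the exact sglang protocol classes:

```python
# Illustrative models only; the real Tool/Function definitions live in sglang's
# OpenAI protocol module.
from typing import Any, Optional

from pydantic import BaseModel


class Function(BaseModel):
    name: str
    description: Optional[str] = None
    parameters: Optional[dict[str, Any]] = None


class Tool(BaseModel):
    type: str = "function"
    function: Function


item = Tool(function=Function(name="get_weather", parameters={"type": "object"}))

# Old behavior: flat, function-only dict -- the OpenAI wrapper is lost.
print(item.function.model_dump())
# {'name': 'get_weather', 'description': None, 'parameters': {'type': 'object'}}

# New behavior: full OpenAI tool schema, which most chat templates expect.
print(item.model_dump())
# {'type': 'function', 'function': {'name': 'get_weather', 'description': None,
#  'parameters': {'type': 'object'}}}
```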

A script is provided here so you can compare the output for different models: https://gist.github.com/JustinTong0323/744f886cd2f5863a3a6307a422119f5b

For example, with Kimi-K2.5, the old flat format was not recognized by the Jinja template, resulting in a raw JSON dump:

<|im_system|>tool_declare<|im_middle|>[{"name":"get_weather","description":"Get the current weather...","parameters":{...}}]<|im_end|>

With the fix (item.model_dump()), the template correctly identifies the OpenAI tool schema and renders the expected TypeScript namespace format that the model was trained on:

<|im_system|>tool_declare<|im_middle|># Tools

## functions
namespace functions {
// Get the current weather for a location.
type get_weather = (_: {
  // City name
  location: string,
  unit?: "celsius" | "fahrenheit"
}) => any;
}
<|im_end|>
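
Such a before/after comparison can be reproduced with a rough sketch along these lines (the model ID below is a placeholder; the gist above covers more models and prompts):

```python
# Rough sketch of the comparison idea (not the gist itself). Substitute any
# tokenizer whose chat template is tool-aware.
from transformers import AutoTokenizer

tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string", "description": "City name"}},
            "required": ["location"],
        },
    },
}
messages = [{"role": "user", "content": "What's the weather in Paris?"}]

tokenizer = AutoTokenizer.from_pretrained("your/model-id", trust_remote_code=True)

wrapped = tokenizer.apply_chat_template(
    messages, tools=[tool], tokenize=False, add_generation_prompt=True
)
# Note: some templates raise on the flat shape instead of mis-rendering it;
# wrap this call in try/except if needed.
flat = tokenizer.apply_chat_template(
    messages, tools=[tool["function"]], tokenize=False, add_generation_prompt=True
)
print("--- OpenAI-format tools ---\n", wrapped)
print("--- flat function-only tools ---\n", flat)
```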

Additionally, the old fallback logic ([t if "function" in t else {"function": t} for t in tools]) was designed to wrap flat dicts outward. After switching to item.model_dump(), "function" is already a key in every dict, making the fallback a no-op — it would retry with the exact same input. This PR fixes the fallback to unwrap instead (t["function"] if "function" in t else t), so templates that expect flat function-only dicts (e.g. some Mistral variants) still work on the second attempt.
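
In isolation, and with names simplified (the real change lives inside _apply_jinja_template in serving_chat.py), the retry flow now looks roughly like this:

```python
# Simplified sketch of the fixed retry flow; not the exact sglang code.
def render_with_tools(tokenizer, messages, tools):
    try:
        # First attempt: full OpenAI-format tools,
        # [{"type": "function", "function": {...}}, ...].
        return tokenizer.apply_chat_template(
            messages, tools=tools, tokenize=False, add_generation_prompt=True
        )
    except Exception:
        # Fallback: unwrap to flat function-only dicts for templates that
        # expect {"name": ..., "parameters": ...} (e.g. some Mistral variants).
        flat_tools = [t["function"] if "function" in t else t for t in tools]
        return tokenizer.apply_chat_template(
            messages, tools=flat_tools, tokenize=False, add_generation_prompt=True
        )
```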

Test plan

  • test_jinja_uses_openai_tool_schema_first — verifies apply_chat_template receives full OpenAI-shaped tools on the first call
  • test_jinja_tool_schema_fallback_to_flat_function — simulates a template rejection, verifies the retry passes flat function-only dicts
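
As a standalone illustration of the second test's idea (the real tests mock the serving layer; render_with_tools refers to the simplified sketch above, and the helper names here are hypothetical):

```python
# Hypothetical, self-contained version of the fallback check; the actual tests
# live in test/registered/openai_server/basic/test_serving_chat.py.
from unittest.mock import MagicMock


def test_fallback_unwraps_to_flat_function():
    tools = [{"type": "function", "function": {"name": "get_weather", "parameters": {}}}]

    tokenizer = MagicMock()
    # First call rejects the OpenAI wrapper, second call succeeds.
    tokenizer.apply_chat_template.side_effect = [ValueError("bad tool format"), "PROMPT"]

    result = render_with_tools(tokenizer, messages=[], tools=tools)

    assert result == "PROMPT"
    # The retry must receive flat, function-only dicts.
    _, retry_kwargs = tokenizer.apply_chat_template.call_args
    assert retry_kwargs["tools"] == [{"name": "get_weather", "parameters": {}}]
```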

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

…chema behavior

Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
@gemini-code-assist (Contributor)

Summary of Changes

Hello @JustinTong0323, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the handling of tool schemas within the OpenAIServingChat component to improve compatibility with various chat templates. The primary goal is to ensure that Jinja templates receive tool definitions in the standard OpenAI format by default, while also providing a robust fallback mechanism for templates that expect a simpler, flat function-only structure. This prevents incorrect rendering and enhances the system's flexibility across different model integrations.

Highlights

  • Tool Schema Handling: Changed how tool schemas are passed to Jinja chat templates, now sending the full OpenAI-format ({"type": "function", "function": {...}}) instead of just the flat function definition.
  • Fallback Logic Refinement: Corrected the fallback mechanism to properly unwrap tool schemas to a flat function-only format when a template does not accept the OpenAI wrapper.
  • Enhanced Test Coverage: Introduced new unit tests to validate both the primary path of passing OpenAI-formatted tools and the revised fallback logic.


Changelog
  • python/sglang/srt/entrypoints/openai/serving_chat.py
    • Modified tool schema extraction to use item.model_dump() instead of item.function.model_dump() when preparing tools for chat templates.
    • Adjusted the fallback logic within _apply_jinja_template to unwrap tool dictionaries from {"type": "function", "function": {...}} to the inner function dict if the initial attempt fails.
    • Updated comments to clarify the purpose of the tool format transformation fallback.
  • test/registered/openai_server/basic/test_serving_chat.py
    • Added test_jinja_uses_openai_tool_schema_first to confirm that the full OpenAI tool schema is passed to apply_chat_template initially.
    • Added test_jinja_tool_schema_fallback_to_flat_function to verify the correct behavior of the fallback mechanism when a template rejects the OpenAI wrapper.
Activity
  • The author has outlined a test plan including test_jinja_uses_openai_tool_schema_first and test_jinja_tool_schema_fallback_to_flat_function to validate the changes.
  • A comprehensive checklist for code quality, testing, documentation, and benchmarking has been provided.
  • The standard review process, involving Merge Oncalls, CODEOWNERS, and CI tests, is detailed for this pull request.

@gemini-code-assist bot left a comment

Code Review

This pull request correctly refactors the tool handling in OpenAIServingChat to pass the full OpenAI-formatted tool schemas to Jinja templates, which is a great improvement for compatibility with templates expecting this format. The corresponding fix to the fallback logic, which now correctly unwraps tools to a flat format, is also well-implemented. The new unit tests are thorough and validate both the primary path and the fallback mechanism. I have one suggestion to improve the robustness of the exception handling in the fallback logic by catching more specific exceptions.

@@ -481,11 +481,10 @@ def _apply_jinja_template(
return_dict=False,
)
except Exception as e:

medium

Using a broad except Exception can hide underlying issues unrelated to tool formatting, such as OutOfMemoryError or other runtime problems. This could lead to incorrect fallback attempts. To make the error handling more robust, it's better to catch more specific exceptions that are likely to be raised by apply_chat_template due to template incompatibility, like jinja2.TemplateError, TypeError, or ValueError.

Suggested change
except Exception as e:
except (jinja2.TemplateError, TypeError, ValueError) as e:
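
As a quick, illustrative check that tool-format mismatches do surface as one of these exception types, a bare jinja2 render of a template that expects the OpenAI wrapper fails with a TemplateError subclass when handed a flat dict:

```python
# Illustrative only: a template that expects {"function": {...}} raises an
# UndefinedError (a jinja2.TemplateError subclass) when given a flat dict.
import jinja2

env = jinja2.Environment(undefined=jinja2.StrictUndefined)
template = env.from_string("{{ tools[0].function.name }}")

flat_tool = {"name": "get_weather", "parameters": {}}  # no "function" key

try:
    template.render(tools=[flat_tool])
except (jinja2.TemplateError, TypeError, ValueError) as e:
    print(type(e).__name__, e)  # UndefinedError 'dict object' has no attribute 'function'
```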

@JustinTong0323 (Collaborator, Author)

This PR may also resolve, or be related to, #18086. However, using https://github.com/MoonshotAI/K2-Vendor-Verifier, I could not observe any obvious accuracy difference; further investigation is needed.
