ChatDatabricks: _convert_lc_messages_to_responses_api incorrectly handling system message with multiple blocks #368
Description
`_convert_lc_messages_to_responses_api` in `databricks_langchain` does not properly convert user, system, or developer messages when their content is a list of content blocks. These messages are passed through unchanged with `type: "text"` (Chat Completions API format) instead of being converted to `type: "input_text"` (Responses API format).
This causes a 400 Bad Request error:
Invalid value: 'text'. Supported values are: 'input_text', 'input_image', 'output_text', 'refusal', 'input_file', 'computer_screenshot', and 'summary_text'.
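To make the mismatch concrete, here is a minimal sketch (plain dicts, not the library's actual payload objects) of the same text block in the two wire formats, checked against the allowed set from the error message above:

```python
# Illustrative only: the same text block in both wire formats.
# Chat Completions format -- what the converter currently passes through:
chat_completions_block = {"type": "text", "text": "You are a helpful assistant."}

# Responses API input format -- what the server actually accepts:
responses_block = {"type": "input_text", "text": "You are a helpful assistant."}

# The server validates "type" against the set listed in the 400 error;
# plain "text" is not a valid input item type.
allowed_input_types = {
    "input_text", "input_image", "output_text", "refusal",
    "input_file", "computer_screenshot", "summary_text",
}
print(chat_completions_block["type"] in allowed_input_types)  # False
print(responses_block["type"] in allowed_input_types)         # True
```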
Root Cause
The conversion function handles assistant messages correctly but simply passes through user/system/developer messages without converting content block types:
```python
elif role in ("user", "system", "developer"):
    input_items.append(cc_msg)  # ← No conversion of content blocks!
```

This is problematic because `_convert_message_to_dict()` returns Chat Completions format (`type: "text"`), but the Responses API requires `type: "input_text"`.
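A possible fix is sketched below. This is only an illustration, not the library's actual code: the helper name `convert_content_blocks` is hypothetical, and it assumes `cc_msg` is a dict whose `"content"` is either a plain string or a list of content-block dicts, as in the snippet above.

```python
# Hypothetical helper (a sketch, not databricks_langchain code): rewrite
# Chat Completions block types to their Responses API input equivalents.
_INPUT_TYPE_MAP = {"text": "input_text", "image_url": "input_image"}

def convert_content_blocks(cc_msg: dict) -> dict:
    content = cc_msg.get("content")
    if isinstance(content, list):
        converted = []
        for block in content:
            if isinstance(block, dict) and block.get("type") in _INPUT_TYPE_MAP:
                # (Real image blocks would also need their payload fields
                # reshaped; only the "type" rename is shown here.)
                block = {**block, "type": _INPUT_TYPE_MAP[block["type"]]}
            converted.append(block)
        cc_msg = {**cc_msg, "content": converted}
    return cc_msg

# Usage inside the branch quoted above:
# elif role in ("user", "system", "developer"):
#     input_items.append(convert_content_blocks(cc_msg))
```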
Why This Matters
1. LangChain middleware commonly builds system prompts as multiple content blocks
For example, deepagents middleware constructs system messages by combining multiple text blocks:
```python
# From deepagents/middleware/_utils.py
system_message = SystemMessage(content=[
    {"type": "text", "text": "Part 1 of system prompt"},
    {"type": "text", "text": "Part 2 of system prompt"},
    # ... additional blocks
])
```

This pattern is common across LangChain tooling for composing prompts modularly.
2. Some models only support the Responses API
I'm using databricks-gpt-5-1-codex-mini via ChatDatabricks, which exclusively uses the Responses API — the Chat Completions endpoint is not available. This means there is no workaround; the model is completely unusable with any middleware that produces multi-block system messages.
Reproduction
```python
from databricks_langchain import ChatDatabricks
from langchain_core.messages import SystemMessage, HumanMessage

llm = ChatDatabricks(endpoint="databricks-gpt-5-1-codex-mini", use_responses_api=True)

# Multi-block system message (common pattern from middleware)
messages = [
    SystemMessage(content=[
        {"type": "text", "text": "You are a helpful assistant."},
        {"type": "text", "text": "Additional instructions here."},
    ]),
    HumanMessage(content="Hello"),
]

llm.invoke(messages)  # Raises 400 Bad Request
```

Expected Behavior
The user, system, and developer message branches should convert content blocks the same way assistant messages do:
- `type: "text"` → `type: "input_text"`
- `type: "image_url"` → `type: "input_image"`
Environment
- langchain-openai: 1.1.7
- databricks-langchain: 0.16.1
- deepagents: 0.4.4
- Model: databricks-gpt-5-1-codex-mini (Responses API only)