
Using Deepseek R1 with ChatBedrockConverse - when calling the Converse operation: This model doesn't support tool use #447

Open
@Subham07

Description


I am using DeepSeek R1 (us.deepseek.r1-v1:0) from AWS Bedrock, and in my code I am using ChatBedrockConverse.

Following is a simple tool-calling agent code snippet.

import boto3
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_aws import ChatBedrockConverse
from langchain_core.tools import tool

# Bedrock runtime client used by ChatBedrockConverse
bedrock_client = boto3.client("bedrock-runtime")

think_params = {
    "thinking": {
        "type": "enabled",
        "budget_tokens": 2000
    }
}

access_config = {"MODEL_ID": "us.deepseek.r1-v1:0"}
# access_config = {"MODEL_ID": "us.anthropic.claude-3-7-sonnet-20250219-v1:0"}
model = ChatBedrockConverse(
    model_id=access_config["MODEL_ID"],
    client=bedrock_client,
    temperature=1,
    max_tokens=5000,
    verbose=True,
    additional_model_request_fields=think_params,
)

@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


tools = [magic_function]


query = "what is the value of magic_function(3)?"

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("human", "{input}"),
        # Placeholders fill up a **list** of messages
        ("placeholder", "{agent_scratchpad}"),
    ]
)


agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"input": query})

Following is the stack trace I get when the invoke function is called.

ValidationException                       Traceback (most recent call last)
Cell In[30], line 1
----> 1 agent_executor.invoke({"input": query})

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain\chains\base.py:170, in Chain.invoke(self, input, config, **kwargs)
    168 except BaseException as e:
    169     run_manager.on_chain_error(e)
--> 170     raise e
    171 run_manager.on_chain_end(outputs)
    173 if include_run_info:

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain\chains\base.py:160, in Chain.invoke(self, input, config, **kwargs)
    157 try:
    158     self._validate_inputs(inputs)
    159     outputs = (
--> 160         self._call(inputs, run_manager=run_manager)
    161         if new_arg_supported
    162         else self._call(inputs)
    163     )
    165     final_outputs: Dict[str, Any] = self.prep_outputs(
    166         inputs, outputs, return_only_outputs
    167     )
    168 except BaseException as e:

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain\agents\agent.py:1624, in AgentExecutor._call(self, inputs, run_manager)
   1622 # We now enter the agent loop (until it returns something).
   1623 while self._should_continue(iterations, time_elapsed):
-> 1624     next_step_output = self._take_next_step(
   1625         name_to_tool_map,
   1626         color_mapping,
   1627         inputs,
   1628         intermediate_steps,
   1629         run_manager=run_manager,
   1630     )
   1631     if isinstance(next_step_output, AgentFinish):
   1632         return self._return(
   1633             next_step_output, intermediate_steps, run_manager=run_manager
   1634         )

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain\agents\agent.py:1330, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1321 def _take_next_step(
   1322     self,
   1323     name_to_tool_map: Dict[str, BaseTool],
   (...)   1327     run_manager: Optional[CallbackManagerForChainRun] = None,
   1328 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1329     return self._consume_next_step(
-> 1330         [
   1331             a
   1332             for a in self._iter_next_step(
   1333                 name_to_tool_map,
   1334                 color_mapping,
   1335                 inputs,
   1336                 intermediate_steps,
   1337                 run_manager,
   1338             )
   1339         ]
   1340     )

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain\agents\agent.py:1330, in <listcomp>(.0)
   1321 def _take_next_step(
   1322     self,
   1323     name_to_tool_map: Dict[str, BaseTool],
   (...)   1327     run_manager: Optional[CallbackManagerForChainRun] = None,
   1328 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1329     return self._consume_next_step(
-> 1330         [
   1331             a
   1332             for a in self._iter_next_step(
   1333                 name_to_tool_map,
   1334                 color_mapping,
   1335                 inputs,
   1336                 intermediate_steps,
   1337                 run_manager,
   1338             )
   1339         ]
   1340     )

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain\agents\agent.py:1358, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1355     intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
   1357     # Call the LLM to see what to do.
-> 1358     output = self._action_agent.plan(
   1359         intermediate_steps,
   1360         callbacks=run_manager.get_child() if run_manager else None,
   1361         **inputs,
   1362     )
   1363 except OutputParserException as e:
   1364     if isinstance(self.handle_parsing_errors, bool):

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain\agents\agent.py:581, in RunnableMultiActionAgent.plan(self, intermediate_steps, callbacks, **kwargs)
    573 final_output: Any = None
    574 if self.stream_runnable:
    575     # Use streaming to make sure that the underlying LLM is invoked in a
    576     # streaming
   (...)    579     # Because the response from the plan is not a generator, we need to
    580     # accumulate the output into final output and return that.
--> 581     for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
    582         if final_output is None:
    583             final_output = chunk

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain_core\runnables\base.py:3409, in RunnableSequence.stream(self, input, config, **kwargs)
   3403 def stream(
   3404     self,
   3405     input: Input,
   3406     config: Optional[RunnableConfig] = None,
   3407     **kwargs: Optional[Any],
   3408 ) -> Iterator[Output]:
-> 3409     yield from self.transform(iter([input]), config, **kwargs)

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain_core\runnables\base.py:3396, in RunnableSequence.transform(self, input, config, **kwargs)
   3390 def transform(
   3391     self,
   3392     input: Iterator[Input],
   3393     config: Optional[RunnableConfig] = None,
   3394     **kwargs: Optional[Any],
   3395 ) -> Iterator[Output]:
-> 3396     yield from self._transform_stream_with_config(
   3397         input,
   3398         self._transform,
   3399         patch_config(config, run_name=(config or {}).get("run_name") or self.name),
   3400         **kwargs,
   3401     )

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain_core\runnables\base.py:2199, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
   2197 try:
   2198     while True:
-> 2199         chunk: Output = context.run(next, iterator)  # type: ignore
   2200         yield chunk
   2201         if final_output_supported:

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain_core\runnables\base.py:3359, in RunnableSequence._transform(self, input, run_manager, config, **kwargs)
   3356     else:
   3357         final_pipeline = step.transform(final_pipeline, config)
-> 3359 yield from final_pipeline

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain_core\runnables\base.py:1413, in Runnable.transform(self, input, config, **kwargs)
   1410 final: Input
   1411 got_first_val = False
-> 1413 for ichunk in input:
   1414     # The default implementation of transform is to buffer input and
   1415     # then call stream.
   1416     # It'll attempt to gather all input into a single chunk using
   1417     # the `+` operator.
   1418     # If the input is not addable, then we'll assume that we can
   1419     # only operate on the last chunk,
   1420     # and we'll iterate until we get to the last chunk.
   1421     if not got_first_val:
   1422         final = ichunk

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain_core\runnables\base.py:5565, in RunnableBindingBase.transform(self, input, config, **kwargs)
   5559 def transform(
   5560     self,
   5561     input: Iterator[Input],
   5562     config: Optional[RunnableConfig] = None,
   5563     **kwargs: Any,
   5564 ) -> Iterator[Output]:
-> 5565     yield from self.bound.transform(
   5566         input,
   5567         self._merge_configs(config),
   5568         **{**self.kwargs, **kwargs},
   5569     )

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain_core\runnables\base.py:1431, in Runnable.transform(self, input, config, **kwargs)
   1428             final = ichunk
   1430 if got_first_val:
-> 1431     yield from self.stream(final, config, **kwargs)

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain_core\language_models\chat_models.py:386, in BaseChatModel.stream(self, input, config, stop, **kwargs)
    375 def stream(
    376     self,
    377     input: LanguageModelInput,
   (...)    381     **kwargs: Any,
    382 ) -> Iterator[BaseMessageChunk]:
    383     if not self._should_stream(async_api=False, **{**kwargs, "stream": True}):
    384         # model doesn't implement streaming, so use default implementation
    385         yield cast(
--> 386             BaseMessageChunk, self.invoke(input, config=config, stop=stop, **kwargs)
    387         )
    388     else:
    389         config = ensure_config(config)

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain_core\language_models\chat_models.py:307, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    296 def invoke(
    297     self,
    298     input: LanguageModelInput,
   (...)    302     **kwargs: Any,
    303 ) -> BaseMessage:
    304     config = ensure_config(config)
    305     return cast(
    306         ChatGeneration,
--> 307         self.generate_prompt(
    308             [self._convert_input(input)],
    309             stop=stop,
    310             callbacks=config.get("callbacks"),
    311             tags=config.get("tags"),
    312             metadata=config.get("metadata"),
    313             run_name=config.get("run_name"),
    314             run_id=config.pop("run_id", None),
    315             **kwargs,
    316         ).generations[0][0],
    317     ).message

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain_core\language_models\chat_models.py:843, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    835 def generate_prompt(
    836     self,
    837     prompts: list[PromptValue],
   (...)    840     **kwargs: Any,
    841 ) -> LLMResult:
    842     prompt_messages = [p.to_messages() for p in prompts]
--> 843     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain_core\language_models\chat_models.py:683, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    680 for i, m in enumerate(messages):
    681     try:
    682         results.append(
--> 683             self._generate_with_cache(
    684                 m,
    685                 stop=stop,
    686                 run_manager=run_managers[i] if run_managers else None,
    687                 **kwargs,
    688             )
    689         )
    690     except BaseException as e:
    691         if run_managers:

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain_core\language_models\chat_models.py:908, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    906 else:
    907     if inspect.signature(self._generate).parameters.get("run_manager"):
--> 908         result = self._generate(
    909             messages, stop=stop, run_manager=run_manager, **kwargs
    910         )
    911     else:
    912         result = self._generate(messages, stop=stop, **kwargs)

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\langchain_aws\chat_models\bedrock_converse.py:599, in ChatBedrockConverse._generate(self, messages, stop, run_manager, **kwargs)
    597 logger.debug(f"Input params: {params}")
    598 logger.info("Using Bedrock Converse API to generate response")
--> 599 response = self.client.converse(
    600     messages=bedrock_messages, system=system, **params
    601 )
    602 logger.debug(f"Response from Bedrock: {response}")
    603 response_message = _parse_response(response)

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\botocore\client.py:570, in ClientCreator._create_api_method.<locals>._api_call(self, *args, **kwargs)
    566     raise TypeError(
    567         f"{py_operation_name}() only accepts keyword arguments."
    568     )
    569 # The "self" in this scope is referring to the BaseClient.
--> 570 return self._make_api_call(operation_name, kwargs)

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\botocore\context.py:124, in with_current_context.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
    122 if hook:
    123     hook()
--> 124 return func(*args, **kwargs)

File ~\AppData\Local\miniconda3\envs\test_package_updates\Lib\site-packages\botocore\client.py:1031, in BaseClient._make_api_call(self, operation_name, api_params)
   1027     error_code = error_info.get("QueryErrorCode") or error_info.get(
   1028         "Code"
   1029     )
   1030     error_class = self.exceptions.from_code(error_code)
-> 1031     raise error_class(parsed_response, operation_name)
   1032 else:
   1033     return parsed_response

ValidationException: An error occurred (ValidationException) when calling the Converse operation: This model doesn't support tool use.

The same code executes successfully when I use us.anthropic.claude-3-7-sonnet-20250219-v1:0.
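Since the ValidationException comes back from the Bedrock Converse API itself (not from langchain-aws), one option until DeepSeek R1 gains tool-use support is to fail fast before wiring up the agent. Below is a minimal sketch assuming a hand-maintained prefix allowlist; the `TOOL_USE_MODEL_PREFIXES` entries and the `supports_tool_use` helper are illustrative, not an authoritative feature matrix, so check the "Supported models and model features" table in the Bedrock documentation:

```python
# Sketch: guard against Bedrock models that reject toolConfig on Converse.
# The prefixes below are assumptions for illustration, not an official list.
TOOL_USE_MODEL_PREFIXES = (
    "anthropic.claude-3",
    "us.anthropic.claude-3",
)

def supports_tool_use(model_id: str) -> bool:
    """Best-effort check before binding tools to a Bedrock model."""
    return model_id.startswith(TOOL_USE_MODEL_PREFIXES)

print(supports_tool_use("us.deepseek.r1-v1:0"))                           # False
print(supports_tool_use("us.anthropic.claude-3-7-sonnet-20250219-v1:0"))  # True
```

Raising a clear error from this check is friendlier than letting the ValidationException surface from deep inside the agent loop.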

These are the versions I am using:

langchain==0.3.20
langchain-core==0.3.49
langchain-openai==0.3.11
langchain-aws==0.2.18
