TrustCall create_extractor and ChatLlamaCpp not working #64

@ashleyabraham

Description

I am using ChatLlamaCpp as my LLM and passing a Pydantic class Tasks as the tool to create_extractor, but I get a ValueError:

# Using Trustcall to create a tool that can extract the JSON response from the LLM's output
llm_tool = create_extractor(
    llm,
    tools=[Tasks],
    tool_choice="Tasks",
)
#snippet from langchain_community/chat_models/llamacpp.py: lines 366 - 375
    366     chosen = [
    367         f for f in formatted_tools if f["function"]["name"] == tool_choice
    368     ]
    369     if not chosen:
    370         raise ValueError(
    371             f"Tool choice {tool_choice=} was specified, but the only "
    372             f"provided tools were {tool_names}."
    373         )
    374 elif isinstance(tool_choice, bool):
    375     if len(formatted_tools) > 1:

ValueError: Tool choice tool_choice='any' was specified, but the only provided tools were ['PatchFunctionErrors', 'PatchFunctionName'].

It seems ChatLlamaCpp only accepts a single, exact tool name for tool_choice, while TrustCall passes tool_choice="any" along with multiple tools, so the call errors out. Are there any suggestions or workarounds for this?
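For reference, the failing check can be reproduced in isolation. A minimal sketch, mirroring the llamacpp.py snippet above (the formatted_tools shape and helper name here are assumptions, not the library's actual function):

```python
# Minimal stand-alone reproduction of the tool_choice check quoted above
# from langchain_community/chat_models/llamacpp.py (helper name is ours).
def validate_tool_choice(formatted_tools, tool_choice):
    """tool_choice must match exactly one formatted tool's name."""
    tool_names = [f["function"]["name"] for f in formatted_tools]
    chosen = [
        f for f in formatted_tools if f["function"]["name"] == tool_choice
    ]
    if not chosen:
        raise ValueError(
            f"Tool choice {tool_choice=} was specified, but the only "
            f"provided tools were {tool_names}."
        )
    return chosen

# The two patch tools TrustCall binds internally:
tools = [
    {"function": {"name": "PatchFunctionErrors"}},
    {"function": {"name": "PatchFunctionName"}},
]

# TrustCall passes "any", which is not a tool name, so the check raises:
try:
    validate_tool_choice(tools, "any")
except ValueError as e:
    print(e)

# An exact tool name passes:
validate_tool_choice(tools, "PatchFunctionName")
```

This shows the mismatch is purely in the validation: "any" is an OpenAI-style sentinel meaning "call some tool", but ChatLlamaCpp treats it as a literal tool name.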

If I bind the tools directly to the LLM with llm_tool = llm.bind_tools(tools=[Tasks], tool_choice="Tasks"), it works, but that bypasses TrustCall, and I am hoping to use TrustCall's create_extractor. Any help is appreciated, thanks!
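One possible workaround (untested against the real classes; all names below are assumptions for illustration) is to rewrite tool_choice="any" before ChatLlamaCpp's validation sees it. StubChat stands in for ChatLlamaCpp so the sketch stays self-contained; with the real class you would subclass ChatLlamaCpp and override bind_tools the same way:

```python
# Hypothetical workaround sketch: translate TrustCall's tool_choice="any"
# into something ChatLlamaCpp's exact-name validation accepts.
def resolve_tool_choice(tools, tool_choice):
    """Map the "any" sentinel onto something the exact-name check accepts.

    With a single tool, "any" is rewritten to that tool's name; with
    several tools the constraint is dropped (tool_choice=None), so the
    model is no longer forced to call a specific tool.
    """
    if tool_choice == "any":
        names = [t["function"]["name"] for t in tools]
        return names[0] if len(names) == 1 else None
    return tool_choice


class StubChat:
    """Stand-in for ChatLlamaCpp: validates tool_choice like the snippet."""

    def bind_tools(self, tools, tool_choice=None):
        names = [t["function"]["name"] for t in tools]
        if tool_choice is not None and tool_choice not in names:
            raise ValueError(
                f"Tool choice {tool_choice=} was specified, but the only "
                f"provided tools were {names}."
            )
        return {"tools": tools, "tool_choice": tool_choice}


class AnyFriendlyChat(StubChat):
    """Subclass that rewrites "any" before delegating to the parent."""

    def bind_tools(self, tools, tool_choice=None):
        return super().bind_tools(tools, resolve_tool_choice(tools, tool_choice))


chat = AnyFriendlyChat()
# A single tool: "any" becomes "Tasks" and the bind succeeds.
single = chat.bind_tools([{"function": {"name": "Tasks"}}], tool_choice="any")
# TrustCall's patch tools: "any" is dropped rather than rejected.
multi = chat.bind_tools(
    [{"function": {"name": "PatchFunctionErrors"}},
     {"function": {"name": "PatchFunctionName"}}],
    tool_choice="any",
)
```

Note the tradeoff: dropping the constraint for multiple tools means the model is merely allowed, not forced, to call one of them, so extraction may occasionally return no tool call.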
