
Open Interpreter Fails with Ollama - litellm.BadRequestError: Invalid Message #1601

Closed
@FadyAlfred

Description


Environment:

  1. OS: macOS 15.3.2
  2. Python version: 3.10
  3. Open Interpreter version: 0.4.3
  4. Ollama version: 0.6.0

Currently, when selecting one of the following local models (mistral, llama3.2) in Open Interpreter with Ollama, the application crashes with the following stack trace:

Traceback (most recent call last):
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/bin/interpreter", line 8, in <module>
    sys.exit(main())
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 612, in main
    start_terminal_interface(interpreter)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 471, in start_terminal_interface
    interpreter = profile(
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 64, in profile
    return apply_profile(interpreter, profile, profile_path)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 148, in apply_profile
    exec(profile["start_script"], scope, scope)
  File "<string>", line 1, in <module>
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/core.py", line 145, in local_setup
    self = local_setup(self)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/local_setup.py", line 314, in local_setup
    interpreter.computer.ai.chat("ping")
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 86, in run
    self.load()
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 397, in load
    self.interpreter.computer.ai.chat("ping")
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 322, in run
    yield from run_tool_calling_llm(self, params)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/run_tool_calling_llm.py", line 178, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 466, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 443, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/utils.py", line 1235, in wrapper
    raise e
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/utils.py", line 1113, in wrapper
    result = original_function(*args, **kwargs)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/main.py", line 3101, in completion
    raise exception_type(
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/main.py", line 2823, in completion
    response = base_llm_http_handler.completion(
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 247, in completion
    data = provider_config.transform_request(
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/llms/ollama/completion/transformation.py", line 315, in transform_request
    modified_prompt = ollama_pt(model=model, messages=messages)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/litellm_core_utils/prompt_templates/factory.py", line 265, in ollama_pt
    raise litellm.BadRequestError(
litellm.exceptions.BadRequestError: litellm.BadRequestError: Invalid Message passed in {'role': 'system', 'content': 'You are a helpful AI assistant. Produce JSON OUTPUT ONLY! Adhere to this format {"name": "function_name", "arguments":{"argument_name": "argument_value"}} The following functions are available to you:\n{\'type\': \'function\', \'function\': {\'name\': \'execute\', \'description\': "Executes code on the user\'s machine **in the users local environment** and returns the output", \'parameters\': {\'type\': \'object\', \'properties\': {\'language\': {\'type\': \'string\', \'description\': \'The programming language (required parameter to the `execute` function)\', \'enum\': [\'ruby\', \'python\', \'shell\', \'javascript\', \'html\', \'applescript\', \'r\', \'powershell\', \'react\', \'java\']}, \'code\': {\'type\': \'string\', \'description\': \'The code to execute (required)\'}}, \'required\': [\'language\', \'code\']}}}\n'}
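
To isolate this from Open Interpreter, the failure can presumably be reproduced with litellm alone, since the crash happens inside litellm's Ollama prompt template (ollama_pt) while handling the system-role message. Below is a minimal, untested sketch, assuming llama3.2 has been pulled and Ollama is listening on its default port:

    import litellm

    # Same shape of messages Open Interpreter sends during its "ping" check:
    # a system prompt plus a short user message.
    messages = [
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "ping"},
    ]

    try:
        response = litellm.completion(
            model="ollama/llama3.2",            # "ollama/" prefix routes through ollama_pt()
            api_base="http://localhost:11434",  # default local Ollama endpoint
            messages=messages,
        )
        print(response.choices[0].message.content)
    except litellm.exceptions.BadRequestError as err:
        # On the litellm version installed here, this raises
        # "Invalid Message passed in {'role': 'system', ...}"
        print(f"Reproduced: {err}")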

When selecting one of the following local models (deepseek-r1, tinyllama) in Open Interpreter with Ollama, the application crashes with the following stack trace:

Traceback (most recent call last):
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/bin/interpreter", line 8, in <module>
    sys.exit(main())
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 612, in main
    start_terminal_interface(interpreter)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 471, in start_terminal_interface
    interpreter = profile(
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 64, in profile
    return apply_profile(interpreter, profile, profile_path)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 148, in apply_profile
    exec(profile["start_script"], scope, scope)
  File "<string>", line 1, in <module>
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/core.py", line 145, in local_setup
    self = local_setup(self)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/terminal_interface/local_setup.py", line 314, in local_setup
    interpreter.computer.ai.chat("ping")
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 86, in run
    self.load()
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 397, in load
    self.interpreter.computer.ai.chat("ping")
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 324, in run
    yield from run_text_llm(self, params)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/run_text_llm.py", line 20, in run_text_llm
    for chunk in llm.completions(**params):
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 466, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 443, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/utils.py", line 1235, in wrapper
    raise e
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/utils.py", line 1113, in wrapper
    result = original_function(*args, **kwargs)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/main.py", line 3101, in completion
    raise exception_type(
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/main.py", line 2823, in completion
    response = base_llm_http_handler.completion(
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 247, in completion
    data = provider_config.transform_request(
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/llms/ollama/completion/transformation.py", line 315, in transform_request
    modified_prompt = ollama_pt(model=model, messages=messages)
  File "/Users/fadyalfred/PycharmProjects/PythonProject/.venv3.10/lib/python3.10/site-packages/litellm/litellm_core_utils/prompt_templates/factory.py", line 265, in ollama_pt
    raise litellm.BadRequestError(
litellm.exceptions.BadRequestError: litellm.BadRequestError: Invalid Message passed in {'role': 'system', 'content': "You are a helpful AI assistant.\nTo execute code on the user's machine, write a markdown code block. Specify the language after the ```. You will receive the output. Use any programming language."}
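
Both stack traces end in the same place: litellm's ollama_pt prompt template (used by the "ollama/" provider prefix, i.e. Ollama's generate endpoint) rejects the system-role message. As a possible workaround, offered purely as an untested sketch, pointing Open Interpreter at litellm's "ollama_chat/" prefix (Ollama's chat endpoint, which accepts system messages) via the Python API might sidestep this code path; the model name and endpoint below are examples only, not a confirmed fix:

    from interpreter import interpreter

    # Untested workaround sketch: "ollama_chat/" makes litellm call Ollama's
    # /api/chat endpoint instead of the /api/generate prompt template that
    # raises BadRequestError above.
    interpreter.offline = True
    interpreter.llm.model = "ollama_chat/llama3.2"
    interpreter.llm.api_base = "http://localhost:11434"

    interpreter.chat("ping")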

Reproduce

  • Install Ollama and Open Interpreter
  • Pull the model and run it with Ollama
  • Run interpreter --local
  • Choose Ollama and select the running model (a rough Python-API equivalent of these steps is sketched below)
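
For reference, a rough Python-API equivalent of the CLI steps above (assuming the model has already been pulled and the Ollama server is running); it should reach the same "ping" check in llm.load() and raise the same BadRequestError. The model name and endpoint are illustrative, not taken verbatim from the report:

    from interpreter import interpreter

    # Roughly what `interpreter --local` ends up doing once Ollama and a model
    # are selected; the values below are examples.
    interpreter.offline = True
    interpreter.llm.model = "ollama/llama3.2"
    interpreter.llm.api_base = "http://localhost:11434"

    # The first chat triggers the "ping" check in llm.load(), where the crash occurs.
    interpreter.chat("ping")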

Expected behavior

Expected the Open Interpreter CLI to start normally with the selected local model; instead it crashes during the local-setup "ping" check.

Screenshots

No response

Open Interpreter version

0.4.3

Python version

3.10

Operating System name and version

macOS 15.3.2

Additional context

No response
