
NameError: name 'display_markdown_message' is not defined #1653

@NicolasEliasArias

Description


Describe the bug

After running interpreter --model gpt-5 --api_key <my_api_key>, I get the following error:

C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\core\utils\system_debug_info.py:4: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
  import pkg_resources

▌ Model set to gpt-5

Open Interpreter will require approval before running code.

Use interpreter -y to bypass this.

Press CTRL-C to exit.

> hola
Traceback (most recent call last):
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\llms\openai\openai.py", line 745, in completion
    raise e
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\llms\openai\openai.py", line 628, in completion
    return self.streaming(
           ^^^^^^^^^^^^^^^
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\llms\openai\openai.py", line 918, in streaming
    headers, response = self.make_sync_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\litellm_core_utils\logging_utils.py", line 237, in sync_wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\llms\openai\openai.py", line 489, in make_sync_openai_chat_completion_request
    raise e
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\llms\openai\openai.py", line 471, in make_sync_openai_chat_completion_request
    raw_response = openai_client.chat.completions.with_raw_response.create(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\openai\_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\openai\_utils\_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\openai\resources\chat\completions\completions.py", line 1156, in create
    return self._post(
           ^^^^^^^^^^^
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\openai\_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\openai\_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\main.py", line 2137, in completion
    raise e
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\main.py", line 2109, in completion
    response = openai_chat_completions.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\llms\openai\openai.py", line 756, in completion
    raise OpenAIError(
litellm.llms.openai.common_utils.OpenAIError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\core\respond.py", line 87, in respond
    for chunk in interpreter.llm.run(messages_for_llm):
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\core\llm\llm.py", line 322, in run
    yield from run_tool_calling_llm(self, params)
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\core\llm\run_tool_calling_llm.py", line 178, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\core\llm\llm.py", line 466, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
    ^^^^^^^^^^^^^^^^^
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\core\llm\llm.py", line 443, in fixed_litellm_completions
    yield from litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\utils.py", line 1371, in wrapper
    raise e
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\utils.py", line 1244, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\main.py", line 3733, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\litellm_core_utils\exception_mapping_utils.py", line 2273, in exception_type
    raise e
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\litellm\litellm_core_utils\exception_mapping_utils.py", line 330, in exception_type
    raise RateLimitError(
litellm.exceptions.RateLimitError: litellm.RateLimitError: RateLimitError: OpenAIException - You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\nicol\miniconda3\envs\interpreter\Scripts\interpreter.exe\__main__.py", line 6, in <module>
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 612, in main
    start_terminal_interface(interpreter)
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 578, in start_terminal_interface
    interpreter.chat()
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\core\core.py", line 191, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\core\core.py", line 223, in _streaming_chat
    yield from terminal_interface(self, message)
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 162, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\core\core.py", line 259, in _streaming_chat
    yield from self._respond_and_store()
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\core\core.py", line 318, in _respond_and_store
    for chunk in respond(self):
  File "C:\Users\nicol\miniconda3\envs\interpreter\Lib\site-packages\interpreter\core\respond.py", line 121, in respond
    display_markdown_message(
    ^^^^^^^^^^^^^^^^^^^^^^^^
NameError: name 'display_markdown_message' is not defined
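Reading the chained tracebacks, the underlying failure is a 429 insufficient_quota error from OpenAI; the NameError only surfaces because the error handler in respond.py (line 121) calls display_markdown_message without that name being defined in its scope, which masks the real rate-limit message. A minimal sketch of that failure pattern (the function body and message below are illustrative stand-ins, not the actual Open Interpreter source):

```python
def respond():
    """Illustrative stand-in for interpreter/core/respond.py."""
    try:
        # The real code raises litellm.exceptions.RateLimitError here.
        raise RuntimeError("429 insufficient_quota")
    except RuntimeError:
        # Bug pattern: the helper was never imported into this module's
        # namespace, so the handler itself raises NameError and masks
        # the original rate-limit error ("During handling of the above
        # exception, another exception occurred").
        display_markdown_message("You have hit a rate limit.")


try:
    respond()
except NameError as e:
    print(e)  # name 'display_markdown_message' is not defined
```

If this reading is right, the user-facing fix is adding billing/quota to the API key (or using a model the key can access), while the library-side fix would be importing or defining display_markdown_message in respond.py so the friendly rate-limit message is shown instead of a crash.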

Reproduce

1. Create a conda env
2. Install Open Interpreter
3. Run interpreter --model gpt-5 --api_key <my_api_key>
4. Ask the interpreter something

Expected behavior

The interpreter should answer the prompt (or report the rate-limit error cleanly) instead of crashing with a NameError.

Screenshots

No response

Open Interpreter version

0.4.3

Python version

3.11

Operating System name and version

Windows 11

Additional context

No response
