Describe the bug
- In interpreter command-line mode, requests succeed (both OpenAI and non-OpenAI models work).
- In Python mode, requests succeed with the OpenAI model, but litellm.exceptions.BadRequestError is always raised with other models.
I tried both modes with a very simple prompt: What's 34/24?
Reproduce
- For both command-line and Python mode I followed the setup from "Does it support Qwen series hosted model?" #1572, but with stream=True (a sketch of the Python call is shown below).
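For reference, this is a minimal sketch of the failing Python-mode call. The endpoint URL and API key are placeholders for my hosted DeepSeek-compatible endpoint, and the interpreter.llm settings follow the approach from #1572:

```python
from interpreter import interpreter

# Non-OpenAI model that triggers the BadRequestError; the api_base and
# api_key values below are placeholders, not the real endpoint/credentials.
interpreter.llm.model = "deepseek-chat"
interpreter.llm.api_base = "https://<my-endpoint>/v1"
interpreter.llm.api_key = "<my-api-key>"

# stream=True returns a generator of response chunks; iterating it sends the request.
for chunk in interpreter.chat("What's 34/24?", stream=True):
    print(chunk)
```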
Expected behavior
The Python-mode request should succeed, just as it does in command-line mode.
Screenshots
Command-line mode requesting a non-OpenAI model succeeds.
Python-mode requests for non-OpenAI models fail with:
litellm.llms.openai.common_utils.OpenAIError: {"error":{"message":"deepseek-chat is not a valid model ID","code":400},"user_id":"user_2dzIMQc2DXdaKhwdKz81nPS4sqH"}
litellm.exceptions.BadRequestError: litellm.BadRequestError: DeepseekException - {"error":{"message":"deepseek-chat is not a valid model ID","code":400},"user_id":"user_2dzIMQc2DXdaKhwdKz81nPS4sqH"}
Open Interpreter version
0.4.3
Python version
Python 3.9.13
Operating System name and version
CentOS 7
Additional context
No response