Replies: 6 comments
-
These models do not conform to the OpenAI format at the moment (the irony). There was a PR to enable them, but it was never updated to a mergeable state.
-
@withsmilo Although you have failed to verify the
-
You have to create a special case for these models, although it's much easier with OpenAI's own models. System messages are probably still going through as well. Try running the configuration with Azure OpenAI as the provider.
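For what it's worth, here is a minimal sketch of such a special case, assuming a request shape like OpenAI's chat completions API. The model list and the `isReasoningModel` / `adaptForReasoningModel` names are hypothetical, not code from this repo:

```typescript
// Hypothetical sketch, not the actual Copilot code: model list, names,
// and request shape are assumptions based on OpenAI's chat completions API.
const REASONING_MODELS = ["o1", "o1-mini", "o1-preview", "o3-mini"];

function isReasoningModel(model: string): boolean {
  return REASONING_MODELS.some((m) => model === m || model.startsWith(`${m}-`));
}

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatRequest {
  model: string;
  messages: ChatMessage[];
  max_tokens?: number;
  max_completion_tokens?: number;
}

// Rewrite a standard request so the o1 series accepts it:
// rename max_tokens and downgrade system messages to user messages.
function adaptForReasoningModel(req: ChatRequest): ChatRequest {
  if (!isReasoningModel(req.model)) return req;
  const { max_tokens, messages, ...rest } = req;
  return {
    ...rest,
    // The o1 series rejects max_tokens and expects max_completion_tokens.
    ...(max_tokens !== undefined ? { max_completion_tokens: max_tokens } : {}),
    // Early o1 releases also rejected the system role entirely.
    messages: messages.map((m): ChatMessage =>
      m.role === "system" ? { ...m, role: "user" } : m
    ),
  };
}
```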
-
@Emt-lin, Indeed! I finally got a successful reply from
-
Is there any update here on how to get these models working properly? o3-mini was also just released today, so I would love to pipe that in.
-
I don't think it will be too difficult to make the changes. If you can tell me which direction to go, I can create a PR to add o1-mini and o1.
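One possible direction, as a hedged sketch: intercept the outgoing request just before it is sent and apply a rewrite like the `adaptForReasoningModel` sketch earlier in this thread. `sendChatRequest` and its parameters are hypothetical names, not the plugin's real API:

```typescript
// Hypothetical integration point; none of these names come from the repo.
async function sendChatRequest(
  baseUrl: string,
  apiKey: string,
  req: ChatRequest
): Promise<unknown> {
  const body = adaptForReasoningModel(req); // rewrite sketched above
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    throw new Error(`Chat request failed: ${res.status} ${await res.text()}`);
  }
  return res.json();
}
```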
-
Copilot version: v2.7.12
Describe how to reproduce
Hi, when I was trying to add a new model (`o1` or `o1-mini`) using a 3rd-party OpenAI-format endpoint, I got an error message from our custom endpoint (see the screenshot below). Googling led me to https://community.openai.com/t/why-was-max-tokens-changed-to-max-completion-tokens/938077/1: the OpenAI team said the o1 series only supports `max_completion_tokens` instead of `max_tokens`.
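For concreteness, the difference comes down to a single field in the request body (values illustrative only):

```typescript
// Rejected by the o1 series (the error in the screenshot below):
const oldBody = {
  model: "o1-mini",
  messages: [{ role: "user", content: "Hello" }],
  max_tokens: 1024,
};

// Accepted, per the OpenAI forum thread linked above:
const newBody = {
  model: "o1-mini",
  messages: [{ role: "user", content: "Hello" }],
  max_completion_tokens: 1024,
};
```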
Expected behavior
I want the o1 series to be added without any issues.
Screenshots
