Description
Version (please complete the following information):
- OS: Windows
- Browser Edge
- Web Clipper version: [e.g. 0.11.9]
Describe the bug
Hi, I'm trying to set up Clipper/Interpreter with Azure OpenAI, but I can't use my gpt-5 model deployments. I get errors like:
Azure OpenAI error: {
  "error": {
    "message": "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.",
    "type": "invalid_request_error",
    "param": "max_tokens",
    "code": "unsupported_parameter"
  }
}
and I need to use a wonky base URL like https://xxxxxxx.openai.azure.com/openai/deployments/gpt-5-mini/chat/completions?api-version=2025-01-01-preview
if I want it to get past the 'deployment not found' issue (probably another bug report to file).
The default URL supplied in Azure AI Foundry is https://xxxxxxxxx.openai.azure.com/openai/responses?api-version=2025-04-01-preview, but that doesn't work for me. I have a URL of this form working for my gpt-4.1-mini model; a true 'base URL' that doesn't include the deployment doesn't work.
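For reference, here is a minimal sketch of the request shape the error above implies. The resource name, deployment name, and token limit are placeholders, not values from my setup; the api-versions and the max_completion_tokens parameter name come from the error message and URLs quoted above. This builds the per-deployment URL that works for me (rather than a plain base URL) and a payload that newer deployments accept:

```python
def build_request(resource: str, deployment: str, api_version: str, prompt: str):
    """Build the full per-deployment Azure OpenAI chat-completions URL and a
    payload using max_completion_tokens, which gpt-5 deployments require
    (they reject the legacy max_tokens with 'unsupported_parameter')."""
    url = (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        # gpt-5 deployments reject 'max_tokens'; the replacement name below
        # is taken from the error message in this report.
        "max_completion_tokens": 256,  # placeholder limit
    }
    return url, payload

# Placeholder resource/deployment names, api-version from the report:
url, payload = build_request("myresource", "gpt-5-mini", "2025-01-01-preview", "hello")
```

The point is that the plugin would need to (a) target the deployment-scoped chat/completions path rather than a bare base URL, and (b) send max_completion_tokens for these models.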
Expected behavior
I should be able to use the configuration supplied in Foundry to connect the Interpreter.

URLs where the bug occurs
Any specific web pages where the bug can be replicated.
To reproduce
Steps to reproduce the behavior:
- Try to connect gpt-4.1 with the default URL supplied by Foundry (https://openairesourcejmcdonald.openai.azure.com/openai/deployments/gpt-4.1-mini/chat/completions?api-version=2025-01-01-preview)
- Note that it doesn't find the deployment.
- Try again with gpt-5: it won't find the deployment either, and after modifying the URL you get the parameter error ('max_tokens').