BUG: azure-openai model deployments are mismatched with clipper/interpreter settings #614

@joshjm

Description

Version (please complete the following information):

  • OS: Windows
  • Browser: Edge
  • Web Clipper version: [e.g. 0.11.9]

Describe the bug
Hi, I'm trying to set up Clipper/Interpreter with Azure OpenAI, but I can't use my GPT-5 model deployments. I get errors like:

Azure OpenAI error: {
  "error": {
    "message": "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.",
    "type": "invalid_request_error",
    "param": "max_tokens",
    "code": "unsupported_parameter"
  }
}
I also need to use a wonky base URL like https://xxxxxxx.openai.azure.com/openai/deployments/gpt-5-mini/chat/completions?api-version=2025-01-01-preview to get past the "deployment not found" issue (probably another bug report to make).
The default URL supplied in Azure AI Foundry is https://xxxxxxxxx.openai.azure.com/openai/responses?api-version=2025-04-01-preview, but that doesn't seem to work for me. A URL like the first one does work for my GPT-4.1-mini model, but a true "base URL" that doesn't include the deployment doesn't work.
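For context, the error above suggests newer Azure OpenAI deployments (GPT-5 family and similar) reject `max_tokens` and expect `max_completion_tokens` in the chat-completions body. A minimal sketch of a workaround on the client side, assuming a prefix check on the deployment name (the prefix list, helper names, and resource name are my own illustrative assumptions, not Clipper's actual code):

```python
# Deployments assumed to reject 'max_tokens' (illustrative list, not exhaustive).
NEEDS_COMPLETION_TOKENS = ("gpt-5", "o1", "o3")

def adapt_payload(payload: dict, deployment: str) -> dict:
    """Return a copy of the chat-completions body with 'max_tokens'
    renamed to 'max_completion_tokens' for deployments that require it."""
    body = dict(payload)
    if deployment.startswith(NEEDS_COMPLETION_TOKENS) and "max_tokens" in body:
        body["max_completion_tokens"] = body.pop("max_tokens")
    return body

def deployment_url(resource: str, deployment: str, api_version: str) -> str:
    """Build the deployment-scoped chat-completions URL of the shape
    that worked in this report ('resource' is a placeholder)."""
    return (f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")
```

For example, `adapt_payload({"max_tokens": 512}, "gpt-5-mini")` would send `max_completion_tokens` instead, while the same call for a `gpt-4.1-mini` deployment would leave the body untouched.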

Expected behavior

I should be able to use the config supplied in Foundry to connect the Interpreter.

URLs where the bug occurs

Any specific web pages where the bug can be replicated.

To reproduce

Steps to reproduce the behavior:

Metadata

Assignees: no one assigned
Labels: bug (Something isn't working)
Type: no type
Projects: no projects
Milestone: no milestone