Add Mistral model support to OpenAIChatCompletionClient #7333
Open
he-yufeng wants to merge 1 commit into microsoft:main from
Conversation
anandfresh approved these changes on Mar 2, 2026
Author
Hey, any chance this can get merged? It's been approved for a couple of weeks now.
Author
@anandfresh any chance you could merge this? It's been approved since March.
Author
Gentle bump: this has been approved for a while now. Is anything needed from my side to get it across the line?
Add Mistral AI model entries (mistral-large, mistral-small, codestral, pixtral, ministral, open-mistral-nemo, open-codestral-mamba) to the model registry with capabilities, token limits, and alias pointers. Auto-detect base_url (https://api.mistral.ai/v1/) and the MISTRAL_API_KEY environment variable when a Mistral model is used, following the same pattern as Gemini, Anthropic, and Llama support.

Closes microsoft#6151

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
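The prefix-based detection and env-var fallback described in the commit message can be sketched roughly as follows. This is an illustrative standalone sketch, not the PR's actual code; the prefix tuple, `is_mistral_model`, and `resolve_client_defaults` names are assumptions made for the example.

```python
import os

# Assumed prefix list covering the model families named in the commit
# message (mistral-, codestral-, pixtral-, ministral-, open-mistral-,
# open-codestral-); the real registry may enumerate these differently.
_MISTRAL_PREFIXES = (
    "mistral-",
    "codestral-",
    "pixtral-",
    "ministral-",
    "open-mistral-",
    "open-codestral-",
)

MISTRAL_BASE_URL = "https://api.mistral.ai/v1/"


def is_mistral_model(model: str) -> bool:
    """Return True if the model name starts with a known Mistral prefix."""
    return model.startswith(_MISTRAL_PREFIXES)


def resolve_client_defaults(model: str) -> dict:
    """Fill in base_url and api_key for Mistral models, mirroring the
    auto-detection pattern the PR follows for Gemini/Anthropic/Llama."""
    if is_mistral_model(model):
        return {
            "base_url": MISTRAL_BASE_URL,
            "api_key": os.environ.get("MISTRAL_API_KEY"),
        }
    return {}
```

With something like this in place, a client constructed with only a Mistral model name can pick up the endpoint and credentials automatically instead of requiring both to be passed explicitly.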
6aa632e to 3c8f335
Contributor
@he-yufeng please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.
Summary

- _model_info.py: model pointers (aliases), model info (capabilities), and token limits for mistral-large-2411, mistral-small-2503, codestral-2501, pixtral-large-2411, pixtral-12b-2409, open-mistral-nemo-2407, ministral-8b-2410, open-codestral-mamba, and mistral-embed
- Auto-detect base_url (https://api.mistral.ai/v1/) and the MISTRAL_API_KEY environment variable in OpenAIChatCompletionClient, following the existing pattern for Gemini, Anthropic, and Llama models
- is_mistral_model() helper using prefix-based matching for all Mistral model name prefixes

Motivation
Currently, using Mistral models requires manually specifying base_url and api_key. This PR adds the same level of auto-detection already available for Gemini, Anthropic, and Llama models, allowing users to simply write OpenAIChatCompletionClient(model="mistral-large-latest").

Test plan
- mistral-large-latest resolves to mistral-large-2411 via model pointers
- OpenAIChatCompletionClient(model="mistral-large-latest") auto-sets base_url and reads MISTRAL_API_KEY

Closes #6151
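The alias resolution exercised in the test plan amounts to a pointer-table lookup. A minimal sketch, assuming a dict-based pointer table (the pinned versions shown are taken from the model list in the summary; the actual mapping in _model_info.py may differ):

```python
# Illustrative model-pointer table: "-latest" aliases resolve to pinned
# versions. Entries are assumptions for the example, not the PR's table.
_MODEL_POINTERS = {
    "mistral-large-latest": "mistral-large-2411",
    "mistral-small-latest": "mistral-small-2503",
    "codestral-latest": "codestral-2501",
}


def resolve_model(model: str) -> str:
    """Follow the alias pointer if one exists; otherwise return the name as-is."""
    return _MODEL_POINTERS.get(model, model)
```

Pinned version names pass through unchanged, so capability and token-limit lookups only ever need entries for the pinned identifiers.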
🤖 Generated with Claude Code