Summary
When selecting Mistral AI as the provider, mistral-vibe-cli-latest should be used as the default model.
Problem
Currently, devstral-latest is preconfigured as the default model for Mistral AI. However, devstral is outdated; Mistral Vibe itself uses mistral-medium-3.5 (trained April 2026) as its default model.
Proposed Direction
When I enter mistral-vibe-cli-latest together with the Vibe API key, requests seem to count against Vibe's rate limits (which appear to be much higher than those for normal API calls on Mistral). With that option I haven't had any rate limit problems, and it still uses the more powerful model.
Alternatives Considered
When I explicitly set mistral-medium-3.5 as the model in openclaude, I get "Rate Limit reached" on every other call (even on a Mistral Pro plan).
Additional Context
Another option would be to keep both model settings but use the Vibe one as the default (i.e., entering "mistral-vibe-cli-latest;devstral-latest" as the model).