feat: add MiniMax as a supported LLM provider with M2.7 models #5666
octo-patch wants to merge 2 commits into comet-ml:main
Conversation
Add MiniMax integration documentation, temperature constraint handling, and tests to support MiniMax models (MiniMax-M2.5, MiniMax-M2.5-highspeed) as a first-class provider in Opik.

Changes:
- Add MiniMax integration documentation page (minimax.mdx) with usage examples for the OpenAI SDK, LiteLLM, and evaluation metrics
- Add MiniMax to the Model Providers section in the overview and README
- Add a MiniMax navigation entry and redirect in docs.yml
- Add a MiniMax-specific temperature filter in util.py (MiniMax requires temperature > 0, so zero values are clamped to 0.01)
- Add unit tests for MiniMax temperature constraint handling
## Important Notes

- **Temperature**: MiniMax models require temperature to be strictly greater than 0. If you set `temperature=0`, it will be automatically adjusted when using LiteLLM with Opik.
- **API Compatibility**: MiniMax's API is fully compatible with the OpenAI SDK, so any OpenAI-compatible tool or framework will work with MiniMax.
Docs claim MiniMax temperature=0 is auto-adjusted when using LiteLLM with Opik, but the clamp is only applied in LiteLLMChatModel (sdks/python/src/opik/evaluation/models/litellm/util.py) and the Opik monitoring path (sdks/python/src/opik/evaluation/models/litellm/opik_monitor.py lines 20–94) never calls _apply_minimax_filters. Calling litellm.completion with temperature=0 and only Opik callbacks still sends temperature=0 to MiniMax and fails; can we either narrow the doc note to the evaluation-model use case or add the clamp to the LiteLLM monitoring integration?
Finding type: Logical Bugs | Severity: 🔴 High
Prompt for AI Agents:
In apps/opik-documentation/documentation/fern/docs/tracing/integrations/minimax.mdx
around lines 208-212, the note incorrectly states that MiniMax temperature=0 is
automatically adjusted when using LiteLLM with Opik. Change the wording to accurately
reflect the current behavior: state that the automatic clamp only occurs when using the
LiteLLMChatModel evaluation integration, not when using the generic OpikLogger
callbacks; alternatively, implement the clamp in the LiteLLM monitoring integration
(sdks/python/src/opik/evaluation/models/litellm/opik_monitor.py) so OpikLogger also
applies _apply_minimax_filters before sending requests. Make the doc update concise and
unambiguous, or if you implement the code change, add a brief comment in opik_monitor.py
indicating why the clamp is needed for MiniMax compatibility.
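If the code-change route is taken, the fix could also be applied on the caller side until the monitoring integration clamps temperatures itself. A minimal sketch of such a pre-clamp follows; the helper name `clamp_minimax_temperature` and the 0.01 floor mirror the evaluation-path behavior described above, but this function is hypothetical and not part of the SDK:

```python
def clamp_minimax_temperature(params: dict) -> dict:
    """Ensure MiniMax receives a strictly positive temperature.

    MiniMax rejects temperature == 0, so non-positive (or missing/invalid)
    values are raised to a small positive floor before the request is sent.
    """
    if "temperature" in params:
        try:
            valid = float(params["temperature"]) > 0.0
        except (TypeError, ValueError):
            valid = False  # None or non-numeric values are also invalid
        if not valid:
            params["temperature"] = 0.01
    return params
```

A caller using the monitoring-only path could then run its kwargs through this helper before invoking `litellm.completion(**clamp_minimax_temperature(kwargs))`.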
def _apply_minimax_filters(
    params: Dict[str, Any],
    already_warned: Set[str],
    warn: Callable[[str, Any], None],
) -> None:
    """Apply MiniMax specific parameter filters.

    MiniMax requires temperature to be strictly greater than 0.
    A temperature of 0 is rejected by the API, so we clamp it to a small
    positive value to avoid errors.
    """
params['temperature'] is treated as optional, but _apply_minimax_filters only clamps values that parse to float and are <= 0.0; float(value) raises for None/non-numeric inputs, so params['temperature'] remains None. This lets MiniMax requests send a nullish temperature and the API rejects them. Can we normalize invalid/null temperatures before returning (e.g., set to 0.01 or drop/log them) so callers can rely on a positive number?
Finding type: Validate nullable inputs | Severity: 🔴 High
Prompt for AI Agents:
In sdks/python/src/opik/evaluation/models/litellm/util.py around lines 109-128, the
_apply_minimax_filters function currently only clamps numeric temperatures <= 0 but
leaves None or non-numeric values unchanged. Change the logic so that if params contains
"temperature" but parsing to float fails or the parsed value is None, set
params["temperature"] = 0.01 (the same clamp used for non-positive numbers) and call
warn(...) to indicate an invalid/null temperature was replaced; also keep the existing
behavior of clamping numeric <= 0 to 0.01. This ensures downstream callers never receive
a null/invalid temperature.
- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model lists (frontend, backend, OpenRouter enum, pricing JSON)
- Update default model references from M2.5 to M2.7 in docs and tests
- Add M2.7 pricing data (cache_read updated to $0.06/M tokens)
- Keep all previous models as available alternatives
from opik.evaluation.models import LiteLLMChatModel

# Create a MiniMax model for evaluation
minimax_model = LiteLLMChatModel(model_name="minimax/MiniMax-M2.7")
Should we replace the hand‑pasted minimax_model example with the generated example link or expand it to include the checklist-required intent line, a minimal runnable context with an inline behavior comment, and a maintenance note linking to the canonical/generated example and owner?
Finding type: Keep docs accurate | Severity: 🟢 Low
Prompt for AI Agents:
In apps/opik-documentation/documentation/fern/docs/tracing/integrations/minimax.mdx
around line 190, the single-line example minimax_model =
LiteLLMChatModel(model_name="minimax/MiniMax-M2.7") is hand-pasted and missing required
metadata. First search the autogenerated SDK/docs and examples for a canonical/generated
Minimax example and, if found, replace this line with a link to that canonical example
(include the file path/URL and a brief note). If no canonical example exists, expand
this snippet by adding: (1) a one-line intent/trigger sentence immediately above it
describing when to use the snippet, (2) a minimal runnable context (import and variable
assignment) with an inline comment on the same line explaining the observable behavior,
and (3) a maintenance note below pointing to the canonical/generated artifact (or state
“none”), include the owning team, and an update cadence. Ensure the maintenance note
references a repo path or URL rather than vague text.
Thanks for the review, @aadereiko! I don't have a video recording setup available, but I can confirm the MiniMax models work through the standard OpenAI-compatible API. If it would help, I can add screenshots of the test results or API response examples instead.
andrescrz left a comment
Hi @octo-patch
Thanks for the update, really appreciate the contribution here!
Would it be possible to provide some form of proof that this is working end-to-end? A local recording using Opik would usually be enough, but if that’s not feasible, screenshots of test results or example API responses would also work.
On my side, I’ve focused on reviewing the BE part. One thing to flag though: these changes might get overwritten, since LLM providers and models are now heavily automated on our end. To make this sustainable, we’d need to integrate this provider into the automation layer; otherwise, the current implementation may not persist.
We’ll also get someone from the team to finalize the review on the Python SDK part.
This file is automatically synced from the LiteLLM repository, so your changes will be overwritten. If you want cost tracking for these models, you'll need to send a PR to that repo first. If accepted, the changes will be propagated here.
Same about this file. It's an automated copy and it will be overwritten.
This is automated now, we will need a review from @AndreiCautisanu to incorporate these.
Same about this, this is automated now @AndreiCautisanu
Same here about the automation. This provider would need to be incorporated to prevent being overwritten. @AndreiCautisanu
Thank you @andrescrz and @aadereiko for the thorough review!

Regarding the automated files:

Regarding proof of working end-to-end:

Shall I go ahead and trim the PR to only the non-automated files (Python SDK changes, docs, and tests)?
Summary
Add MiniMax as a supported LLM provider for Opik, with the latest M2.7 flagship model as default.
Changes
Why
MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities, offering a 1M token context window.
Testing