1 change: 1 addition & 0 deletions README.md
Original file line number Diff line number Diff line change
@@ -228,6 +228,7 @@ The easiest way to log traces is to use one of our direct integrations. Opik sup
| Mastra | Log traces for Mastra AI workflow framework calls | [Documentation](https://www.comet.com/docs/opik/integrations/mastra?utm_source=opik&utm_medium=github&utm_content=mastra_link&utm_campaign=opik) |
| Microsoft Agent Framework (Python) | Log traces for Microsoft Agent Framework calls | [Documentation](https://www.comet.com/docs/opik/integrations/microsoft-agent-framework?utm_source=opik&utm_medium=github&utm_content=agent_framework_link&utm_campaign=opik) |
| Microsoft Agent Framework (.NET) | Log traces for Microsoft Agent Framework .NET calls | [Documentation](https://www.comet.com/docs/opik/integrations/microsoft-agent-framework-dotnet?utm_source=opik&utm_medium=github&utm_content=agent_framework_dotnet_link&utm_campaign=opik) |
| MiniMax | Log traces for MiniMax LLM calls | [Documentation](https://www.comet.com/docs/opik/integrations/minimax?utm_source=opik&utm_medium=github&utm_content=minimax_link&utm_campaign=opik) |
| Mistral AI | Log traces for Mistral AI LLM calls | [Documentation](https://www.comet.com/docs/opik/integrations/mistral?utm_source=opik&utm_medium=github&utm_content=mistral_link&utm_campaign=opik) |
| n8n | Log traces for n8n workflow executions | [Documentation](https://www.comet.com/docs/opik/integrations/n8n?utm_source=opik&utm_medium=github&utm_content=n8n_link&utm_campaign=opik) |
| Novita AI | Log traces for Novita AI LLM calls | [Documentation](https://www.comet.com/docs/opik/integrations/novita-ai?utm_source=opik&utm_medium=github&utm_content=novita_ai_link&utm_campaign=opik) |
6 changes: 6 additions & 0 deletions apps/opik-documentation/documentation/fern/docs.yml
@@ -592,6 +592,9 @@ navigation:
- page: Groq
path: docs/tracing/integrations/groq.mdx
slug: groq
- page: MiniMax
path: docs/tracing/integrations/minimax.mdx
slug: minimax
- page: Mistral AI
path: docs/tracing/integrations/mistral.mdx
slug: mistral
@@ -1206,6 +1209,9 @@ redirects:
- source: "/docs/opik/tracing/integrations/groq"
destination: "/docs/opik/integrations/groq"
permanent: true
- source: "/docs/opik/tracing/integrations/minimax"
destination: "/docs/opik/integrations/minimax"
permanent: true
- source: "/docs/opik/tracing/integrations/adk"
destination: "/docs/opik/integrations/adk"
permanent: true
@@ -0,0 +1,224 @@
---
description: Start here to integrate Opik into your MiniMax-based GenAI application
  for end-to-end LLM observability, unit testing, and optimization.
headline: MiniMax | Opik Documentation
og:description: Learn to integrate MiniMax's powerful language models with Opik,
  enabling seamless API call tracking and evaluation for your projects.
og:site_name: Opik Documentation
og:title: Integrate MiniMax with Opik - Opik
title: Observability for MiniMax with Opik
---

[MiniMax](https://www.minimax.io/) is a leading AI company providing powerful large language models through an OpenAI-compatible API. Their flagship models, MiniMax-M2.5 and MiniMax-M2.5-highspeed, offer a 204K context window and strong performance across a wide range of tasks.

This guide explains how to integrate Opik with MiniMax using their OpenAI-compatible API endpoints. With the Opik OpenAI integration, you can easily track and evaluate your MiniMax API calls within your Opik projects: Opik automatically logs the input prompt, the model used, token usage, and the generated response.

## Getting Started

### Account Setup

First, you'll need a Comet.com account to use Opik. If you don't have one, you can [sign up for free](https://www.comet.com/signup?utm_source=opik&utm_medium=docs&utm_campaign=minimax-integration).

### Installation

Install the required packages:

```bash
pip install opik openai
```

### Configuration

Configure Opik to send traces to your Comet project:

```python
import opik

opik.configure(
    project_name="your-project-name",
    workspace="your-workspace-name",
)
```

<Tip>
Opik is fully open-source and can be run locally or through the Opik Cloud platform. You can learn more about hosting
Opik on your own infrastructure in the [self-hosting guide](/self-host/overview).
</Tip>

### Environment Setup

Set your MiniMax API key as an environment variable:

```bash
export MINIMAX_API_KEY="your-minimax-api-key"
```

You can obtain a MiniMax API key from the [MiniMax platform](https://platform.minimaxi.com/).

## Tracking MiniMax calls

### Using the OpenAI Python SDK

The easiest way to call MiniMax with Opik is to use the OpenAI Python SDK and the `track_openai` wrapper. Since MiniMax provides an OpenAI-compatible API, you simply point the OpenAI client to the MiniMax base URL:

```python
from opik.integrations.openai import track_openai
from openai import OpenAI
import os

# Create the OpenAI client that points to the MiniMax API
client = OpenAI(
    api_key=os.environ.get("MINIMAX_API_KEY"),
    base_url="https://api.minimax.io/v1",
)

# Wrap your OpenAI client to track all calls to Opik
client = track_openai(client, project_name="minimax-demo")

# Call the API
response = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, what can you help me with?"},
    ],
)

print(response.choices[0].message.content)
```

### Using LiteLLM

You can also use LiteLLM to call MiniMax models with Opik tracking:

```bash
pip install opik litellm
```

```python
from litellm.integrations.opik.opik import OpikLogger
import litellm
import os

os.environ["OPIK_PROJECT_NAME"] = "minimax-litellm-demo"

opik_logger = OpikLogger()
litellm.callbacks = [opik_logger]

response = litellm.completion(
    model="minimax/MiniMax-M2.5",
    messages=[{"role": "user", "content": "Write a short story about AI."}],
)

print(response.choices[0].message.content)
```

## Advanced Usage

### Using with the @track Decorator

You can combine the tracked client with Opik's `@track` decorator for more comprehensive tracing:

```python
from opik import track
from opik.integrations.openai import track_openai
from openai import OpenAI
import os

client = OpenAI(
    api_key=os.environ.get("MINIMAX_API_KEY"),
    base_url="https://api.minimax.io/v1",
)
client = track_openai(client)

@track
def analyze_text(text: str):
    response = client.chat.completions.create(
        model="MiniMax-M2.5",
        messages=[
            {"role": "user", "content": f"Analyze this text: {text}"}
        ],
    )
    return response.choices[0].message.content

@track
def summarize_analysis(analysis: str):
    response = client.chat.completions.create(
        model="MiniMax-M2.5-highspeed",
        messages=[
            {"role": "user", "content": f"Summarize this analysis: {analysis}"}
        ],
    )
    return response.choices[0].message.content

@track
def full_pipeline(text: str):
    analysis = analyze_text(text)
    summary = summarize_analysis(analysis)
    return summary

result = full_pipeline("Open source AI models are transforming the industry.")
```

### Streaming Responses

MiniMax supports streaming responses, which are also tracked by Opik:

```python
response = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    stream=True,
)

for chunk in response:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```
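If you also need the complete text after streaming (for logging or further processing), you can accumulate the content deltas as they arrive. A minimal helper sketch — the name `collect_stream_text` is ours, not part of Opik or the OpenAI SDK:

```python
def collect_stream_text(chunks):
    """Concatenate the content deltas from a streamed chat completion."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta is not None:  # some chunks (e.g. the final one) carry no content
            parts.append(delta)
    return "".join(parts)
```

With the streaming call above, `full_text = collect_stream_text(response)` yields the same text the loop printed.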

### Using MiniMax for Evaluation Metrics

You can use MiniMax models as the LLM judge in Opik's evaluation metrics:

```python
from opik.evaluation.metrics import Hallucination, AnswerRelevance
from opik.evaluation.models import LiteLLMChatModel

# Create a MiniMax model for evaluation
minimax_model = LiteLLMChatModel(model_name="minimax/MiniMax-M2.5")

# Use it as the judge in evaluation metrics
hallucination_metric = Hallucination(model=minimax_model)
relevance_metric = AnswerRelevance(model=minimax_model)
```

## Supported Models

MiniMax provides the following models:

| Model | Context Window | Description |
|-------|---------------|-------------|
| `MiniMax-M2.5` | 204K tokens | Flagship model with strong reasoning and generation capabilities |
| `MiniMax-M2.5-highspeed` | 204K tokens | Optimized for faster inference with comparable quality |

For the latest model information, visit the [MiniMax platform](https://platform.minimaxi.com/).

## Important Notes

- **Temperature**: MiniMax models require temperature to be strictly greater than 0. If you set `temperature=0`, it will be automatically adjusted when using LiteLLM with Opik.
- **API Compatibility**: MiniMax's API is fully compatible with the OpenAI SDK, so any OpenAI-compatible tool or framework will work with MiniMax.
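
If you want to guard against rejected temperatures yourself when calling MiniMax directly, a small helper sketch can normalize the value before the request — `safe_minimax_temperature` is a hypothetical name of ours, mirroring the clamp behavior described above:

```python
def safe_minimax_temperature(temperature, minimum=0.01):
    """Return a temperature MiniMax will accept (strictly greater than 0)."""
    try:
        value = float(temperature)
    except (TypeError, ValueError):
        # None or non-numeric input: fall back to the minimum
        return minimum
    return value if value > 0.0 else minimum
```

For example, pass `temperature=safe_minimax_temperature(user_temperature)` to `client.chat.completions.create(...)`.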

Comment on lines +210 to +214
Contributor

Docs claim MiniMax temperature=0 is auto-adjusted when using LiteLLM with Opik, but the clamp is only applied in LiteLLMChatModel (sdks/python/src/opik/evaluation/models/litellm/util.py) and the Opik monitoring path (sdks/python/src/opik/evaluation/models/litellm/opik_monitor.py lines 20–94) never calls _apply_minimax_filters. Calling litellm.completion with temperature=0 and only Opik callbacks still sends temperature=0 to MiniMax and fails; can we either narrow the doc note to the evaluation-model use case or add the clamp to the LiteLLM monitoring integration?

Finding type: Logical Bugs | Severity: 🔴 High



Prompt for AI Agents:

In apps/opik-documentation/documentation/fern/docs/tracing/integrations/minimax.mdx
around lines 208-212, the note incorrectly states that MiniMax temperature=0 is
automatically adjusted when using LiteLLM with Opik. Change the wording to accurately
reflect the current behavior: state that the automatic clamp only occurs when using the
LiteLLMChatModel evaluation integration, not when using the generic OpikLogger
callbacks; alternatively, implement the clamp in the LiteLLM monitoring integration
(sdks/python/src/opik/evaluation/models/litellm/opik_monitor.py) so OpikLogger also
applies _apply_minimax_filters before sending requests. Make the doc update concise and
unambiguous, or if you implement the code change, add a brief comment in opik_monitor.py
indicating why the clamp is needed for MiniMax compatibility.

## Environment Variables

Make sure to set the following environment variables:

```bash
# MiniMax Configuration
export MINIMAX_API_KEY="your-minimax-api-key"

# Opik Configuration
export OPIK_PROJECT_NAME="your-project-name"
export OPIK_WORKSPACE="your-workspace-name"
```
@@ -90,6 +90,7 @@ Opik helps you easily log, visualize, and evaluate everything from raw LLM calls
<Card title="Fireworks AI" href="/docs/opik/integrations/fireworks-ai" />
<Card title="Gemini" href="/docs/opik/integrations/gemini" />
<Card title="Groq" href="/docs/opik/integrations/groq" />
<Card title="MiniMax" href="/docs/opik/integrations/minimax" />
<Card title="Mistral AI" href="/docs/opik/integrations/mistral" />
<Card title="Novita AI" href="/docs/opik/integrations/novita-ai" />
<Card title="Ollama" href="/docs/opik/integrations/ollama" />
28 changes: 28 additions & 0 deletions sdks/python/src/opik/evaluation/models/litellm/util.py
@@ -48,6 +48,12 @@ def apply_model_specific_filters(
        _apply_gpt5_filters(params, already_warned, warn)
        return

    if normalized_model_name.startswith("minimax/") or normalized_model_name.startswith(
        "minimax."
    ):
        _apply_minimax_filters(params, already_warned, warn)
        return

    if normalized_model_name.startswith("dashscope/"):
        _apply_qwen_dashscope_filters(params, already_warned, warn)
        return
@@ -100,6 +106,28 @@ def _apply_gpt5_filters(
)


def _apply_minimax_filters(
    params: Dict[str, Any],
    already_warned: Set[str],
    warn: Callable[[str, Any], None],
) -> None:
    """Apply MiniMax specific parameter filters.

    MiniMax requires temperature to be strictly greater than 0.
    A temperature of 0 is rejected by the API, so we clamp it to a small
    positive value to avoid errors.
    """

Comment on lines +109 to +120
Contributor

params['temperature'] is treated as optional, but _apply_minimax_filters only clamps values that parse to float and are <= 0.0; float(value) raises for None/non-numeric inputs, so params['temperature'] remains None. This lets MiniMax requests send a nullish temperature and the API rejects them. Can we normalize invalid/null temperatures before returning (e.g., set to 0.01 or drop/log them) so callers can rely on a positive number?

Finding type: Validate nullable inputs | Severity: 🔴 High



Prompt for AI Agents:

In sdks/python/src/opik/evaluation/models/litellm/util.py around lines 109-128, the
_apply_minimax_filters function currently only clamps numeric temperatures <= 0 but
leaves None or non-numeric values unchanged. Change the logic so that if params contains
"temperature" but parsing to float fails or the parsed value is None, set
params["temperature"] = 0.01 (the same clamp used for non-positive numbers) and call
warn(...) to indicate an invalid/null temperature was replaced; also keep the existing
behavior of clamping numeric <= 0 to 0.01. This ensures downstream callers never receive
a null/invalid temperature.

    if "temperature" in params:
        value = params["temperature"]
        try:
            numeric_value = float(value)
        except (TypeError, ValueError):
            numeric_value = None
        if numeric_value is not None and numeric_value <= 0.0:
            params["temperature"] = 0.01


def _apply_qwen_dashscope_filters(
    params: Dict[str, Any],
    already_warned: Set[str],
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -162,6 +162,41 @@ def test_litellm_chat_model_drops_temperature_for_provider_prefixed_gpt5(
assert not caplog.records


def test_litellm_chat_model_clamps_temperature_for_minimax(monkeypatch):
    stub = _install_litellm_stub(monkeypatch)

    model = litellm_chat_model.LiteLLMChatModel(
        model_name="minimax/MiniMax-M2.5",
        temperature=0,
    )

    # Temperature should be clamped to a small positive value
    assert model._completion_kwargs["temperature"] == 0.01

    model.generate_string("hello")

    assert stub._calls, "Expected completion to be invoked"
    _, _, kwargs = stub._calls[-1]
    assert kwargs["temperature"] == 0.01


def test_litellm_chat_model_preserves_valid_temperature_for_minimax(monkeypatch):
    stub = _install_litellm_stub(monkeypatch)

    model = litellm_chat_model.LiteLLMChatModel(
        model_name="minimax/MiniMax-M2.5",
        temperature=0.7,
    )

    assert model._completion_kwargs["temperature"] == 0.7

    model.generate_string("hello")

    assert stub._calls, "Expected completion to be invoked"
    _, _, kwargs = stub._calls[-1]
    assert kwargs["temperature"] == 0.7


def test_litellm_chat_model_drops_top_logprobs_for_dashscope(
    monkeypatch,
):