feat: add MiniMax as LLM provider with multi-provider utility #15
octo-patch wants to merge 2 commits into rohitg00:main
Conversation
Add utils/llm_provider.py that provides a unified interface for Anthropic, OpenAI, and MiniMax LLM providers with auto-detection from environment variables.

- MiniMax uses OpenAI-compatible API at api.minimax.io/v1
- Supports M2.7, M2.7-highspeed, M2.5, M2.5-highspeed models
- Temperature clamping to (0, 1] for MiniMax compatibility
- Think-tag stripping for MiniMax reasoning model responses
- Updated first_api_call.py with MiniMax demo
- Updated agent_loop.py with real multi-provider LLM support
- Updated docs and README with MiniMax provider info
- 35 unit tests + 3 integration tests
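The temperature clamping and think-tag stripping listed above can be sketched roughly as follows. The helper names here are illustrative assumptions, not necessarily the functions in `utils/llm_provider.py`:

```python
import re

def clamp_temperature(temperature: float) -> float:
    # MiniMax expects temperature in (0, 1]: nudge 0 up, cap values above 1.
    return min(max(temperature, 0.01), 1.0)

def strip_think_tags(text: str) -> str:
    # MiniMax reasoning models may emit <think>...</think> blocks before the answer;
    # remove them and trim surrounding whitespace.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
```

A caller would run every outgoing temperature through `clamp_temperature` and every MiniMax response through `strip_think_tags` before returning text to the user.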
📝 Walkthrough

This PR adds a unified LLM provider utility that auto-detects and wraps Anthropic, OpenAI, and MiniMax, plus documentation, example calls, agent integration updates, and unit/integration tests to exercise provider detection and chat behavior.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Client Code
    participant Detector as Provider Detector
    participant Factory as Client Factory
    participant Provider as LLM Provider
    participant Parser as Response Parser
    Client->>Detector: detect_provider() (check env vars)
    Detector-->>Client: provider name or error
    Client->>Factory: get_llm_client(provider?, api_key?, model?)
    Factory->>Factory: validate provider & api key
    Factory->>Provider: initialize SDK client (OpenAI/Anthropic)
    Provider-->>Factory: SDK client instance
    Factory-->>Client: LLMClient wrapper
    Client->>Factory: chat(llm, prompt, system?, model?, temperature?)
    Factory->>Provider: send chat request (provider-specific)
    Provider-->>Factory: response payload
    Factory->>Parser: parse/clean response (strip <think>, extract text)
    Parser-->>Factory: cleaned message
    Factory-->>Client: final response string
```
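The `detect_provider()` step in the diagram can be sketched as a simple scan over known environment variables. The mapping, dict order, and error message here are assumptions; the real implementation in `utils/llm_provider.py` may differ:

```python
import os

# Hypothetical env-var mapping, mirroring the three providers named in this PR.
ENV_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "minimax": "MINIMAX_API_KEY",
}

def detect_provider() -> str:
    # Return the first provider whose API key is set in the environment.
    for provider, env_key in ENV_KEYS.items():
        if os.environ.get(env_key):
            return provider
    raise EnvironmentError(
        "No LLM provider configured; set one of " + ", ".join(ENV_KEYS.values())
    )
```

Raising `EnvironmentError` for the unconfigured case matters later in this review: callers that fall back to demo mode must catch it explicitly.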
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🧹 Nitpick comments (6)
utils/llm_provider.py (2)
19-21: Unused import `field`.

The `field` import from `dataclasses` is not used in this module.

🔧 Proposed fix

```diff
-from dataclasses import dataclass, field
+from dataclasses import dataclass
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@utils/llm_provider.py` around lines 19 - 21, Remove the unused `field` import from the dataclasses import list in utils/llm_provider.py; update the top-level import line so it only imports what’s used (e.g., keep `dataclass`), ensuring there are no other references to `field` in this module (check any dataclass declarations like those using `@dataclass`) and run tests/lint to confirm the unused import warning is resolved.
119-140: Consider adding a defensive else clause.

If `PROVIDER_DEFAULTS` is extended with a new provider but this if/elif chain isn't updated, `client` would be unbound, causing an `UnboundLocalError`. A defensive else clause would make this failure explicit.

🛡️ Proposed defensive fix

```diff
 elif provider == "minimax":
     import openai as openai_sdk
     client = openai_sdk.OpenAI(
         api_key=key,
         base_url=config["base_url"],
     )
+else:
+    raise NotImplementedError(f"Provider '{provider}' is configured but not implemented")
 return LLMClient(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@utils/llm_provider.py` around lines 119 - 140, The provider branch leaves local variable client unbound if provider doesn't match "anthropic", "openai", or "minimax"; update the conditional in the function that constructs the client (the if/elif chain handling provider) to include a defensive else that raises a clear exception (e.g., ValueError or RuntimeError) stating the unknown provider and including the provider value, so that LLMClient(...) is never called with an uninitialized client; reference the symbols provider, client, and LLMClient when making the change.

phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py (2)
12-12: File handle not closed after reading.

The lambda `open(path).read()` doesn't close the file handle. In long-running agent loops, this could exhaust file descriptors.

♻️ Proposed fix using context manager pattern

```diff
 "read_file": {
     "description": "Read the contents of a file",
     "parameters": {
         "path": {"type": "string", "description": "File path to read"}
     },
-    "execute": lambda path: open(path).read() if os.path.exists(path) else f"File not found: {path}"
+    "execute": lambda path: _read_file(path)
 },
```

Add a helper function:

```python
def _read_file(path):
    if not os.path.exists(path):
        return f"File not found: {path}"
    with open(path) as f:
        return f.read()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py` at line 12, The current lambda used for "execute" opens files without closing them; replace it with a helper function (e.g., _read_file) that checks os.path.exists(path) and uses a with open(path) as f: context manager to read and return contents (or return the "File not found" message), then update the mapping to use "execute": _read_file so file handles are properly closed.
20-23: File handle not closed after writing.

Same issue as `read_file`: the file handle from `open(path, 'w')` isn't properly closed.

♻️ Proposed fix

```diff
 "write_file": {
     "description": "Write content to a file",
     "parameters": {
         "path": {"type": "string", "description": "File path to write"},
         "content": {"type": "string", "description": "Content to write"}
     },
-    "execute": lambda path, content: (
-        open(path, 'w').write(content),
-        f"Wrote {len(content)} chars to {path}"
-    )[1]
+    "execute": lambda path, content: _write_file(path, content)
 },
```

Add a helper function:

```python
def _write_file(path, content):
    with open(path, 'w') as f:
        f.write(content)
    return f"Wrote {len(content)} chars to {path}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py` around lines 20 - 23, The current inline lambda assigned to the "execute" action opens a file with open(path, 'w') but never closes it; implement a small helper function named _write_file(path, content) that uses a with open(path, 'w') as f: f.write(content) and returns the message f"Wrote {len(content)} chars to {path}", then replace the lambda in the "execute" mapping to call _write_file(path, content) (mirroring how read_file was fixed) so the file handle is properly closed.

tests/test_llm_provider.py (2)
319-367: Repeated setup code in each test method.

Each test method duplicates the `sys.path.insert` and import statements. Consider moving the setup to `setUpClass` or module level.

♻️ Proposed refactor using setUpClass

```diff
 class TestAgentLoop(unittest.TestCase):
     """Test agent loop with provider integration."""

+    @classmethod
+    def setUpClass(cls):
+        agent_path = os.path.join(
+            os.path.dirname(__file__), "..",
+            "phases", "14-agent-engineering", "01-the-agent-loop", "code"
+        )
+        sys.path.insert(0, agent_path)
+        from agent_loop import SimpleAgent, TOOLS
+        cls.SimpleAgent = SimpleAgent
+        cls.TOOLS = TOOLS
+
     def test_agent_extract_tool_calls(self):
-        sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))
-        agent_path = os.path.join(
-            os.path.dirname(__file__), "..",
-            "phases", "14-agent-engineering", "01-the-agent-loop", "code"
-        )
-        sys.path.insert(0, agent_path)
-        from agent_loop import SimpleAgent, TOOLS
-
-        agent = SimpleAgent(TOOLS)
+        agent = self.SimpleAgent(self.TOOLS)
         calls = agent._extract_tool_calls(
             "TOOL_CALL: list_files(path=/tmp)"
         )
         # ... rest of test
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_llm_provider.py` around lines 319 - 367, The tests in TestAgentLoop repeat identical sys.path.insert and import logic in each test method; move that setup into a class-level initializer to DRY the code—add a setUpClass method on TestAgentLoop that performs the sys.path.insert calls and imports agent_loop (obtaining SimpleAgent and TOOLS) and store them as class attributes (e.g., cls.SimpleAgent, cls.TOOLS), then update each test to use those class attributes and remove the duplicated sys.path/import lines.
311-317: Unused loop variable `name`.

The loop variable `name` is not used within the loop body.

🔧 Proposed fix

```diff
 def test_provider_info_fields(self):
     providers = list_providers()
-    for name, info in providers.items():
+    for _name, info in providers.items():
         self.assertIn("available", info)
         self.assertIn("env_key", info)
         self.assertIn("default_model", info)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_llm_provider.py` around lines 311 - 317, The test_provider_info_fields test iterates with an unused loop variable name; update the loop over list_providers() to avoid the unused variable by iterating over providers.values() or using an underscore (e.g., for _, info in providers.items()) so only info is referenced; keep the assertions on "available", "env_key", and "default_model" inside the same test_provider_info_fields function.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@phases/00-setup-and-tooling/04-apis-and-keys/docs/en.md`:
- Around line 86-102: The example uses os.environ to read MINIMAX_API_KEY but
never imports the os module; add an import os at the top of the snippet so the
openai.OpenAI client creation (openai.OpenAI) and subsequent call to
client.chat.completions.create work without NameError; update the snippet to
include "import os" before importing openai and constructing the client.
In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py`:
- Around line 59-66: The try/except around importing utils.llm_provider and
calling get_llm_client is catching all Exceptions and hiding real initialization
errors; change it to catch specific errors (e.g., ImportError,
ModuleNotFoundError, KeyError, ValueError, or the SDK-specific exception your
LLM client raises) and log the caught exception details when initialization
fails, while preserving the demo-mode fallback when no provider is configured;
update the block around get_llm_client, provider, and self.llm_client to handle
those specific exceptions and include the exception message in the
printed/logged output.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b0be8efa-efe9-4464-b2a8-d7d70204b462
📒 Files selected for processing (9)
- README.md
- phases/00-setup-and-tooling/04-apis-and-keys/code/first_api_call.py
- phases/00-setup-and-tooling/04-apis-and-keys/docs/en.md
- phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py
- tests/__init__.py
- tests/test_llm_provider.py
- tests/test_minimax_integration.py
- utils/__init__.py
- utils/llm_provider.py
```python
try:
    sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "..", "..", ".."))
    from utils.llm_provider import get_llm_client
    provider = provider or os.environ.get("LLM_PROVIDER")
    self.llm_client = get_llm_client(provider)
    print(f"  Using LLM: {self.llm_client.provider} ({self.llm_client.default_model})")
except Exception:
    print("  [No LLM configured — running in demo mode]")
```
Catching blind `Exception` hides initialization errors.

The bare `except Exception` swallows all errors, including import errors, configuration issues, and SDK bugs. This makes debugging difficult when things go wrong.
🛡️ Proposed fix to catch specific exceptions

```diff
 try:
     sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "..", "..", ".."))
     from utils.llm_provider import get_llm_client
     provider = provider or os.environ.get("LLM_PROVIDER")
     self.llm_client = get_llm_client(provider)
     print(f"  Using LLM: {self.llm_client.provider} ({self.llm_client.default_model})")
-except Exception:
+except (ImportError, EnvironmentError, ValueError) as e:
     print("  [No LLM configured — running in demo mode]")
+    # Optionally log the reason: print(f"  ({e})")
```

🧰 Tools
🪛 Ruff (0.15.7)
[warning] 65-65: Do not catch blind exception: Exception (BLE001)
Very good work @octo-patch

Thanks @rohitg00! Fixed both review items:

1. Added missing `import os` in the MiniMax API example code block
2. Replaced bare `except Exception` with specific `ImportError` and `(KeyError, ValueError)` handlers in the `agent_loop.py` `__init__`
Actionable comments posted: 2
♻️ Duplicate comments (1)
phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py (1)
59-68: ⚠️ Potential issue | 🟠 Major

Catch the no-provider path here too.

`utils/llm_provider.py:64-73` and `utils/llm_provider.py:84-106` raise `EnvironmentError` for the normal "not configured yet" path. Because this block only handles `ImportError`, `KeyError`, and `ValueError`, a clean setup now aborts on Line 63 instead of falling back to demo mode.

🛠️ Proposed fix

```diff
-        except (KeyError, ValueError) as exc:
+        except (EnvironmentError, ValueError) as exc:
             print(f"  [LLM configuration error: {exc} — running in demo mode]")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py` around lines 59 - 68, The try block that imports and initializes the LLM client (the code that calls get_llm_client and sets self.llm_client / prints provider info in agent_loop.py) does not catch EnvironmentError thrown by utils/llm_provider, so a normal “not configured” path aborts instead of falling back to demo mode—update the exception handling to also catch EnvironmentError (add it to the except tuple alongside ImportError, KeyError, ValueError) and ensure the demo-mode print path is executed when EnvironmentError is raised.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@phases/00-setup-and-tooling/04-apis-and-keys/docs/en.md`:
- Around line 44-48: The markdown fenced block containing the environment
variables (the triple-backtick block that currently shows ANTHROPIC_API_KEY,
OPENAI_API_KEY, MINIMAX_API_KEY) needs a language tag to satisfy markdownlint
MD040; change the opening fence from ``` to ```dotenv so the block becomes
```dotenv followed by the three env lines and the closing ```; no other content
changes required.
In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py`:
- Around line 115-118: The tool-call parser currently expects "TOOL_CALL:
name(arg=value)" and naively splits arguments by commas, which breaks for values
containing commas, newlines, or parentheses; change the contract to "TOOL_CALL:
<JSON payload>" and update the parser to use json.loads for robust parsing
(import json). Specifically, modify _extract_tool_calls to strip "TOOL_CALL:",
json.loads the payload, and return {"name": call["name"], "arguments":
call["arguments"]}, and update the code that invokes tools (the dispatcher that
previously unpacked kwargs from the ad-hoc split) to accept the arguments dict
directly instead of parsing a string. Ensure any callers or tests using the old
string format are updated to emit a JSON object with keys "name" and
"arguments".
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b0372f42-05da-453e-8ca4-f533c10e19db
📒 Files selected for processing (2)

- phases/00-setup-and-tooling/04-apis-and-keys/docs/en.md
- phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py
```
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
MINIMAX_API_KEY=eyJ...
```
Add a language tag to the .env fence.
Line 44 is still an untyped fenced block, so markdownlint MD040 will keep warning here. `dotenv` is the clearest label.
📝 Proposed fix

````diff
-```
+```dotenv
 ANTHROPIC_API_KEY=sk-ant-...
 OPENAI_API_KEY=sk-...
 MINIMAX_API_KEY=eyJ...
````

🧰 Tools

🪛 markdownlint-cli2 (0.22.0)
[warning] 44-44: Fenced code blocks should have a language specified (MD040, fenced-code-language)
| "You are a helpful agent with access to these tools:\n" | ||
| f"{tool_desc}\n\n" | ||
| "To use a tool, respond with: TOOL_CALL: tool_name(arg=value)\n" | ||
| "When you have the final answer, respond normally without TOOL_CALL." |
Use a structured tool-call payload instead of ad-hoc parsing.

The current `TOOL_CALL: name(arg=value)` contract cannot safely carry common `write_file` inputs. Lines 132-135 split on commas, so values containing commas, newlines, or `)` get truncated, and Line 92 can end up executing malformed kwargs. Please switch this to JSON and parse it with `json.loads`.
🛠️ Suggested shape

```diff
-            "To use a tool, respond with: TOOL_CALL: tool_name(arg=value)\n"
+            'To use a tool, respond with exactly one JSON object after `TOOL_CALL:`.\n'
+            'Example: TOOL_CALL: {"name":"list_files","arguments":{"path":"."}}\n'
```

```python
import json

def _extract_tool_calls(self, response):
    calls = []
    for line in response.splitlines():
        if not line.startswith("TOOL_CALL:"):
            continue
        payload = line.removeprefix("TOOL_CALL:").strip()
        call = json.loads(payload)
        calls.append({"name": call["name"], "arguments": call["arguments"]})
    return calls
```

Also applies to: 128-135
Summary
- `utils/llm_provider.py`: a unified multi-provider LLM utility supporting Anthropic, OpenAI, and MiniMax with auto-detection from environment variables
- MiniMax via its OpenAI-compatible API (`api.minimax.io/v1`) with models: MiniMax-M2.7, M2.7-highspeed, M2.5, M2.5-highspeed (204K context)
- Updated `first_api_call.py` (Phase 0, Lesson 4) to demo MiniMax as a third provider alongside Anthropic
- Updated `agent_loop.py` (Phase 14, Lesson 1) with real multi-provider LLM support via `LLM_PROVIDER` env var

Changes

- `utils/__init__.py`
- `utils/llm_provider.py` (unified `chat()`, temp clamping, think-tag stripping)
- `phases/00-setup-and-tooling/04-apis-and-keys/code/first_api_call.py` (`call_minimax()` and `call_with_provider_utility()` demos)
- `phases/00-setup-and-tooling/04-apis-and-keys/docs/en.md`
- `phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py`
- `README.md`
- `tests/test_llm_provider.py`
- `tests/test_minimax_integration.py` (requires `MINIMAX_API_KEY`)

9 files changed, 811 additions
Test plan
- Unit tests (`pytest tests/test_llm_provider.py`)
- Integration tests (`pytest tests/test_minimax_integration.py`)
- `LLM_PROVIDER=minimax` for agent loop

Why MiniMax?
MiniMax M2.7 offers 204K context window and OpenAI-compatible API, making it a great alternative for learners. The multi-provider utility pattern teaches an important engineering concept: abstracting provider differences behind a unified interface.
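As a sketch of that unified-interface idea (the field and function names here are illustrative assumptions, not the PR's exact API), provider differences collapse into one small wrapper plus a single `chat` entry point:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class LLMClient:
    # Illustrative wrapper: the provider name, its default model, and a
    # provider-specific send function of shape (model, prompt) -> response text.
    provider: str
    default_model: str
    send: Callable[[str, str], str]

def chat(llm: LLMClient, prompt: str, model: Optional[str] = None) -> str:
    # Caller code never branches on the provider; the wrapper's send does.
    return llm.send(model or llm.default_model, prompt)

# A fake "provider" standing in for a real SDK call, for demonstration only.
fake = LLMClient(
    provider="minimax",
    default_model="MiniMax-M2.5",
    send=lambda model, prompt: f"[{model}] echo: {prompt}",
)
```

Swapping providers then means constructing a different `LLMClient` with a different `send`, while every caller keeps using `chat` unchanged.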
Summary by CodeRabbit
New Features
Documentation
Tests