
feat: add MiniMax as LLM provider with multi-provider utility#15

Open
octo-patch wants to merge 2 commits into rohitg00:main from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 30, 2026

Summary

  • Add utils/llm_provider.py — a unified multi-provider LLM utility supporting Anthropic, OpenAI, and MiniMax with auto-detection from environment variables
  • MiniMax uses OpenAI-compatible API (api.minimax.io/v1) with models: MiniMax-M2.7, M2.7-highspeed, M2.5, M2.5-highspeed (204K context)
  • Update first_api_call.py (Phase 0, Lesson 4) to demo MiniMax as a third provider alongside Anthropic
  • Update agent_loop.py (Phase 14, Lesson 1) with real multi-provider LLM support via LLM_PROVIDER env var
  • Update docs and README with MiniMax provider table and setup instructions

Changes

| File | Change |
| --- | --- |
| `utils/__init__.py` | New package init |
| `utils/llm_provider.py` | Multi-provider LLM utility (auto-detect, `chat()`, temp clamping, think-tag stripping) |
| `phases/00-setup-and-tooling/04-apis-and-keys/code/first_api_call.py` | Add `call_minimax()` and `call_with_provider_utility()` demos |
| `phases/00-setup-and-tooling/04-apis-and-keys/docs/en.md` | Add MiniMax API call example, provider table entry, exercises |
| `phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py` | Real LLM calls via provider utility, tool-call parsing |
| `README.md` | LLM Providers table with MiniMax |
| `tests/test_llm_provider.py` | 35 unit tests |
| `tests/test_minimax_integration.py` | 3 integration tests (require `MINIMAX_API_KEY`) |

9 files changed, 811 additions

Test plan

  • 35 unit tests pass (pytest tests/test_llm_provider.py)
  • 3 integration tests pass with real MiniMax API (pytest tests/test_minimax_integration.py)
  • Verify existing lesson code still runs unchanged
  • Test with LLM_PROVIDER=minimax for agent loop

Why MiniMax?

MiniMax M2.7 offers a 204K context window and an OpenAI-compatible API, making it a strong alternative for learners. The multi-provider utility pattern teaches an important engineering concept: abstracting provider differences behind a unified interface.
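The env-var auto-detection the summary describes can be sketched in a few lines. This is an illustrative sketch only: `PROVIDER_ENV_KEYS` and this exact `detect_provider()` shape are assumptions, not necessarily what `utils/llm_provider.py` actually ships.

```python
import os

# Illustrative sketch of env-var auto-detection; the names here are
# assumptions, not the confirmed utils/llm_provider.py implementation.
PROVIDER_ENV_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "minimax": "MINIMAX_API_KEY",
}

def detect_provider():
    """Return the first provider whose API key is set in the environment."""
    for name, env_key in PROVIDER_ENV_KEYS.items():
        if os.environ.get(env_key):
            return name
    raise EnvironmentError("No LLM provider API key found in environment")
```

An explicit `LLM_PROVIDER` env var can then override this ordering, as the agent-loop change does.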

Summary by CodeRabbit

  • New Features

    • MiniMax added as a supported LLM provider (OpenAI-compatible).
    • Automatic LLM provider detection and a unified chat interface.
    • Agent loop can call real LLMs when configured.
    • API examples updated to show MiniMax via OpenAI-compatible SDK.
  • Documentation

    • README and docs updated with LLM provider setup and env var examples.
    • New checklist items for using MiniMax and the provider utility.
  • Tests

    • Added unit tests for provider detection, client creation, and chat behavior.
    • Added conditional integration tests for MiniMax.

Add utils/llm_provider.py that provides a unified interface for
Anthropic, OpenAI, and MiniMax LLM providers with auto-detection
from environment variables.

- MiniMax uses OpenAI-compatible API at api.minimax.io/v1
- Supports M2.7, M2.7-highspeed, M2.5, M2.5-highspeed models
- Temperature clamping to (0, 1] for MiniMax compatibility
- Think-tag stripping for MiniMax reasoning model responses
- Updated first_api_call.py with MiniMax demo
- Updated agent_loop.py with real multi-provider LLM support
- Updated docs and README with MiniMax provider info
- 35 unit tests + 3 integration tests
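The two MiniMax-specific quirks in the commit message, temperature clamping and think-tag stripping, can be sketched like this. Hedged: the 0.01 lower bound and the regex are illustrative choices, not confirmed from the PR code.

```python
import re

def clamp_temperature(temperature: float) -> float:
    # MiniMax requires temperature in (0, 1]; clamp into that range.
    # The 0.01 lower bound is an illustrative choice.
    return min(max(temperature, 0.01), 1.0)

def strip_think_tags(text: str) -> str:
    # MiniMax reasoning models can emit <think>...</think> blocks;
    # strip them so callers only see the final answer.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
```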

coderabbitai bot commented Mar 30, 2026

📝 Walkthrough


This PR adds a unified LLM provider utility that auto-detects and wraps Anthropic, OpenAI, and MiniMax, plus documentation, example calls, agent integration updates, and unit/integration tests to exercise provider detection and chat behavior.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Core LLM Provider Module**<br>`utils/llm_provider.py` | New module implementing provider detection, `PROVIDER_DEFAULTS`, `LLMClient` wrapper, `get_llm_client()`, `chat()` with provider-specific call shapes, temperature clamping, MiniMax thinking-stripping, `list_providers()`, and a CLI/test main block. |
| **Documentation**<br>`README.md`, `phases/00-setup-and-tooling/04-apis-and-keys/docs/en.md` | Added "LLM Providers" README section and updated lesson docs: added `MINIMAX_API_KEY` examples, a MiniMax (OpenAI-compatible) SDK call example, renumbered steps, a course provider table update, and new exercise checklist items. |
| **Example Code**<br>`phases/00-setup-and-tooling/04-apis-and-keys/code/first_api_call.py` | Added `call_minimax()` (OpenAI-compatible client with custom `base_url`) and `call_with_provider_utility()` demonstrating `utils.llm_provider` auto-detection and unified chat; updated demo ordering/labels. |
| **Agent Integration**<br>`phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py` | `SimpleAgent` now accepts an optional provider, attempts to initialize a real LLM client via `utils.llm_provider`, replaces the stubbed LLM call with provider-backed chat, and adds `_extract_tool_calls()` parsing for `TOOL_CALL: ...` patterns. |
| **Tests**<br>`tests/test_llm_provider.py`, `tests/test_minimax_integration.py` | New unit tests validating `PROVIDER_DEFAULTS`, temperature clamping, `detect_provider()`, `get_llm_client()` (with mocks), `chat()` behavior across providers (including MiniMax stripping), and agent_loop tool-call parsing; integration tests for live MiniMax calls (skipped if no `MINIMAX_API_KEY`). |

Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Client Code
    participant Detector as Provider Detector
    participant Factory as Client Factory
    participant Provider as LLM Provider
    participant Parser as Response Parser

    Client->>Detector: detect_provider() (check env vars)
    Detector-->>Client: provider name or error

    Client->>Factory: get_llm_client(provider?, api_key?, model?)
    Factory->>Factory: validate provider & api key
    Factory->>Provider: initialize SDK client (OpenAI/Anthropic)
    Provider-->>Factory: SDK client instance
    Factory-->>Client: LLMClient wrapper

    Client->>Factory: chat(llm, prompt, system?, model?, temperature?)
    Factory->>Provider: send chat request (provider-specific)
    Provider-->>Factory: response payload
    Factory->>Parser: parse/clean response (strip <think>, extract text)
    Parser-->>Factory: cleaned message
    Factory-->>Client: final response string
```

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 I found three keys lined up today,

Anthropic, OpenAI, MiniMax at play.
One little interface hopped them all in,
Prompt, chat, return — compact and thin.
Hooray, the agent leaps a longer way!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 13.21%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The PR title clearly and concisely describes the main change: adding MiniMax as a new LLM provider along with a multi-provider utility. The title accurately reflects the core objectives of the changeset. |


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (6)
utils/llm_provider.py (2)

19-21: Unused import `field`.

The `field` import from `dataclasses` is not used in this module.

🔧 Proposed fix

```diff
-from dataclasses import dataclass, field
+from dataclasses import dataclass
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utils/llm_provider.py` around lines 19 - 21, Remove the unused `field` import
from the dataclasses import list in utils/llm_provider.py; update the top-level
import line so it only imports what’s used (e.g., keep `dataclass`), ensuring
there are no other references to `field` in this module (check any dataclass
declarations like those using `@dataclass`) and run tests/lint to confirm the
unused import warning is resolved.

119-140: Consider adding a defensive else clause.

If PROVIDER_DEFAULTS is extended with a new provider but this if/elif chain isn't updated, client would be unbound, causing an UnboundLocalError. A defensive else clause would make this failure explicit.

🛡️ Proposed defensive fix

```diff
     elif provider == "minimax":
         import openai as openai_sdk

         client = openai_sdk.OpenAI(
             api_key=key,
             base_url=config["base_url"],
         )
+    else:
+        raise NotImplementedError(f"Provider '{provider}' is configured but not implemented")

     return LLMClient(
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@utils/llm_provider.py` around lines 119 - 140, The provider branch leaves
local variable client unbound if provider doesn't match "anthropic", "openai",
or "minimax"; update the conditional in the function that constructs the client
(the if/elif chain handling provider) to include a defensive else that raises a
clear exception (e.g., ValueError or RuntimeError) stating the unknown provider
and including the provider value, so that LLMClient(...) is never called with an
uninitialized client; reference the symbols provider, client, and LLMClient when
making the change.
phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py (2)

12-12: File handle not closed after reading.

The lambda open(path).read() doesn't close the file handle. In long-running agent loops, this could exhaust file descriptors.

♻️ Proposed fix using context manager pattern

```diff
     "read_file": {
         "description": "Read the contents of a file",
         "parameters": {
             "path": {"type": "string", "description": "File path to read"}
         },
-        "execute": lambda path: open(path).read() if os.path.exists(path) else f"File not found: {path}"
+        "execute": lambda path: _read_file(path)
     },
```

Add a helper function:

```python
def _read_file(path):
    if not os.path.exists(path):
        return f"File not found: {path}"
    with open(path) as f:
        return f.read()
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py` at line 12,
The current lambda used for "execute" opens files without closing them; replace
it with a helper function (e.g., _read_file) that checks os.path.exists(path)
and uses a with open(path) as f: context manager to read and return contents (or
return the "File not found" message), then update the mapping to use "execute":
_read_file so file handles are properly closed.

20-23: File handle not closed after writing.

Same issue as read_file - the file handle from open(path, 'w') isn't properly closed.

♻️ Proposed fix

```diff
     "write_file": {
         "description": "Write content to a file",
         "parameters": {
             "path": {"type": "string", "description": "File path to write"},
             "content": {"type": "string", "description": "Content to write"}
         },
-        "execute": lambda path, content: (
-            open(path, 'w').write(content),
-            f"Wrote {len(content)} chars to {path}"
-        )[1]
+        "execute": lambda path, content: _write_file(path, content)
     },
```

Add a helper function:

```python
def _write_file(path, content):
    with open(path, 'w') as f:
        f.write(content)
    return f"Wrote {len(content)} chars to {path}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py` around
lines 20 - 23, The current inline lambda assigned to the "execute" action opens
a file with open(path, 'w') but never closes it; implement a small helper
function named _write_file(path, content) that uses a with open(path, 'w') as f:
f.write(content) and returns the message f"Wrote {len(content)} chars to
{path}", then replace the lambda in the "execute" mapping to call
_write_file(path, content) (mirroring how read_file was fixed) so the file
handle is properly closed.
tests/test_llm_provider.py (2)

319-367: Repeated setup code in each test method.

Each test method duplicates the sys.path.insert and import statements. Consider moving the setup to setUpClass or module level.

♻️ Proposed refactor using setUpClass

```diff
 class TestAgentLoop(unittest.TestCase):
     """Test agent loop with provider integration."""

+    @classmethod
+    def setUpClass(cls):
+        agent_path = os.path.join(
+            os.path.dirname(__file__), "..",
+            "phases", "14-agent-engineering", "01-the-agent-loop", "code"
+        )
+        sys.path.insert(0, agent_path)
+        from agent_loop import SimpleAgent, TOOLS
+        cls.SimpleAgent = SimpleAgent
+        cls.TOOLS = TOOLS
+
     def test_agent_extract_tool_calls(self):
-        sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))
-        agent_path = os.path.join(
-            os.path.dirname(__file__), "..",
-            "phases", "14-agent-engineering", "01-the-agent-loop", "code"
-        )
-        sys.path.insert(0, agent_path)
-        from agent_loop import SimpleAgent, TOOLS
-
-        agent = SimpleAgent(TOOLS)
+        agent = self.SimpleAgent(self.TOOLS)
         calls = agent._extract_tool_calls(
             "TOOL_CALL: list_files(path=/tmp)"
         )
         # ... rest of test
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_llm_provider.py` around lines 319 - 367, The tests in
TestAgentLoop repeat identical sys.path.insert and import logic in each test
method; move that setup into a class-level initializer to DRY the code—add a
setUpClass method on TestAgentLoop that performs the sys.path.insert calls and
imports agent_loop (obtaining SimpleAgent and TOOLS) and store them as class
attributes (e.g., cls.SimpleAgent, cls.TOOLS), then update each test to use
those class attributes and remove the duplicated sys.path/import lines.

311-317: Unused loop variable `name`.

The loop variable `name` is not used within the loop body.

🔧 Proposed fix

```diff
     def test_provider_info_fields(self):
         providers = list_providers()
-        for name, info in providers.items():
+        for _name, info in providers.items():
             self.assertIn("available", info)
             self.assertIn("env_key", info)
             self.assertIn("default_model", info)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_llm_provider.py` around lines 311 - 317, The
test_provider_info_fields test iterates with an unused loop variable name;
update the loop over list_providers() to avoid the unused variable by iterating
over providers.values() or using an underscore (e.g., for _, info in
providers.items()) so only info is referenced; keep the assertions on
"available", "env_key", and "default_model" inside the same
test_provider_info_fields function.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@phases/00-setup-and-tooling/04-apis-and-keys/docs/en.md`:
- Around line 86-102: The example uses os.environ to read MINIMAX_API_KEY but
never imports the os module; add an import os at the top of the snippet so the
openai.OpenAI client creation (openai.OpenAI) and subsequent call to
client.chat.completions.create work without NameError; update the snippet to
include "import os" before importing openai and constructing the client.

In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py`:
- Around line 59-66: The try/except around importing utils.llm_provider and
calling get_llm_client is catching all Exceptions and hiding real initialization
errors; change it to catch specific errors (e.g., ImportError,
ModuleNotFoundError, KeyError, ValueError, or the SDK-specific exception your
LLM client raises) and log the caught exception details when initialization
fails, while preserving the demo-mode fallback when no provider is configured;
update the block around get_llm_client, provider, and self.llm_client to handle
those specific exceptions and include the exception message in the
printed/logged output.

---

Nitpick comments:
In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py`:
- Line 12: The current lambda used for "execute" opens files without closing
them; replace it with a helper function (e.g., _read_file) that checks
os.path.exists(path) and uses a with open(path) as f: context manager to read
and return contents (or return the "File not found" message), then update the
mapping to use "execute": _read_file so file handles are properly closed.
- Around line 20-23: The current inline lambda assigned to the "execute" action
opens a file with open(path, 'w') but never closes it; implement a small helper
function named _write_file(path, content) that uses a with open(path, 'w') as f:
f.write(content) and returns the message f"Wrote {len(content)} chars to
{path}", then replace the lambda in the "execute" mapping to call
_write_file(path, content) (mirroring how read_file was fixed) so the file
handle is properly closed.

In `@tests/test_llm_provider.py`:
- Around line 319-367: The tests in TestAgentLoop repeat identical
sys.path.insert and import logic in each test method; move that setup into a
class-level initializer to DRY the code—add a setUpClass method on TestAgentLoop
that performs the sys.path.insert calls and imports agent_loop (obtaining
SimpleAgent and TOOLS) and store them as class attributes (e.g.,
cls.SimpleAgent, cls.TOOLS), then update each test to use those class attributes
and remove the duplicated sys.path/import lines.
- Around line 311-317: The test_provider_info_fields test iterates with an
unused loop variable name; update the loop over list_providers() to avoid the
unused variable by iterating over providers.values() or using an underscore
(e.g., for _, info in providers.items()) so only info is referenced; keep the
assertions on "available", "env_key", and "default_model" inside the same
test_provider_info_fields function.

In `@utils/llm_provider.py`:
- Around line 19-21: Remove the unused `field` import from the dataclasses
import list in utils/llm_provider.py; update the top-level import line so it
only imports what’s used (e.g., keep `dataclass`), ensuring there are no other
references to `field` in this module (check any dataclass declarations like
those using `@dataclass`) and run tests/lint to confirm the unused import warning
is resolved.
- Around line 119-140: The provider branch leaves local variable client unbound
if provider doesn't match "anthropic", "openai", or "minimax"; update the
conditional in the function that constructs the client (the if/elif chain
handling provider) to include a defensive else that raises a clear exception
(e.g., ValueError or RuntimeError) stating the unknown provider and including
the provider value, so that LLMClient(...) is never called with an uninitialized
client; reference the symbols provider, client, and LLMClient when making the
change.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b0be8efa-efe9-4464-b2a8-d7d70204b462

📥 Commits

Reviewing files that changed from the base of the PR and between 122cc3c and 4e99cdd.

📒 Files selected for processing (9)
  • README.md
  • phases/00-setup-and-tooling/04-apis-and-keys/code/first_api_call.py
  • phases/00-setup-and-tooling/04-apis-and-keys/docs/en.md
  • phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py
  • tests/__init__.py
  • tests/test_llm_provider.py
  • tests/test_minimax_integration.py
  • utils/__init__.py
  • utils/llm_provider.py

Comment on lines +59 to +66
```python
        try:
            sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "..", "..", ".."))
            from utils.llm_provider import get_llm_client
            provider = provider or os.environ.get("LLM_PROVIDER")
            self.llm_client = get_llm_client(provider)
            print(f"  Using LLM: {self.llm_client.provider} ({self.llm_client.default_model})")
        except Exception:
            print("  [No LLM configured — running in demo mode]")
```

⚠️ Potential issue | 🟡 Minor

Catching blind Exception hides initialization errors.

The bare except Exception swallows all errors including import errors, configuration issues, and SDK bugs. This makes debugging difficult when things go wrong.

🛡️ Proposed fix to catch specific exceptions

```diff
         try:
             sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "..", "..", ".."))
             from utils.llm_provider import get_llm_client
             provider = provider or os.environ.get("LLM_PROVIDER")
             self.llm_client = get_llm_client(provider)
             print(f"  Using LLM: {self.llm_client.provider} ({self.llm_client.default_model})")
-        except Exception:
+        except (ImportError, EnvironmentError, ValueError) as e:
             print("  [No LLM configured — running in demo mode]")
+            # Optionally log the reason: print(f"    ({e})")
```
🧰 Tools
🪛 Ruff (0.15.7)

[warning] 65-65: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py` around
lines 59 - 66, The try/except around importing utils.llm_provider and calling
get_llm_client is catching all Exceptions and hiding real initialization errors;
change it to catch specific errors (e.g., ImportError, ModuleNotFoundError,
KeyError, ValueError, or the SDK-specific exception your LLM client raises) and
log the caught exception details when initialization fails, while preserving the
demo-mode fallback when no provider is configured; update the block around
get_llm_client, provider, and self.llm_client to handle those specific
exceptions and include the exception message in the printed/logged output.

rohitg00 (Owner) commented Mar 30, 2026

Very good work @octo-patch

1. Added missing import os in the MiniMax API example code block
2. Replaced bare except Exception with specific ImportError and
   (KeyError, ValueError) handlers in agent_loop.py __init__
octo-patch (Author) commented

Thanks @rohitg00! Fixed both review items:

  1. Added missing import os in the MiniMax API example
  2. Improved exception handling in agent_loop.py to catch specific API errors instead of bare Exception

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py (1)

59-68: ⚠️ Potential issue | 🟠 Major

Catch the no-provider path here too.

utils/llm_provider.py:64-73 and utils/llm_provider.py:84-106 raise EnvironmentError for the normal “not configured yet” path. Because this block only handles ImportError, KeyError, and ValueError, a clean setup now aborts on Line 63 instead of falling back to demo mode.

🛠️ Proposed fix

```diff
-        except (KeyError, ValueError) as exc:
+        except (EnvironmentError, ValueError) as exc:
             print(f"  [LLM configuration error: {exc} — running in demo mode]")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py` around
lines 59 - 68, The try block that imports and initializes the LLM client (the
code that calls get_llm_client and sets self.llm_client / prints provider info
in agent_loop.py) does not catch EnvironmentError thrown by utils/llm_provider,
so a normal “not configured” path aborts instead of falling back to demo
mode—update the exception handling to also catch EnvironmentError (add it to the
except tuple alongside ImportError, KeyError, ValueError) and ensure the
demo-mode print path is executed when EnvironmentError is raised.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@phases/00-setup-and-tooling/04-apis-and-keys/docs/en.md`:
- Around line 44-48: The markdown fenced block containing the environment
variables (the triple-backtick block that currently shows ANTHROPIC_API_KEY,
OPENAI_API_KEY, MINIMAX_API_KEY) needs a language tag to satisfy markdownlint
MD040; change the opening fence from ``` to ```dotenv so the block becomes
```dotenv followed by the three env lines and the closing ```; no other content
changes required.

In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py`:
- Around line 115-118: The tool-call parser currently expects "TOOL_CALL:
name(arg=value)" and naively splits arguments by commas, which breaks for values
containing commas, newlines, or parentheses; change the contract to "TOOL_CALL:
<JSON payload>" and update the parser to use json.loads for robust parsing
(import json). Specifically, modify _extract_tool_calls to strip "TOOL_CALL:",
json.loads the payload, and return {"name": call["name"], "arguments":
call["arguments"]}, and update the code that invokes tools (the dispatcher that
previously unpacked kwargs from the ad-hoc split) to accept the arguments dict
directly instead of parsing a string. Ensure any callers or tests using the old
string format are updated to emit a JSON object with keys "name" and
"arguments".

---

Duplicate comments:
In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py`:
- Around line 59-68: The try block that imports and initializes the LLM client
(the code that calls get_llm_client and sets self.llm_client / prints provider
info in agent_loop.py) does not catch EnvironmentError thrown by
utils/llm_provider, so a normal “not configured” path aborts instead of falling
back to demo mode—update the exception handling to also catch EnvironmentError
(add it to the except tuple alongside ImportError, KeyError, ValueError) and
ensure the demo-mode print path is executed when EnvironmentError is raised.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b0372f42-05da-453e-8ca4-f533c10e19db

📥 Commits

Reviewing files that changed from the base of the PR and between 4e99cdd and 0e8c671.

📒 Files selected for processing (2)
  • phases/00-setup-and-tooling/04-apis-and-keys/docs/en.md
  • phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py

Comment on lines 44 to 48
```
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
MINIMAX_API_KEY=eyJ...
```

⚠️ Potential issue | 🟡 Minor

Add a language tag to the .env fence.

Line 44 is still an untyped fenced block, so markdownlint MD040 will keep warning here. dotenv is the clearest label.

📝 Proposed fix

````diff
-```
+```dotenv
 ANTHROPIC_API_KEY=sk-ant-...
 OPENAI_API_KEY=sk-...
 MINIMAX_API_KEY=eyJ...
````

🧰 Tools
🪛 markdownlint-cli2 (0.22.0)

[warning] 44-44: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@phases/00-setup-and-tooling/04-apis-and-keys/docs/en.md` around lines 44 - 48, the markdown fenced block containing the environment variables (the triple-backtick block that currently shows ANTHROPIC_API_KEY, OPENAI_API_KEY, MINIMAX_API_KEY) needs a language tag to satisfy markdownlint MD040; change the opening fence to add the `dotenv` language tag; no other content changes required.

Comment on lines +115 to +118
```python
            "You are a helpful agent with access to these tools:\n"
            f"{tool_desc}\n\n"
            "To use a tool, respond with: TOOL_CALL: tool_name(arg=value)\n"
            "When you have the final answer, respond normally without TOOL_CALL."
```

⚠️ Potential issue | 🟠 Major

Use a structured tool-call payload instead of ad-hoc parsing.

The current TOOL_CALL: name(arg=value) contract cannot safely carry common write_file inputs. Lines 132-135 split on commas, so values containing commas, newlines, or ) get truncated and Line 92 can end up executing malformed kwargs. Please switch this to JSON and parse it with json.loads.

🛠️ Suggested shape

```diff
-            "To use a tool, respond with: TOOL_CALL: tool_name(arg=value)\n"
+            'To use a tool, respond with exactly one JSON object after `TOOL_CALL:`.\n'
+            'Example: TOOL_CALL: {"name":"list_files","arguments":{"path":"."}}\n'
```

```python
import json

def _extract_tool_calls(self, response):
    calls = []
    for line in response.splitlines():
        if not line.startswith("TOOL_CALL:"):
            continue
        payload = line.removeprefix("TOOL_CALL:").strip()
        call = json.loads(payload)
        calls.append({"name": call["name"], "arguments": call["arguments"]})
    return calls
```

Also applies to: 128-135
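For illustration, the JSON contract suggested above survives exactly the argument values that break comma-splitting. This is a standalone sketch of the suggested approach, not code from the PR; `extract_tool_calls` here is a module-level variant of the proposed method.

```python
import json

def extract_tool_calls(response):
    """Parse 'TOOL_CALL: {json}' lines into structured call dicts (sketch)."""
    calls = []
    for line in response.splitlines():
        if not line.startswith("TOOL_CALL:"):
            continue
        payload = line.removeprefix("TOOL_CALL:").strip()
        call = json.loads(payload)
        calls.append({"name": call["name"], "arguments": call["arguments"]})
    return calls

# A value containing commas, parentheses, and an escaped newline parses cleanly.
response = 'TOOL_CALL: {"name": "write_file", "arguments": {"path": "notes.txt", "content": "a, b (c)\\nd"}}'
```

The dispatcher can then pass `call["arguments"]` straight through as kwargs instead of re-parsing a string.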

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@phases/14-agent-engineering/01-the-agent-loop/code/agent_loop.py` around
lines 115 - 118, The tool-call parser currently expects "TOOL_CALL:
name(arg=value)" and naively splits arguments by commas, which breaks for values
containing commas, newlines, or parentheses; change the contract to "TOOL_CALL:
<JSON payload>" and update the parser to use json.loads for robust parsing
(import json). Specifically, modify _extract_tool_calls to strip "TOOL_CALL:",
json.loads the payload, and return {"name": call["name"], "arguments":
call["arguments"]}, and update the code that invokes tools (the dispatcher that
previously unpacked kwargs from the ad-hoc split) to accept the arguments dict
directly instead of parsing a string. Ensure any callers or tests using the old
string format are updated to emit a JSON object with keys "name" and
"arguments".
