
Conversation

@shariqriazz
Contributor

Summary

  • Add MiniMax-M2 model to NVIDIA provider with proper configuration
  • 128K context window and 16K output tokens (NVIDIA NIM-specific limits)
  • Supports reasoning, tool calling, and temperature control
  • Open weights model with MIT license

Details

MiniMax-M2 is a compact Mixture-of-Experts (MoE) model with 230B total parameters and 10B active parameters, optimized for coding and agentic tasks. This configuration follows the NVIDIA NIM API specifications and differs from other providers' entries by using NIM-specific context and output limits.

Model Specifications

  • Architecture: Mixture-of-Experts (MoE) Transformer
  • Context: 128,000 tokens
  • Output: 16,384 tokens
  • Features: Reasoning, tool calling, temperature control
  • License: MIT (open weights)
  • Knowledge cutoff: July 2024

Based on: https://build.nvidia.com/minimaxai/minimax-m2

- Created minimax-m2.toml with proper configuration (see the sketch after this list)
- 128K context window, 16K output tokens
- Supports reasoning, tool calling, and temperature control
- Open weights model with MIT license
- Based on NVIDIA NIM documentation
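
For reference, a minimal sketch of what the new minimax-m2.toml could contain, based only on the specs listed in this PR. The field names (name, reasoning, tool_call, limit.context, limit.output, etc.) are illustrative assumptions and may not match the provider's actual TOML schema or the merged file:

```toml
# Hypothetical sketch of minimax-m2.toml; field names are assumptions,
# values come from the model specs quoted in this PR.
name = "MiniMax-M2"
reasoning = true        # supports reasoning
tool_call = true        # supports tool calling
temperature = true      # supports temperature control
open_weights = true     # open weights, MIT license
knowledge = "2024-07"   # knowledge cutoff: July 2024

[limit]
context = 128000        # 128K context window (NVIDIA NIM limit)
output = 16384          # 16K output tokens (NVIDIA NIM limit)
```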
@rekram1-node merged commit 0e2c58b into sst:dev Nov 2, 2025
1 check passed
