Available now
MiniMax
minimax-m2.5
MiniMax-M2.5 is a state-of-the-art large language model designed for real-world productivity. Trained across a diverse range of complex real-world digital working environments, M2.5 builds on the coding expertise of M2.1...
Text · Reasoning · Tools · Open Weights · 204.8K context
Input: Free
Output: Free
Context: 204.8K
Endpoints: openai
Capabilities
Reasoning · Tools · Parallel tools · Structured outputs
Modalities
Input: text
Output: text
Quick stats
Context window: 204.8K
Max output: 196.6K
Mode: chat
Tokenizer: Other
Quantization: fp8
Hugging Face: MiniMaxAI/MiniMax-M2.5
Supported parameters
| Parameter | Always | Default |
|---|---|---|
| frequency_penalty | — | (do not send) |
| include_reasoning | — | — |
| logit_bias | — | — |
| logprobs | — | — |
| max_tokens | — | — |
| min_p | — | — |
| parallel_tool_calls | — | — |
| presence_penalty | — | (do not send) |
| reasoning | — | — |
| reasoning_effort | — | — |
| repetition_penalty | — | (do not send) |
| response_format | — | — |
| seed | — | — |
| stop | — | — |
| structured_outputs | — | — |
| temperature | 1 | — |
| tool_choice | — | — |
| tools | — | — |
| top_k | — | (do not send) |
| top_logprobs | — | — |
| top_p | 0.95 | — |
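The table above can be applied client-side before a request is built. Below is a minimal Python sketch, assuming you want to strip the parameters marked "(do not send)" and fall back to the listed pinned values (temperature 1, top_p 0.95). `prepare_params` is a hypothetical helper for illustration, not part of any SDK.

```python
# Hypothetical client-side helper (not part of any official SDK):
# drop parameters the minimax-m2.5 listing marks "(do not send)" and
# apply the listed temperature/top_p values when the caller left them unset.
UNSUPPORTED = {"frequency_penalty", "presence_penalty",
               "repetition_penalty", "top_k"}
DEFAULTS = {"temperature": 1, "top_p": 0.95}

def prepare_params(user_params):
    """Return a payload dict with unsupported keys stripped and defaults filled in."""
    params = {k: v for k, v in user_params.items() if k not in UNSUPPORTED}
    for key, value in DEFAULTS.items():
        params.setdefault(key, value)
    return params

print(prepare_params({"top_k": 40, "max_tokens": 512}))
# -> {'max_tokens': 512, 'temperature': 1, 'top_p': 0.95}
```

Sampling keys the user sets explicitly (e.g. their own `temperature`) pass through untouched; only the unsupported ones are dropped.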
§ 01
Pricing
| | |
|---|---|
| Input price | $0.00 / 1M tokens |
| Output price | $0.00 / 1M tokens |
| Context window | 204.8K tokens |
| Compatible endpoints | openai |
| Vendor | MiniMax |
§ 02
Call minimax-m2.5 from your code
Point any OpenAI-compatible SDK at UnoRouter and request the model by name. Replace YOUR_API_KEY with a real key from your dashboard.
```bash
curl https://api.unorouter.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "minimax-m2.5",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
§ 03
Frequently asked questions
How much does minimax-m2.5 cost per 1M tokens?
Input is priced at $0.00 per 1M tokens and output at $0.00 per 1M tokens, so the model is currently free. Billing is per token, with no rounding to batch sizes.
How do I access minimax-m2.5 via API?
Send requests to the UnoRouter /v1/chat/completions endpoint with model=minimax-m2.5. Any OpenAI-compatible client library works. Authentication uses a standard Bearer token.
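The same request can be sketched in Python using only the standard library; the endpoint URL, model name, and Bearer-token header are the ones shown above, and `YOUR_API_KEY` remains a placeholder. The request is built but not sent.

```python
import json
import urllib.request

# Build (but do not send) a chat completion request against the
# UnoRouter OpenAI-compatible endpoint. Replace YOUR_API_KEY with a
# real key, then call urllib.request.urlopen(req) to execute it.
payload = {
    "model": "minimax-m2.5",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "https://api.unorouter.ai/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would return the JSON completion body.
```

Any OpenAI-compatible client library reduces to the same shape: a POST to `/v1/chat/completions` with a Bearer token and a JSON body naming the model.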
What is the context window of minimax-m2.5?
minimax-m2.5 supports a context window of 204.8K tokens, shared between your prompt and the model's response.
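Because the window is shared, the largest usable `max_tokens` depends on how long the prompt is. A quick back-of-envelope in Python, assuming the listed 204.8K context window and 196.6K output cap:

```python
# Token budgeting for minimax-m2.5, using the limits from this listing:
# a 204,800-token context window shared by prompt and response, and a
# 196,600-token cap on output.
CONTEXT_WINDOW = 204_800
MAX_OUTPUT = 196_600

def max_completion_tokens(prompt_tokens):
    """Largest max_tokens value that still fits inside the shared window."""
    return min(MAX_OUTPUT, CONTEXT_WINDOW - prompt_tokens)

print(max_completion_tokens(50_000))  # -> 154800 (window-limited)
print(max_completion_tokens(1_000))   # -> 196600 (output-cap-limited)
```

Short prompts are bounded by the output cap; long prompts eat into the response budget token for token.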