Available now
MiniMax

minimax-m2.5-highspeed

Text · Reasoning · Tools · Open Weights
Input: $0.82 / 1M tokens
Output: $3.28 / 1M tokens
Context: 204.8K
Endpoints: openai

Capabilities

Reasoning · Tools

Quick stats

Context window: 204.8K

§ 01

Pricing

Input price: $0.82 / 1M tokens
Output price: $3.28 / 1M tokens
Context window: 204.8K tokens
Compatible endpoints: openai
Vendor: MiniMax
§ 02

Call minimax-m2.5-highspeed from your code

Point any OpenAI-compatible SDK at UnoRouter and request the model by name. Replace YOUR_API_KEY with a real key from your dashboard.

bash
curl https://api.unorouter.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "minimax-m2.5-highspeed",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'


§ 03

Frequently asked questions

How much does minimax-m2.5-highspeed cost per 1M tokens?

Input is priced at $0.82 per 1M tokens and output at $3.28 per 1M tokens. Billing is per token, with no rounding to batch sizes.
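As a quick sanity check, the per-token billing above can be sketched in a few lines of Python; the rates are taken from this page, and the token counts are made-up illustrative values.

```python
# Per-1M-token rates for minimax-m2.5-highspeed, from this page.
INPUT_PER_M = 0.82   # USD per 1M input tokens
OUTPUT_PER_M = 3.28  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request, billed per token (no batch rounding)."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 10K-token prompt with a 2K-token response.
cost = request_cost(10_000, 2_000)
print(f"${cost:.6f}")
```

A 10K-token prompt with a 2K-token response works out to roughly one and a half cents, which is the kind of back-of-envelope math this makes easy to script.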

How do I access minimax-m2.5-highspeed via API?

Send requests to the UnoRouter /v1/chat/completions endpoint with model=minimax-m2.5-highspeed. Any OpenAI-compatible client library works. Authentication uses a standard Bearer token.
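The request described above can be sketched with only the Python standard library; the endpoint URL, model name, and Bearer header come from this page, and YOUR_API_KEY is a placeholder for a real key.

```python
import json
import urllib.request

API_URL = "https://api.unorouter.ai/v1/chat/completions"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat completion request."""
    body = {
        "model": "minimax-m2.5-highspeed",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("YOUR_API_KEY", "Hello!")
# To actually send it (requires a valid key):
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Any OpenAI-compatible SDK does the same thing under the hood: point its base URL at `https://api.unorouter.ai/v1` and pass the model name as shown.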

What is the context window of minimax-m2.5-highspeed?

minimax-m2.5-highspeed supports a context window of 204.8K tokens, shared between your prompt and the model's response.
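Because the window is shared, the space left for a response shrinks as the prompt grows. A minimal sketch of that budget arithmetic, using the 204.8K figure from this page:

```python
CONTEXT_WINDOW = 204_800  # tokens, shared between prompt and response

def max_output_tokens(prompt_tokens: int) -> int:
    """Tokens left for the model's response after the prompt is counted."""
    return max(CONTEXT_WINDOW - prompt_tokens, 0)

# A 200K-token prompt leaves only 4,800 tokens for the response.
print(max_output_tokens(200_000))
```

In practice you would also cap the request's max output tokens at this remainder so the API does not reject an over-budget request.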

§ 04


Try minimax-m2.5-highspeed now

Create an API key and start making requests in under a minute.

View all models