mimo-v2-flash
MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, adopting a hybrid attention architecture. MiMo-V2-Flash supports a...
Pricing
| Property | Value |
| --- | --- |
| Input price | $0.08 / 1M tokens |
| Output price | $0.23 / 1M tokens |
| Context window | 262.1K tokens |
| Compatible endpoints | openai |
| Vendor | Xiaomi |
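The per-token pricing above makes cost estimation straightforward. A minimal sketch in Python (the constants mirror the table; the 10K/2K token counts are just an illustrative example):

```python
# Per-1M-token prices for mimo-v2-flash, from the pricing table above.
INPUT_PRICE_PER_M = 0.08   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.23  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a request; billing is per token."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token completion
print(estimate_cost(10_000, 2_000))
```

For that example, 10,000 × $0.08/1M plus 2,000 × $0.23/1M comes to about $0.00126.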
Call mimo-v2-flash from your code
Point any OpenAI-compatible SDK at UnoRouter and request the model by name. Replace YOUR_API_KEY with a real key from your dashboard.
```shell
curl https://api.unorouter.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mimo-v2-flash",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Frequently asked questions
How much does mimo-v2-flash cost per 1M tokens?
Input is priced at $0.08 per 1M tokens and output at $0.23 per 1M tokens. Billing is per token, with no rounding to batch sizes.
How do I access mimo-v2-flash via API?
Send requests to the UnoRouter /v1/chat/completions endpoint with model=mimo-v2-flash. Any OpenAI-compatible client library works. Authentication uses a standard Bearer token.
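The same request can be assembled in Python with only the standard library. This is a sketch built from the curl example's endpoint, headers, and payload; it constructs the request without sending it, so you can inspect it before swapping in a real API key:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # replace with a real key from your dashboard

def build_request(prompt: str) -> urllib.request.Request:
    """Assemble a /v1/chat/completions request for mimo-v2-flash."""
    payload = {
        "model": "mimo-v2-flash",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.unorouter.ai/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Hello!")
# urllib.request.urlopen(req) would send it; here we just inspect the payload.
print(json.loads(req.data)["model"])
```

Any OpenAI-compatible client library works the same way: point its base URL at `https://api.unorouter.ai/v1` and pass the model name `mimo-v2-flash`.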
What is the context window of mimo-v2-flash?
mimo-v2-flash supports a context window of 262.1K tokens, shared between your prompt and the model's response.
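Because the window is shared, a long prompt directly shrinks the room left for the response. A small sketch of that budgeting (treating the table's 262.1K figure as roughly 262,100 tokens; check your dashboard for the exact limit):

```python
# Approximate context window for mimo-v2-flash (262.1K tokens),
# shared between the prompt and the response.
CONTEXT_WINDOW = 262_100

def max_output_tokens(prompt_tokens: int) -> int:
    """Tokens left for the model's response after the prompt is counted."""
    return max(CONTEXT_WINDOW - prompt_tokens, 0)

# e.g. a 200K-token prompt leaves ~62K tokens for the response
print(max_output_tokens(200_000))
```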