Available now
Moonshot

kimi-k2.6

Kimi K2.6 is Moonshot AI's next-generation multimodal model, designed for long-horizon coding, coding-driven UI/UX generation, and multi-agent orchestration. It handles complex end-to-end coding tasks across Python, Rust, and Go, and...

Text · Reasoning · Tools · Files · Open Weights · Vision · 262.1K context · Video · Cache
Input: $1.95 / 1M
Output: $8.21 / 1M
Context: 262.1K
Endpoints: openai

Capabilities

Reasoning · Tools · Parallel tools · Vision · Video · Cache · Structured

Modalities

Input: text, image
Output: text

Quick stats

Context window: 262.1K
Max output: 262.1K
Mode: chat
Tokenizer: Other
Quantization: int4


Supported parameters

Parameter (default, where listed):

frequency_penalty: (do not send)
include_reasoning
logit_bias
logprobs
max_tokens
min_p
parallel_tool_calls
presence_penalty: (do not send)
reasoning
reasoning_effort
repetition_penalty: (do not send)
response_format
seed
stop
structured_outputs
temperature: (do not send)
tool_choice
tools
top_k: (do not send)
top_logprobs
top_p: (do not send)
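One reading of the table above is that parameters marked "(do not send)" are simply omitted from requests unless you opt in. A minimal client-side sketch of that convention (the helper name and the omit-by-default interpretation are assumptions, not part of the listing):

```python
# Parameters the listing above marks "(do not send)" -- assumed to mean
# "omitted from the request by default".
DO_NOT_SEND = {
    "frequency_penalty", "presence_penalty", "repetition_penalty",
    "temperature", "top_k", "top_p",
}

def filter_params(params: dict) -> dict:
    """Drop parameters the listing marks as not sent by default."""
    return {k: v for k, v in params.items() if k not in DO_NOT_SEND}

request_params = filter_params({
    "max_tokens": 1024,
    "temperature": 0.7,   # marked "(do not send)" -> removed
    "top_p": 0.9,         # marked "(do not send)" -> removed
    "seed": 42,
})
print(request_params)  # {'max_tokens': 1024, 'seed': 42}
```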
§ 01

Pricing

Input price: $1.95 / 1M tokens
Output price: $8.21 / 1M tokens
Context window: 262.1K tokens
Compatible endpoints: openai
Vendor: Moonshot
§ 02

Call kimi-k2.6 from your code

Point any OpenAI-compatible SDK at UnoRouter and request the model by name. Replace YOUR_API_KEY with a real key from your dashboard.

bash
curl https://api.unorouter.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "kimi-k2.6",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
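The same request can be built from Python with only the standard library. This sketch constructs (but does not send) the exact request the curl command issues; uncomment the `urlopen` call with a real key to send it:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder -- replace with a key from your dashboard

body = {
    "model": "kimi-k2.6",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "https://api.unorouter.ai/v1/chat/completions",
    data=json.dumps(body).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# with urllib.request.urlopen(req) as resp:       # sends the request
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(req.full_url, req.get_method())
```

Any OpenAI-compatible SDK works the same way: point its base URL at `https://api.unorouter.ai/v1` and pass `kimi-k2.6` as the model name.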


§ 03

Frequently asked questions

How much does kimi-k2.6 cost per 1M tokens?

Input is priced at $1.95 per 1M tokens and output at $8.21 per 1M tokens. Billing is per token, with no rounding to batch sizes.
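Per-token billing means cost scales linearly with usage. A quick sketch using the listed rates (the example token counts are illustrative):

```python
INPUT_PER_M = 1.95   # USD per 1M input tokens, from the pricing above
OUTPUT_PER_M = 8.21  # USD per 1M output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Request cost in USD at per-token granularity."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. a 10K-token prompt with a 2K-token reply:
print(round(cost_usd(10_000, 2_000), 6))  # 0.0195 + 0.01642 = 0.03592 USD
```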

How do I access kimi-k2.6 via API?

Send requests to the UnoRouter /v1/chat/completions endpoint with model=kimi-k2.6. Any OpenAI-compatible client library works. Authentication uses a standard Bearer token.

What is the context window of kimi-k2.6?

kimi-k2.6 supports a context window of 262.1K tokens, shared between your prompt and the model's response.
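Because prompt and completion share the window, the response budget is whatever the prompt leaves over. A sketch of that arithmetic, taking the listed 262.1K at face value (the provider's exact token limit may differ slightly):

```python
CONTEXT_WINDOW = 262_100  # 262.1K tokens as listed, shared by prompt and response

def completion_budget(prompt_tokens: int) -> int:
    """Tokens left for the model's response after the prompt."""
    return max(CONTEXT_WINDOW - prompt_tokens, 0)

print(completion_budget(200_000))  # 62100
```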

§ 04


Try kimi-k2.6 now

Create an API key and start making requests in under a minute.
