Available now
DeepSeek

deepseek-v4-pro

DeepSeek V4 Pro is a large-scale Mixture-of-Experts model from DeepSeek with 1.6T total parameters and 49B activated parameters, supporting a 1M-token context window. It is designed for advanced reasoning, coding,...

Text · Reasoning · Tools · Open Weights · 1M context · Cache
Input: $0.48 / 1M tokens
Output: $0.96 / 1M tokens
Endpoints: anthropic, openai

Capabilities

Reasoning · Tools · Cache · Structured

Modalities

Input: text
Output: text

Quick stats

Context window: 1M
Max output: 384K
Tokenizer: DeepSeek


Supported parameters

Parameter             Default
frequency_penalty     (do not send)
include_reasoning
logit_bias
logprobs
max_tokens
min_p
presence_penalty      (do not send)
reasoning
repetition_penalty    (do not send)
response_format
seed
stop
structured_outputs
temperature           1
tool_choice
tools
top_k                 (do not send)
top_logprobs
top_p                 1
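Parameters marked "(do not send)" are best omitted from the payload entirely. A minimal sketch of a request body that sets a few of the supported sampling parameters (the values here are illustrative, not recommendations):

```python
import json

# Illustrative request body for deepseek-v4-pro; parameter names come
# from the supported-parameters table above, the values are made up.
payload = {
    "model": "deepseek-v4-pro",
    "messages": [{"role": "user", "content": "Summarize this diff."}],
    "max_tokens": 1024,
    "temperature": 1,  # documented default
    "top_p": 1,        # documented default
    "seed": 42,        # for reproducible sampling
    # frequency_penalty, presence_penalty, repetition_penalty and top_k
    # are listed as "(do not send)", so they are deliberately absent.
}
body = json.dumps(payload)
```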
§ 01

Pricing

Input price: $0.48 / 1M tokens
Output price: $0.96 / 1M tokens
Compatible endpoints: anthropic, openai
Vendor: DeepSeek
§ 02

Call deepseek-v4-pro from your code

Point any OpenAI-compatible SDK at UnoRouter and request the model by name. Replace YOUR_API_KEY with a real key from your dashboard.

bash
curl https://api.unorouter.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-v4-pro",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'


§ 03

Frequently asked questions

How much does deepseek-v4-pro cost per 1M tokens?

Input is priced at $0.48 per 1M tokens and output at $0.96 per 1M tokens. Billing is per token, with no rounding to batch sizes.
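Per-token billing makes cost estimation a one-line calculation. A quick sketch using the listed rates:

```python
# Cost estimate at the listed rates: $0.48 / 1M input, $0.96 / 1M output.
INPUT_PRICE_PER_M = 0.48
OUTPUT_PRICE_PER_M = 0.96

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request, billed strictly per token."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 250K-token prompt with a 50K-token completion.
cost = estimate_cost(250_000, 50_000)  # 0.12 + 0.048 = 0.168 USD
```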

How do I access deepseek-v4-pro via API?

Send requests to the UnoRouter /v1/chat/completions endpoint with model=deepseek-v4-pro. Any OpenAI-compatible client library works. Authentication uses a standard Bearer token.
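If you would rather not pull in an SDK, the same request can be built with nothing but the Python standard library. This sketch constructs (but does not send) the HTTP call; YOUR_API_KEY is a placeholder, and the endpoint URL is the one shown in the curl example above:

```python
import json
import urllib.request

# Build a POST to the UnoRouter chat completions endpoint with a
# standard Bearer token; YOUR_API_KEY is a placeholder.
payload = {
    "model": "deepseek-v4-pro",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "https://api.unorouter.ai/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would perform the call; it is left out
# here so the sketch runs without a real key or network access.
```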

§ 04

Try deepseek-v4-pro now

Create an API key and start making requests in under a minute.
