trinity-large-thinking
Trinity Large Thinking is a powerful open-source reasoning model from the team at Arcee AI. It shows strong performance on PinchBench and in agentic and reasoning workloads. Launch video: https://youtu.be/Gc82AXLa0Rg?si=4RLn6WBz33qT--B7...
Supported parameters
| Parameter | Always | Default |
|---|---|---|
| frequency_penalty | — | (do not send) |
| include_reasoning | — | — |
| logit_bias | — | — |
| max_tokens | — | — |
| presence_penalty | — | (do not send) |
| reasoning | — | — |
| repetition_penalty | — | (do not send) |
| response_format | — | — |
| seed | — | — |
| stop | — | — |
| structured_outputs | — | — |
| temperature | 0.3 | — |
| tool_choice | — | — |
| tools | — | — |
| top_k | (do not send) | — |
| top_p | 0.8 | — |
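The parameter table above can be turned into a concrete request body. Below is a minimal sketch in Python, assuming the pinned `temperature` and `top_p` values are sent explicitly, that the "(do not send)" parameters are simply omitted, and that the `max_tokens` value and message content are placeholders:

```python
import json

# Build a chat-completion payload that follows the parameter table:
# temperature and top_p use the pinned values; top_k and the penalty
# parameters are omitted entirely rather than sent as null.
payload = {
    "model": "trinity-large-thinking",
    "messages": [{"role": "user", "content": "Summarize this ticket."}],
    "temperature": 0.3,
    "top_p": 0.8,
    "max_tokens": 1024,  # placeholder value, not from the table
}

# Parameters the table marks "(do not send)" must not appear at all.
for banned in ("top_k", "frequency_penalty",
               "presence_penalty", "repetition_penalty"):
    assert banned not in payload

print(json.dumps(payload, indent=2))
```

Omitting a parameter (rather than sending `null` or `0`) lets the server apply its own handling for unsupported fields.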
Pricing
| | |
|---|---|
| Input price | $0.00 · 1M tokens |
| Output price | $0.00 · 1M tokens |
| Context window | 262.1K tokens |
| Compatible endpoints | openai |
| Vendor | Unknown |
Call trinity-large-thinking from your code
Point any OpenAI-compatible SDK at UnoRouter and request the model by name. Replace YOUR_API_KEY with a real key from your dashboard.
curl https://api.unorouter.ai/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "trinity-large-thinking",
"messages": [{"role": "user", "content": "Hello!"}]
}'
Frequently asked questions
How much does trinity-large-thinking cost per 1M tokens?
Input is priced at $0.00 per 1M tokens and output at $0.00 per 1M tokens. Billing is per token, with no rounding to batch sizes.
How do I access trinity-large-thinking via API?
Send requests to the UnoRouter /v1/chat/completions endpoint with model=trinity-large-thinking. Any OpenAI-compatible client library works. Authentication uses a standard Bearer token.
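The same request the curl example makes can be built from any language. A minimal sketch using only the Python standard library, assuming the UnoRouter endpoint shown above and a placeholder API key (the actual network call is left commented out):

```python
import json
import urllib.request

# Placeholder values: replace with a real key from your dashboard.
API_URL = "https://api.unorouter.ai/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

body = json.dumps({
    "model": "trinity-large-thinking",
    "messages": [{"role": "user", "content": "Hello!"}],
}).encode("utf-8")

# Standard Bearer-token authentication, matching the curl example.
req = urllib.request.Request(
    API_URL,
    data=body,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(req.full_url, req.get_method())
```

An OpenAI-compatible SDK works the same way: point its base URL at UnoRouter and pass `model="trinity-large-thinking"`.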
What is the context window of trinity-large-thinking?
trinity-large-thinking supports a context window of 262.1K tokens, shared between your prompt and the model's response.
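Because prompt and response share the window, it can help to budget output tokens up front. A rough sketch, assuming the 262.1K window is 262,144 tokens and using a crude 4-characters-per-token heuristic rather than a real tokenizer:

```python
# Context window shared between prompt and response (assumption:
# the page's "262.1K tokens" is taken as 262,144).
CONTEXT_WINDOW = 262_144
CHARS_PER_TOKEN = 4  # rough heuristic, not a real tokenizer

def remaining_output_budget(prompt: str, reserved: int = 0) -> int:
    """Estimate tokens left for the model's response after the prompt."""
    prompt_tokens = len(prompt) // CHARS_PER_TOKEN
    return max(CONTEXT_WINDOW - prompt_tokens - reserved, 0)

# 40,000 characters ≈ 10,000 tokens under this heuristic.
print(remaining_output_budget("x" * 40_000))  # → 252144
```

For accurate counts, use the model's actual tokenizer; this estimate only guards against grossly oversized prompts.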
Try trinity-large-thinking now
Create an API key and start making requests in under a minute.