gemma-4-26b-a4b-it
Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model from Google DeepMind. Although it has 25.2B total parameters, only 3.8B are active per token during inference, delivering quality close to a dense ~31B model at a fraction of the inference cost.
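To see why active parameters are so much smaller than total parameters, here is a minimal, purely illustrative sketch of top-k MoE routing in NumPy. All sizes (expert count, top-k, hidden width) are made-up toy values, not this model's actual configuration:

```python
import numpy as np

# Toy dimensions for illustration only; NOT the real model config.
NUM_EXPERTS = 16   # experts available in one MoE layer
TOP_K = 2          # experts actually activated per token
HIDDEN = 8         # hidden-state width

rng = np.random.default_rng(0)
token = rng.standard_normal(HIDDEN)                    # one token's hidden state
gate_w = rng.standard_normal((HIDDEN, NUM_EXPERTS))    # router weights
expert_w = rng.standard_normal((NUM_EXPERTS, HIDDEN, HIDDEN))  # expert weights

# The router scores every expert, but only the top-k run for this token.
logits = token @ gate_w
top_k = np.argsort(logits)[-TOP_K:]                    # indices of activated experts
weights = np.exp(logits[top_k]) / np.exp(logits[top_k]).sum()  # softmax over top-k

# Only the selected experts' parameters are touched; the rest stay idle,
# which is why "active" parameters are far fewer than "total" parameters.
out = sum(w * (expert_w[e] @ token) for w, e in zip(weights, top_k))
```

Each token exercises 2 of the 16 toy experts here; the same principle lets a 25.2B-parameter model run with only 3.8B parameters per token.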
Pricing

| | |
| --- | --- |
| Input price | $0.00 per 1M tokens |
| Output price | $0.00 per 1M tokens |
| Context window | 256K tokens |
| Compatible endpoints | openai |
| Vendor | Google DeepMind |
Call gemma-4-26b-a4b-it from your code
Point any OpenAI-compatible SDK at UnoRouter and request the model by name. Replace YOUR_API_KEY with a real key from your dashboard.
```shell
curl https://api.unorouter.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma-4-26b-a4b-it",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Frequently asked questions
How much does gemma-4-26b-a4b-it cost per 1M tokens?
Input is priced at $0.00 per 1M tokens and output at $0.00 per 1M tokens. Billing is strictly per token; usage is never rounded up to batch sizes.
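Per-token billing makes cost a simple pro-rata calculation. A small sketch (the function name is ours; the default rates are this model's listed $0.00 prices, so the result is zero today):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float = 0.00,
                 out_price_per_m: float = 0.00) -> float:
    """Cost in dollars: each side is billed pro rata per token,
    with no rounding up to batch sizes."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

print(request_cost(1200, 350))  # 0.0 at this model's current rates
```

Passing other per-1M rates shows the general formula, e.g. `request_cost(500_000, 0, in_price_per_m=2.0)` bills exactly half the input rate.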
How do I access gemma-4-26b-a4b-it via API?
Send requests to the UnoRouter /v1/chat/completions endpoint with model=gemma-4-26b-a4b-it. Any OpenAI-compatible client library works. Authentication uses a standard Bearer token.
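As a concrete sketch of that request using only the Python standard library (the helper names `build_request` and `ask` are ours, not part of any SDK; the endpoint, model name, and Bearer-token header come from the answer above):

```python
import json
import urllib.request

API_URL = "https://api.unorouter.ai/v1/chat/completions"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-compatible chat completion request for UnoRouter."""
    body = {
        "model": "gemma-4-26b-a4b-it",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # standard Bearer auth
            "Content-Type": "application/json",
        },
        method="POST",
    )

def ask(api_key: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(api_key, prompt)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# ask("YOUR_API_KEY", "Hello!")  # requires a real key and network access
```

Any OpenAI-compatible client library does the same thing under the hood: POST a JSON body with `model` and `messages` to `/v1/chat/completions` with a Bearer token.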
What is the context window of gemma-4-26b-a4b-it?
gemma-4-26b-a4b-it supports a context window of 256K tokens, shared between your prompt and the model's response.
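Because the window is shared between prompt and response, you can estimate how much room remains for output before sending. A rough sketch (the 4-characters-per-token heuristic is only an approximation; real counts come from the model's tokenizer, and the exact limit behind "256K" may differ slightly):

```python
CONTEXT_WINDOW = 256_000  # "256K" per the listing; shared by prompt + response

def max_output_tokens(prompt: str, chars_per_token: float = 4.0) -> int:
    """Rough remaining output budget for a given prompt.

    chars_per_token is a crude heuristic, not the model's tokenizer.
    """
    est_prompt_tokens = int(len(prompt) / chars_per_token)
    return max(CONTEXT_WINDOW - est_prompt_tokens, 0)

print(max_output_tokens("Hello!"))  # nearly the full window for a short prompt
```

For anything close to the limit, count tokens with the model's actual tokenizer rather than a character heuristic.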
Try gemma-4-26b-a4b-it now
Create an API key and start making requests in under a minute.