---
title: Zen
description: Curated list of models provided by OpenCode.
---

import config from "../../../config.mjs"

export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Zen is a list of tested and verified models provided by the OpenCode team.

:::note
OpenCode Zen is currently in beta.
:::

Zen works like any other provider in OpenCode. You log in to OpenCode Zen and get
your API key. It's **completely optional**; you don't need it to use OpenCode.

---

## Background

There are a large number of models out there, but only a few of them work well
as coding agents. Additionally, providers configure these models very
differently, so you get very different performance and quality.

:::tip
We tested a select group of models and providers that work well with OpenCode.
:::

So if you are using a model through something like OpenRouter, you can never be
sure you are getting the best version of the model you want.

To fix this, we did a few things:

1. We tested a select group of models and talked to their teams about how to
   best run them.
2. We then worked with a few providers to make sure these models were being
   served correctly.
3. Finally, we benchmarked each model/provider combination and came up with a
   list we feel good recommending.

OpenCode Zen is an AI gateway that gives you access to these models.

---

## How it works

OpenCode Zen works like any other provider in OpenCode.

1. You sign in to **<a href={console}>OpenCode Zen</a>**, add your billing
   details, and copy your API key.
2. You run the `/connect` command in the TUI, select OpenCode Zen, and paste
   your API key.
3. You run `/models` in the TUI to see the list of models we recommend.

You are charged per request, and you can add credits to your account.

---

## Endpoints

You can also access our models through the following API endpoints.

| Model              | Model ID           | Endpoint                                           | AI SDK Package              |
| ------------------ | ------------------ | -------------------------------------------------- | --------------------------- |
| GPT 5.2            | gpt-5.2            | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5.2 Codex      | gpt-5.2-codex      | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5.1            | gpt-5.1            | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5.1 Codex      | gpt-5.1-codex      | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5.1 Codex Max  | gpt-5.1-codex-max  | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5.1 Codex Mini | gpt-5.1-codex-mini | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5              | gpt-5              | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5 Codex        | gpt-5-codex        | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5 Nano         | gpt-5-nano         | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| Claude Sonnet 4.5  | claude-sonnet-4-5  | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| Claude Sonnet 4    | claude-sonnet-4    | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| Claude Haiku 4.5   | claude-haiku-4-5   | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| Claude Haiku 3.5   | claude-3-5-haiku   | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| Claude Opus 4.5    | claude-opus-4-5    | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| Claude Opus 4.1    | claude-opus-4-1    | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| Gemini 3 Pro       | gemini-3-pro       | `https://opencode.ai/zen/v1/models/gemini-3-pro`   | `@ai-sdk/google`            |
| Gemini 3 Flash     | gemini-3-flash     | `https://opencode.ai/zen/v1/models/gemini-3-flash` | `@ai-sdk/google`            |
| MiniMax M2.1       | minimax-m2.1       | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| MiniMax M2.1 Free  | minimax-m2.1-free  | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| GLM 4.7            | glm-4.7            | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| GLM 4.7 Free       | glm-4.7-free       | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| GLM 4.6            | glm-4.6            | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| Kimi K2.5          | kimi-k2.5          | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| Kimi K2 Thinking   | kimi-k2-thinking   | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| Kimi K2            | kimi-k2            | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| Qwen3 Coder 480B   | qwen3-coder        | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| Big Pickle         | big-pickle         | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
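
For example, here's a minimal sketch of calling one of the `chat/completions` models with the `@ai-sdk/openai-compatible` package listed above. It assumes your Zen API key is sent as a standard bearer token, that `https://opencode.ai/zen/v1` works as the base URL (the SDK appends `/chat/completions` itself), and the environment variable name is just for illustration.

```ts
import { createOpenAICompatible } from "@ai-sdk/openai-compatible"
import { generateText } from "ai"

// Assumption: the Zen API key is accepted as a standard bearer token.
const zen = createOpenAICompatible({
  name: "opencode-zen",
  baseURL: "https://opencode.ai/zen/v1",
  apiKey: process.env.OPENCODE_ZEN_API_KEY,
})

const { text } = await generateText({
  // Model IDs come from the table above.
  model: zen("kimi-k2"),
  prompt: "Write a one-line commit message for a typo fix.",
})

console.log(text)
```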

The [model ID](/docs/config/#models) in your OpenCode config uses the format
`opencode/<model-id>`. For example, for GPT 5.2 Codex, you would use
`opencode/gpt-5.2-codex` in your config.
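
As a minimal sketch, assuming the standard `opencode.json` config file described in the [config docs](/docs/config/#models), that looks like:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "opencode/gpt-5.2-codex"
}
```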

---

### Models

You can fetch the full list of available models and their metadata from:

```
https://opencode.ai/zen/v1/models
```
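
For example, a quick sketch of fetching the list; the bearer-token header and the environment variable name are assumptions, not something documented here:

```ts
// Assumption: the endpoint accepts your Zen API key as a bearer token.
const res = await fetch("https://opencode.ai/zen/v1/models", {
  headers: { Authorization: `Bearer ${process.env.OPENCODE_ZEN_API_KEY}` },
})

const models = await res.json()
console.log(models)
```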

---

## Pricing

We support a pay-as-you-go model. Below are the prices **per 1M tokens**.

| Model                             | Input  | Output | Cached Read | Cached Write |
| --------------------------------- | ------ | ------ | ----------- | ------------ |
| Big Pickle                        | Free   | Free   | Free        | -            |
| MiniMax M2.1 Free                 | Free   | Free   | Free        | -            |
| MiniMax M2.1                      | $0.30  | $1.20  | $0.10       | -            |
| GLM 4.7 Free                      | Free   | Free   | Free        | -            |
| GLM 4.7                           | $0.60  | $2.20  | $0.10       | -            |
| GLM 4.6                           | $0.60  | $2.20  | $0.10       | -            |
| Kimi K2.5                         | $0.60  | $3.00  | $0.10       | -            |
| Kimi K2 Thinking                  | $0.40  | $2.50  | -           | -            |
| Kimi K2                           | $0.40  | $2.50  | -           | -            |
| Qwen3 Coder 480B                  | $0.45  | $1.50  | -           | -            |
| Claude Sonnet 4.5 (≤ 200K tokens) | $3.00  | $15.00 | $0.30       | $3.75        |
| Claude Sonnet 4.5 (> 200K tokens) | $6.00  | $22.50 | $0.60       | $7.50        |
| Claude Sonnet 4 (≤ 200K tokens)   | $3.00  | $15.00 | $0.30       | $3.75        |
| Claude Sonnet 4 (> 200K tokens)   | $6.00  | $22.50 | $0.60       | $7.50        |
| Claude Haiku 4.5                  | $1.00  | $5.00  | $0.10       | $1.25        |
| Claude Haiku 3.5                  | $0.80  | $4.00  | $0.08       | $1.00        |
| Claude Opus 4.5                   | $5.00  | $25.00 | $0.50       | $6.25        |
| Claude Opus 4.1                   | $15.00 | $75.00 | $1.50       | $18.75       |
| Gemini 3 Pro (≤ 200K tokens)      | $2.00  | $12.00 | $0.20       | -            |
| Gemini 3 Pro (> 200K tokens)      | $4.00  | $18.00 | $0.40       | -            |
| Gemini 3 Flash                    | $0.50  | $3.00  | $0.05       | -            |
| GPT 5.2                           | $1.75  | $14.00 | $0.175      | -            |
| GPT 5.2 Codex                     | $1.75  | $14.00 | $0.175      | -            |
| GPT 5.1                           | $1.07  | $8.50  | $0.107      | -            |
| GPT 5.1 Codex                     | $1.07  | $8.50  | $0.107      | -            |
| GPT 5.1 Codex Max                 | $1.25  | $10.00 | $0.125      | -            |
| GPT 5.1 Codex Mini                | $0.25  | $2.00  | $0.025      | -            |
| GPT 5                             | $1.07  | $8.50  | $0.107      | -            |
| GPT 5 Codex                       | $1.07  | $8.50  | $0.107      | -            |
| GPT 5 Nano                        | Free   | Free   | Free        | -            |
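
As a rough illustration of how per-1M-token pricing adds up, here's a small sketch that estimates the cost of a single request from its token counts (prices taken from the table above; cached tokens ignored for simplicity):

```ts
// Prices are per 1M tokens, taken from the pricing table above.
function estimateCost(
  inputTokens: number,
  outputTokens: number,
  inputPrice: number,
  outputPrice: number,
): number {
  return (inputTokens / 1_000_000) * inputPrice + (outputTokens / 1_000_000) * outputPrice
}

// GPT 5.2: $1.75 input, $14.00 output per 1M tokens.
// 12,000 input + 2,500 output tokens ≈ $0.021 + $0.035 = $0.056
console.log(estimateCost(12_000, 2_500, 1.75, 14.0))
```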

You might notice _Claude Haiku 3.5_ in your usage history. This is a
[low-cost model](/docs/config/#models) that's used to generate the titles of
your sessions.

:::note
Credit card fees are passed along at cost (4.4% + $0.30 per transaction); we don't charge anything beyond that.
:::

The free models:

- GLM 4.7 is currently free on OpenCode for a limited time. The team is using this time to collect feedback and improve the model.
- MiniMax M2.1 is currently free on OpenCode for a limited time. The team is using this time to collect feedback and improve the model.
- Big Pickle is a stealth model that's free on OpenCode for a limited time. The team is using this time to collect feedback and improve the model.

<a href={email}>Contact us</a> if you have any questions.

---

### Auto-reload

If your balance goes below $5, Zen will automatically reload $20. You can
change the auto-reload amount or disable auto-reload entirely.

---

### Monthly limits

You can also set a monthly usage limit for the entire workspace and for each
member of your team.

For example, if you set a monthly usage limit of $20, Zen will not use more
than $20 in a month. But if you have auto-reload enabled, Zen might still end
up charging you more than $20 whenever your balance goes below $5.

---

## Privacy

All our models are hosted in the US. Our providers follow a zero-retention policy and do not use your data for model training, with the following exceptions:

- GLM 4.7: During its free period, collected data may be used to improve the model.
- MiniMax M2.1: During its free period, collected data may be used to improve the model.
- Big Pickle: During its free period, collected data may be used to improve the model.
- OpenAI APIs: Requests are retained for 30 days in accordance with [OpenAI's Data Policies](https://platform.openai.com/docs/guides/your-data).
- Anthropic APIs: Requests are retained for 30 days in accordance with [Anthropic's Data Policies](https://docs.anthropic.com/en/docs/claude-code/data-usage).

---

## For Teams

Zen also works great for teams. You can invite teammates, assign roles, curate
the models your team uses, and more.

:::note
Workspaces are currently free for teams as a part of the beta.
:::

We'll be sharing more details on pricing soon.

---

### Roles

You can invite teammates to your workspace and assign roles:

- **Admin**: Manage models, members, API keys, and billing
- **Member**: Manage only their own API keys

Admins can also set monthly spending limits for each member to keep costs under control.

---

### Model access

Admins can enable or disable specific models for the workspace. Requests made to a disabled model will return an error.

This is useful when you want to block a model that collects data.

---

### Bring your own key

You can use your own OpenAI or Anthropic API keys while still accessing the other models in Zen. When you use your own keys, tokens are billed directly by the provider, not by Zen.

For example, your organization might already have an OpenAI or Anthropic key and prefer to use that instead of the one Zen provides.

---

## Goals

We created OpenCode Zen to:

1. **Benchmark** the best models and providers for coding agents.
2. Give you access to the **highest quality** options, without downgrading performance or routing to cheaper providers.
3. Pass along any **price drops** by selling at cost, so the only markup covers our processing fees.
4. Have **no lock-in**: you can use Zen with any other coding agent, and you can always use any other provider with OpenCode.