---
title: Zen
description: Curated list of models provided by OpenCode.
---

import config from "../../../config.mjs"

export const console = config.console
export const email = `mailto:${config.email}`

OpenCode Zen is a list of tested and verified models provided by the OpenCode team.

:::note
OpenCode Zen is currently in beta.
:::

Zen works like any other provider in OpenCode. You log in to OpenCode Zen and get
your API key. It's **completely optional** and you don't need it to use OpenCode.

---

## Background

There are a large number of models out there, but only a few of them work well
as coding agents. Additionally, providers configure the same model very
differently, so performance and quality can vary widely.

:::tip
We tested a select group of models and providers that work well with OpenCode.
:::

So if you are using a model through something like OpenRouter, you can never be
sure you are getting the best version of the model you want.

To fix this, we did a few things:

1. We tested a select group of models and talked to their teams about how to
   best run them.
2. We then worked with a few providers to make sure these models were being
   served correctly.
3. Finally, we benchmarked the model/provider combinations and came up with a
   list that we feel good recommending.

OpenCode Zen is an AI gateway that gives you access to these models.

---

## How it works

OpenCode Zen works like any other provider in OpenCode.

1. You sign in to **<a href={console}>OpenCode Zen</a>**, add your billing
   details, and copy your API key.
2. You run the `/connect` command in the TUI, select OpenCode Zen, and paste
   your API key.
3. Run `/models` in the TUI to see the list of models we recommend.

You are charged per request, and you can add credits to your account.

---

## Endpoints

You can also access our models through the following API endpoints.

| Model              | Model ID           | Endpoint                                           | AI SDK Package              |
| ------------------ | ------------------ | -------------------------------------------------- | --------------------------- |
| GPT 5.2            | gpt-5.2            | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5.2 Codex      | gpt-5.2-codex      | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5.1            | gpt-5.1            | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5.1 Codex      | gpt-5.1-codex      | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5.1 Codex Max  | gpt-5.1-codex-max  | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5.1 Codex Mini | gpt-5.1-codex-mini | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5              | gpt-5              | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5 Codex        | gpt-5-codex        | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| GPT 5 Nano         | gpt-5-nano         | `https://opencode.ai/zen/v1/responses`             | `@ai-sdk/openai`            |
| Claude Sonnet 4.5  | claude-sonnet-4-5  | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| Claude Sonnet 4    | claude-sonnet-4    | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| Claude Haiku 4.5   | claude-haiku-4-5   | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| Claude Haiku 3.5   | claude-3-5-haiku   | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| Claude Opus 4.5    | claude-opus-4-5    | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| Claude Opus 4.1    | claude-opus-4-1    | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| Gemini 3 Pro       | gemini-3-pro       | `https://opencode.ai/zen/v1/models/gemini-3-pro`   | `@ai-sdk/google`            |
| Gemini 3 Flash     | gemini-3-flash     | `https://opencode.ai/zen/v1/models/gemini-3-flash` | `@ai-sdk/google`            |
| MiniMax M2.1       | minimax-m2.1       | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| MiniMax M2.1 Free  | minimax-m2.1-free  | `https://opencode.ai/zen/v1/messages`              | `@ai-sdk/anthropic`         |
| GLM 4.7            | glm-4.7            | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| GLM 4.7 Free       | glm-4.7-free       | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| GLM 4.6            | glm-4.6            | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| Kimi K2.5          | kimi-k2.5          | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| Kimi K2 Thinking   | kimi-k2-thinking   | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| Kimi K2            | kimi-k2            | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| Qwen3 Coder 480B   | qwen3-coder        | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |
| Big Pickle         | big-pickle         | `https://opencode.ai/zen/v1/chat/completions`      | `@ai-sdk/openai-compatible` |

The [model ID](/docs/config/#models) in your OpenCode config uses the format
`opencode/<model-id>`. For example, for GPT 5.2 Codex, you would use
`opencode/gpt-5.2-codex` in your config.
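
If you're calling these endpoints directly with the AI SDK packages above, a
minimal sketch might look like the following. It assumes your Zen API key is in
a `ZEN_API_KEY` environment variable, that the gateway accepts the SDK's
standard bearer-token auth, and that each client's `baseURL` is the endpoint
above minus its final path segment; check the AI SDK docs for the exact options
of the package you're using.

```ts
import { generateText } from "ai"
import { createOpenAI } from "@ai-sdk/openai"
import { createOpenAICompatible } from "@ai-sdk/openai-compatible"

// Point the standard AI SDK clients at the Zen endpoints.
// ZEN_API_KEY is assumed to hold the key copied from the Zen console.
const zenOpenAI = createOpenAI({
  baseURL: "https://opencode.ai/zen/v1",
  apiKey: process.env.ZEN_API_KEY,
})

const zenCompatible = createOpenAICompatible({
  name: "opencode-zen",
  baseURL: "https://opencode.ai/zen/v1",
  apiKey: process.env.ZEN_API_KEY,
})

// GPT 5.2 Codex is served through the Responses endpoint.
const codex = await generateText({
  model: zenOpenAI.responses("gpt-5.2-codex"),
  prompt: "Write a function that reverses a linked list.",
})

// Kimi K2.5 is served through the chat/completions endpoint.
const kimi = await generateText({
  model: zenCompatible("kimi-k2.5"),
  prompt: "Summarize what a mutex does.",
})

console.log(codex.text)
console.log(kimi.text)
```

The `@ai-sdk/anthropic` and `@ai-sdk/google` clients can likely be pointed at
their endpoints the same way, but verify the exact `baseURL` each package
expects before relying on it.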

---

### Models

You can fetch the full list of available models and their metadata from:

```
https://opencode.ai/zen/v1/models
```
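
For example, here's a rough sketch of fetching that list. The bearer-token
header and the shape of the response are assumptions; inspect the actual
response to see the metadata returned for each model.

```ts
// List the models available through OpenCode Zen.
// ZEN_API_KEY is assumed to hold your Zen API key.
const res = await fetch("https://opencode.ai/zen/v1/models", {
  headers: { Authorization: `Bearer ${process.env.ZEN_API_KEY}` },
})

if (!res.ok) throw new Error(`Failed to list models: ${res.status}`)

const models = await res.json()
console.log(models)
```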

---

## Pricing

We support a pay-as-you-go model. Below are the prices **per 1M tokens**, with
a worked example after the table.

| Model                             | Input  | Output | Cached Read | Cached Write |
| --------------------------------- | ------ | ------ | ----------- | ------------ |
| Big Pickle                        | Free   | Free   | Free        | -            |
| MiniMax M2.1 Free                 | Free   | Free   | Free        | -            |
| MiniMax M2.1                      | $0.30  | $1.20  | $0.10       | -            |
| GLM 4.7 Free                      | Free   | Free   | Free        | -            |
| GLM 4.7                           | $0.60  | $2.20  | $0.10       | -            |
| GLM 4.6                           | $0.60  | $2.20  | $0.10       | -            |
| Kimi K2.5                         | $0.60  | $3.00  | $0.10       | -            |
| Kimi K2 Thinking                  | $0.40  | $2.50  | -           | -            |
| Kimi K2                           | $0.40  | $2.50  | -           | -            |
| Qwen3 Coder 480B                  | $0.45  | $1.50  | -           | -            |
| Claude Sonnet 4.5 (≤ 200K tokens) | $3.00  | $15.00 | $0.30       | $3.75        |
| Claude Sonnet 4.5 (> 200K tokens) | $6.00  | $22.50 | $0.60       | $7.50        |
| Claude Sonnet 4 (≤ 200K tokens)   | $3.00  | $15.00 | $0.30       | $3.75        |
| Claude Sonnet 4 (> 200K tokens)   | $6.00  | $22.50 | $0.60       | $7.50        |
| Claude Haiku 4.5                  | $1.00  | $5.00  | $0.10       | $1.25        |
| Claude Haiku 3.5                  | $0.80  | $4.00  | $0.08       | $1.00        |
| Claude Opus 4.5                   | $5.00  | $25.00 | $0.50       | $6.25        |
| Claude Opus 4.1                   | $15.00 | $75.00 | $1.50       | $18.75       |
| Gemini 3 Pro (≤ 200K tokens)      | $2.00  | $12.00 | $0.20       | -            |
| Gemini 3 Pro (> 200K tokens)      | $4.00  | $18.00 | $0.40       | -            |
| Gemini 3 Flash                    | $0.50  | $3.00  | $0.05       | -            |
| GPT 5.2                           | $1.75  | $14.00 | $0.175      | -            |
| GPT 5.2 Codex                     | $1.75  | $14.00 | $0.175      | -            |
| GPT 5.1                           | $1.07  | $8.50  | $0.107      | -            |
| GPT 5.1 Codex                     | $1.07  | $8.50  | $0.107      | -            |
| GPT 5.1 Codex Max                 | $1.25  | $10.00 | $0.125      | -            |
| GPT 5.1 Codex Mini                | $0.25  | $2.00  | $0.025      | -            |
| GPT 5                             | $1.07  | $8.50  | $0.107      | -            |
| GPT 5 Codex                       | $1.07  | $8.50  | $0.107      | -            |
| GPT 5 Nano                        | Free   | Free   | Free        | -            |
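
As a rough illustration of the per-1M-token pricing, the sketch below estimates
the cost of a single hypothetical GPT 5.2 request. The token counts are made up
and cache reads are ignored.

```ts
// Rates from the table above, in dollars per 1M tokens (GPT 5.2).
const inputPerMillion = 1.75
const outputPerMillion = 14.0

// Hypothetical request: 120K input tokens, 8K output tokens.
const inputTokens = 120_000
const outputTokens = 8_000

const cost =
  (inputTokens / 1_000_000) * inputPerMillion +
  (outputTokens / 1_000_000) * outputPerMillion

// 0.12 * 1.75 + 0.008 * 14 = 0.21 + 0.112 = $0.322
console.log(`Estimated cost: $${cost.toFixed(3)}`)
```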

You might notice _Claude Haiku 3.5_ in your usage history. This is a
[low-cost model](/docs/config/#models) that's used to generate the titles of
your sessions.

:::note
Credit card fees are passed along at cost (4.4% + $0.30 per transaction); we don't charge anything beyond that.
:::

The free models:

- GLM 4.7 is currently free on OpenCode for a limited time. The team is using this time to collect feedback and improve the model.
- MiniMax M2.1 is currently free on OpenCode for a limited time. The team is using this time to collect feedback and improve the model.
- Big Pickle is a stealth model that's free on OpenCode for a limited time. The team is using this time to collect feedback and improve the model.

<a href={email}>Contact us</a> if you have any questions.

---

### Auto-reload

If your balance goes below $5, Zen will automatically reload $20. You can
change the auto-reload amount or disable auto-reload entirely.

---

### Monthly limits

You can also set a monthly usage limit for the entire workspace and for each
member of your team.

For example, if you set a monthly usage limit of $20, Zen will not spend more
than $20 in a month. But if you have auto-reload enabled, Zen might still
charge you more than $20, since it reloads your balance whenever it drops
below $5.

---

## Privacy

All our models are hosted in the US. Our providers follow a zero-retention policy and do not use your data for model training, with the following exceptions:

- GLM 4.7: During its free period, collected data may be used to improve the model.
- MiniMax M2.1: During its free period, collected data may be used to improve the model.
- Big Pickle: During its free period, collected data may be used to improve the model.
- OpenAI APIs: Requests are retained for 30 days in accordance with [OpenAI's Data Policies](https://platform.openai.com/docs/guides/your-data).
- Anthropic APIs: Requests are retained for 30 days in accordance with [Anthropic's Data Policies](https://docs.anthropic.com/en/docs/claude-code/data-usage).

---

## For Teams

Zen also works great for teams. You can invite teammates, assign roles, curate
the models your team uses, and more.

:::note
Workspaces are currently free for teams as a part of the beta.
:::

We'll be sharing more details on workspace pricing soon.

---

### Roles

You can invite teammates to your workspace and assign roles:

- **Admin**: Manage models, members, API keys, and billing
- **Member**: Manage only their own API keys

Admins can also set monthly spending limits for each member to keep costs under control.

---

### Model access

Admins can enable or disable specific models for the workspace. Requests made
to a disabled model will return an error.

This is useful if, for example, you want to block a model that collects data.

---

### Bring your own key

You can use your own OpenAI or Anthropic API keys while still accessing other
models in Zen. When you use your own keys, tokens are billed directly by the
provider, not by Zen.

For example, your organization might already have a key for OpenAI or Anthropic
and you want to use that instead of the one that Zen provides.

---

## Goals

We created OpenCode Zen to:

1. **Benchmark** the best models/providers for coding agents.
2. Have access to the **highest quality** options and not downgrade performance or route to cheaper providers.
3. Pass along any **price drops** by selling at cost, so the only markup is to cover our processing fees.
4. Have **no lock-in**: you can use Zen with any other coding agent, and you can always use any other provider with OpenCode as well.