---
title: Providers
description: Using any LLM provider in OpenCode.
---

import config from "../../../config.mjs"
export const console = config.console

OpenCode uses the [AI SDK](https://ai-sdk.dev/) and [Models.dev](https://models.dev) to support **75+ LLM providers**, and it supports running local models.

To add a provider you need to:

1. Add the API keys for the provider using the `/connect` command.
2. Configure the provider in your OpenCode config.

---
### Credentials

When you add a provider's API keys with the `/connect` command, they are stored
in `~/.local/share/opencode/auth.json`.
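For reference, the file maps each provider ID to the credential saved by `/connect`. The exact shape may vary between versions, so treat this as a hypothetical sketch rather than a spec:

```jsonc title="~/.local/share/opencode/auth.json"
{
  // illustrative only: field names here are assumptions, not a stable format
  "anthropic": {
    "type": "api",
    "key": "sk-..."
  }
}
```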
---

### Config

You can customize the providers through the `provider` section in your OpenCode
config.

---
#### Base URL

You can customize the base URL for any provider by setting the `baseURL` option. This is useful when using proxy services or custom endpoints.

```json title="opencode.json" {6}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "anthropic": {
      "options": {
        "baseURL": "https://api.anthropic.com/v1"
      }
    }
  }
}
```

---
## OpenCode Zen

OpenCode Zen is a list of models provided by the OpenCode team that have been
tested and verified to work well with OpenCode. [Learn more](/docs/zen).

:::tip
If you are new, we recommend starting with OpenCode Zen.
:::

1. Run the `/connect` command in the TUI, select opencode, and head to [opencode.ai/auth](https://opencode.ai/auth).

   ```txt
   /connect
   ```

2. Sign in, add your billing details, and copy your API key.

3. Paste your API key.

   ```txt
   ┌ API key
   └ enter
   ```

4. Run `/models` in the TUI to see the list of models we recommend.

   ```txt
   /models
   ```

It works like any other provider in OpenCode and is completely optional to use.

---
## Directory

Let's look at some of the providers in detail. If you'd like to add a provider to the
list, feel free to open a PR.

:::note
Don't see a provider here? Submit a PR.
:::

---
### 302.AI

1. Head over to the [302.AI console](https://302.ai/), create an account, and generate an API key.

2. Run the `/connect` command and search for **302.AI**.

   ```txt
   /connect
   ```

3. Enter your 302.AI API key.

   ```txt
   ┌ API key
   └ enter
   ```

4. Run the `/models` command to select a model.

   ```txt
   /models
   ```

---
### Amazon Bedrock

To use Amazon Bedrock with OpenCode:

1. Head over to the **Model catalog** in the Amazon Bedrock console and request
   access to the models you want.

   :::tip
   You need to have access to the model you want in Amazon Bedrock.
   :::

2. **Configure authentication** using one of the following methods:

   #### Environment Variables (Quick Start)

   Set one of these environment variables while running opencode:

   ```bash
   # Option 1: Using AWS access keys
   AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY opencode

   # Option 2: Using a named AWS profile
   AWS_PROFILE=my-profile opencode

   # Option 3: Using a Bedrock bearer token
   AWS_BEARER_TOKEN_BEDROCK=XXX opencode
   ```

   Or add them to your bash profile:

   ```bash title="~/.bash_profile"
   export AWS_PROFILE=my-dev-profile
   export AWS_REGION=us-east-1
   ```

   #### Configuration File (Recommended)

   For project-specific or persistent configuration, use `opencode.json`:

   ```json title="opencode.json"
   {
     "$schema": "https://opencode.ai/config.json",
     "provider": {
       "amazon-bedrock": {
         "options": {
           "region": "us-east-1",
           "profile": "my-aws-profile"
         }
       }
     }
   }
   ```

   **Available options:**

   - `region` - AWS region (e.g., `us-east-1`, `eu-west-1`)
   - `profile` - AWS named profile from `~/.aws/credentials`
   - `endpoint` - Custom endpoint URL for VPC endpoints (alias for the generic `baseURL` option)

   :::tip
   Configuration file options take precedence over environment variables.
   :::

   #### Advanced: VPC Endpoints

   If you're using VPC endpoints for Bedrock:

   ```json title="opencode.json"
   {
     "$schema": "https://opencode.ai/config.json",
     "provider": {
       "amazon-bedrock": {
         "options": {
           "region": "us-east-1",
           "profile": "production",
           "endpoint": "https://bedrock-runtime.us-east-1.vpce-xxxxx.amazonaws.com"
         }
       }
     }
   }
   ```

   :::note
   The `endpoint` option is an alias for the generic `baseURL` option, using AWS-specific terminology. If both `endpoint` and `baseURL` are specified, `endpoint` takes precedence.
   :::

   #### Authentication Methods

   - **`AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`**: Create an IAM user and generate access keys in the AWS Console
   - **`AWS_PROFILE`**: Use named profiles from `~/.aws/credentials`. First configure with `aws configure --profile my-profile` or `aws sso login`
   - **`AWS_BEARER_TOKEN_BEDROCK`**: Generate long-term API keys from the Amazon Bedrock console
   - **`AWS_WEB_IDENTITY_TOKEN_FILE` / `AWS_ROLE_ARN`**: For EKS IRSA (IAM Roles for Service Accounts) or other Kubernetes environments with OIDC federation. These environment variables are automatically injected by Kubernetes when using service account annotations.

   #### Authentication Precedence

   Amazon Bedrock uses the following authentication priority:

   1. **Bearer Token** - `AWS_BEARER_TOKEN_BEDROCK` environment variable or token from the `/connect` command
   2. **AWS Credential Chain** - Profile, access keys, shared credentials, IAM roles, Web Identity Tokens (EKS IRSA), instance metadata

   :::note
   When a bearer token is set (via `/connect` or `AWS_BEARER_TOKEN_BEDROCK`), it takes precedence over all AWS credential methods including configured profiles.
   :::
3. Run the `/models` command to select the model you want.

   ```txt
   /models
   ```

---
### Anthropic

We recommend signing up for [Claude Pro](https://www.anthropic.com/news/claude-pro) or [Max](https://www.anthropic.com/max).

1. Once you've signed up, run the `/connect` command and select Anthropic.

   ```txt
   /connect
   ```

2. Here you can select the **Claude Pro/Max** option and it'll open your browser
   and ask you to authenticate.

   ```txt
   ┌ Select auth method
   │ Claude Pro/Max
   │ Create an API Key
   │ Manually enter API Key
   ```

3. Now all the Anthropic models should be available when you use the `/models` command.

   ```txt
   /models
   ```

##### Using API keys

You can also select **Create an API Key** if you don't have a Pro/Max subscription. This also opens your browser, asks you to log in to Anthropic, and gives you a code to paste in your terminal.

Or if you already have an API key, you can select **Manually enter API Key** and paste it in your terminal.

---
### Azure OpenAI

:::note
If you encounter "I'm sorry, but I cannot assist with that request" errors, try changing the content filter from **DefaultV2** to **Default** in your Azure resource.
:::

1. Head over to the [Azure portal](https://portal.azure.com/) and create an **Azure OpenAI** resource. You'll need:

   - **Resource name**: This becomes part of your API endpoint (`https://RESOURCE_NAME.openai.azure.com/`)
   - **API key**: Either `KEY 1` or `KEY 2` from your resource

2. Go to [Azure AI Foundry](https://ai.azure.com/) and deploy a model.

   :::note
   The deployment name must match the model name for opencode to work properly.
   :::

3. Run the `/connect` command and search for **Azure**.

   ```txt
   /connect
   ```

4. Enter your API key.

   ```txt
   ┌ API key
   └ enter
   ```

5. Set your resource name as an environment variable:

   ```bash
   AZURE_RESOURCE_NAME=XXX opencode
   ```

   Or add it to your bash profile:

   ```bash title="~/.bash_profile"
   export AZURE_RESOURCE_NAME=XXX
   ```

6. Run the `/models` command to select your deployed model.

   ```txt
   /models
   ```

---
### Azure Cognitive Services

1. Head over to the [Azure portal](https://portal.azure.com/) and create an **Azure OpenAI** resource. You'll need:

   - **Resource name**: This becomes part of your API endpoint (`https://AZURE_COGNITIVE_SERVICES_RESOURCE_NAME.cognitiveservices.azure.com/`)
   - **API key**: Either `KEY 1` or `KEY 2` from your resource

2. Go to [Azure AI Foundry](https://ai.azure.com/) and deploy a model.

   :::note
   The deployment name must match the model name for opencode to work properly.
   :::

3. Run the `/connect` command and search for **Azure Cognitive Services**.

   ```txt
   /connect
   ```

4. Enter your API key.

   ```txt
   ┌ API key
   └ enter
   ```

5. Set your resource name as an environment variable:

   ```bash
   AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX opencode
   ```

   Or add it to your bash profile:

   ```bash title="~/.bash_profile"
   export AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX
   ```

6. Run the `/models` command to select your deployed model.

   ```txt
   /models
   ```

---
### Baseten

1. Head over to [Baseten](https://app.baseten.co/), create an account, and generate an API key.

2. Run the `/connect` command and search for **Baseten**.

   ```txt
   /connect
   ```

3. Enter your Baseten API key.

   ```txt
   ┌ API key
   └ enter
   ```

4. Run the `/models` command to select a model.

   ```txt
   /models
   ```

---
### Cerebras

1. Head over to the [Cerebras console](https://inference.cerebras.ai/), create an account, and generate an API key.

2. Run the `/connect` command and search for **Cerebras**.

   ```txt
   /connect
   ```

3. Enter your Cerebras API key.

   ```txt
   ┌ API key
   └ enter
   ```

4. Run the `/models` command to select a model like _Qwen 3 Coder 480B_.

   ```txt
   /models
   ```

---
### Cloudflare AI Gateway

Cloudflare AI Gateway lets you access models from OpenAI, Anthropic, Workers AI, and more through a unified endpoint. With [Unified Billing](https://developers.cloudflare.com/ai-gateway/features/unified-billing/) you don't need separate API keys for each provider.

1. Head over to the [Cloudflare dashboard](https://dash.cloudflare.com/), navigate to **AI** > **AI Gateway**, and create a new gateway.

2. Set your Account ID and Gateway ID as environment variables.

   ```bash title="~/.bash_profile"
   export CLOUDFLARE_ACCOUNT_ID=your-32-character-account-id
   export CLOUDFLARE_GATEWAY_ID=your-gateway-id
   ```

3. Run the `/connect` command and search for **Cloudflare AI Gateway**.

   ```txt
   /connect
   ```

4. Enter your Cloudflare API token.

   ```txt
   ┌ API key
   └ enter
   ```

   Or set it as an environment variable.

   ```bash title="~/.bash_profile"
   export CLOUDFLARE_API_TOKEN=your-api-token
   ```

5. Run the `/models` command to select a model.

   ```txt
   /models
   ```

You can also add models through your opencode config.

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "cloudflare-ai-gateway": {
      "models": {
        "openai/gpt-4o": {},
        "anthropic/claude-sonnet-4": {}
      }
    }
  }
}
```

---
### Cortecs

1. Head over to the [Cortecs console](https://cortecs.ai/), create an account, and generate an API key.

2. Run the `/connect` command and search for **Cortecs**.

   ```txt
   /connect
   ```

3. Enter your Cortecs API key.

   ```txt
   ┌ API key
   └ enter
   ```

4. Run the `/models` command to select a model like _Kimi K2 Instruct_.

   ```txt
   /models
   ```

---
### DeepSeek

1. Head over to the [DeepSeek console](https://platform.deepseek.com/), create an account, and click **Create new API key**.

2. Run the `/connect` command and search for **DeepSeek**.

   ```txt
   /connect
   ```

3. Enter your DeepSeek API key.

   ```txt
   ┌ API key
   └ enter
   ```

4. Run the `/models` command to select a DeepSeek model like _DeepSeek Reasoner_.

   ```txt
   /models
   ```

---
### Deep Infra

1. Head over to the [Deep Infra dashboard](https://deepinfra.com/dash), create an account, and generate an API key.

2. Run the `/connect` command and search for **Deep Infra**.

   ```txt
   /connect
   ```

3. Enter your Deep Infra API key.

   ```txt
   ┌ API key
   └ enter
   ```

4. Run the `/models` command to select a model.

   ```txt
   /models
   ```

---
### Firmware

1. Head over to the [Firmware dashboard](https://app.firmware.ai/signup), create an account, and generate an API key.

2. Run the `/connect` command and search for **Firmware**.

   ```txt
   /connect
   ```

3. Enter your Firmware API key.

   ```txt
   ┌ API key
   └ enter
   ```

4. Run the `/models` command to select a model.

   ```txt
   /models
   ```

---
### Fireworks AI

1. Head over to the [Fireworks AI console](https://app.fireworks.ai/), create an account, and click **Create API Key**.

2. Run the `/connect` command and search for **Fireworks AI**.

   ```txt
   /connect
   ```

3. Enter your Fireworks AI API key.

   ```txt
   ┌ API key
   └ enter
   ```

4. Run the `/models` command to select a model like _Kimi K2 Instruct_.

   ```txt
   /models
   ```

---
### GitLab Duo

GitLab Duo provides AI-powered agentic chat with native tool calling capabilities through GitLab's Anthropic proxy.

1. Run the `/connect` command and select GitLab.

   ```txt
   /connect
   ```

2. Choose your authentication method:

   ```txt
   ┌ Select auth method
   │ OAuth (Recommended)
   │ Personal Access Token
   ```

   #### Using OAuth (Recommended)

   Select **OAuth** and your browser will open for authorization.

   #### Using a Personal Access Token

   1. Go to [GitLab User Settings > Access Tokens](https://gitlab.com/-/user_settings/personal_access_tokens)
   2. Click **Add new token**
   3. Name: `OpenCode`, Scopes: `api`
   4. Copy the token (starts with `glpat-`)
   5. Enter it in the terminal

3. Run the `/models` command to see available models.

   ```txt
   /models
   ```

Three Claude-based models are available:

- **duo-chat-haiku-4-5** (Default) - Fast responses for quick tasks
- **duo-chat-sonnet-4-5** - Balanced performance for most workflows
- **duo-chat-opus-4-5** - Most capable for complex analysis
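To make one of these your default, you can set the top-level `model` field in your config. The `gitlab/duo-chat-sonnet-4-5` ID below is illustrative; use whichever ID `/models` shows you:

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "model": "gitlab/duo-chat-sonnet-4-5"
}
```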
##### Self-Hosted GitLab

For self-hosted GitLab instances:

```bash
GITLAB_INSTANCE_URL=https://gitlab.company.com GITLAB_TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx opencode
```

Or add to your bash profile:

```bash title="~/.bash_profile"
export GITLAB_INSTANCE_URL=https://gitlab.company.com
export GITLAB_TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx
```

##### Configuration

Customize through `opencode.json`:

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "gitlab": {
      "options": {
        "instanceUrl": "https://gitlab.com",
        "featureFlags": {
          "duo_agent_platform_agentic_chat": true,
          "duo_agent_platform": true
        }
      }
    }
  }
}
```

##### GitLab API Tools (Optional)

To access GitLab tools (merge requests, issues, pipelines, CI/CD, etc.):

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["@gitlab/opencode-gitlab-plugin"]
}
```

This plugin provides comprehensive GitLab repository management capabilities including MR reviews, issue tracking, pipeline monitoring, and more.

---
### GitHub Copilot

To use your GitHub Copilot subscription with opencode:

:::note
Some models might need a [Pro+ subscription](https://github.com/features/copilot/plans) to use.
Some models need to be manually enabled in your [GitHub Copilot settings](https://docs.github.com/en/copilot/how-tos/use-ai-models/configure-access-to-ai-models#setup-for-individual-use).
:::

1. Run the `/connect` command and search for GitHub Copilot.

   ```txt
   /connect
   ```

2. Navigate to [github.com/login/device](https://github.com/login/device) and enter the code.

   ```txt
   ┌ Login with GitHub Copilot
   │ https://github.com/login/device
   │ Enter code: 8F43-6FCF
   └ Waiting for authorization...
   ```

3. Now run the `/models` command to select the model you want.

   ```txt
   /models
   ```

---
### Google Vertex AI

To use Google Vertex AI with OpenCode:

1. Head over to the **Model Garden** in the Google Cloud Console and check the
   models available in your region.

   :::note
   You need to have a Google Cloud project with the Vertex AI API enabled.
   :::

2. Set the required environment variables:

   - `GOOGLE_CLOUD_PROJECT`: Your Google Cloud project ID
   - `VERTEX_LOCATION` (optional): The region for Vertex AI (defaults to `global`)
   - Authentication (choose one):
     - `GOOGLE_APPLICATION_CREDENTIALS`: Path to your service account JSON key file
     - Authenticate using the gcloud CLI: `gcloud auth application-default login`

   Set them while running opencode.

   ```bash
   GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json GOOGLE_CLOUD_PROJECT=your-project-id opencode
   ```

   Or add them to your bash profile.

   ```bash title="~/.bash_profile"
   export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
   export GOOGLE_CLOUD_PROJECT=your-project-id
   export VERTEX_LOCATION=global
   ```
   :::tip
   The `global` region improves availability and reduces errors at no extra cost. Use regional endpoints (e.g., `us-central1`) for data residency requirements. [Learn more](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-partner-models#regional_and_global_endpoints)
   :::

3. Run the `/models` command to select the model you want.

   ```txt
   /models
   ```

---
### Groq

1. Head over to the [Groq console](https://console.groq.com/), click **Create API Key**, and copy the key.

2. Run the `/connect` command and search for Groq.

   ```txt
   /connect
   ```

3. Enter the API key for the provider.

   ```txt
   ┌ API key
   └ enter
   ```

4. Run the `/models` command to select the one you want.

   ```txt
   /models
   ```

---
### Hugging Face

[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers) gives you access to open models served by 17+ providers.

1. Head over to [Hugging Face settings](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained) to create a token with permission to make calls to Inference Providers.

2. Run the `/connect` command and search for **Hugging Face**.

   ```txt
   /connect
   ```

3. Enter your Hugging Face token.

   ```txt
   ┌ API key
   └ enter
   ```

4. Run the `/models` command to select a model like _Kimi-K2-Instruct_ or _GLM-4.6_.

   ```txt
   /models
   ```

---
### Helicone

[Helicone](https://helicone.ai) is an LLM observability platform that provides logging, monitoring, and analytics for your AI applications. The Helicone AI Gateway automatically routes your requests to the appropriate provider based on the model.

1. Head over to [Helicone](https://helicone.ai), create an account, and generate an API key from your dashboard.

2. Run the `/connect` command and search for **Helicone**.

   ```txt
   /connect
   ```

3. Enter your Helicone API key.

   ```txt
   ┌ API key
   └ enter
   ```

4. Run the `/models` command to select a model.

   ```txt
   /models
   ```

For more providers and advanced features like caching and rate limiting, check the [Helicone documentation](https://docs.helicone.ai).

#### Optional Configs

If you see a feature or model from Helicone that isn't configured automatically through opencode, you can always configure it yourself. Grab the IDs of the models you want to add from [Helicone's Model Directory](https://helicone.ai/models).

```jsonc title="~/.config/opencode/opencode.jsonc"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "helicone": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Helicone",
      "options": {
        "baseURL": "https://ai-gateway.helicone.ai",
      },
      "models": {
        // Model IDs come from Helicone's model directory page
        "gpt-4o": {
          "name": "GPT-4o", // Your own custom name for the model
        },
        "claude-sonnet-4-20250514": {
          "name": "Claude Sonnet 4",
        },
      },
    },
  },
}
```

#### Custom Headers

Helicone supports custom headers for features like caching, user tracking, and session management. Add them to your provider config using `options.headers`:

```jsonc title="~/.config/opencode/opencode.jsonc"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "helicone": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Helicone",
      "options": {
        "baseURL": "https://ai-gateway.helicone.ai",
        "headers": {
          "Helicone-Cache-Enabled": "true",
          "Helicone-User-Id": "opencode",
        },
      },
    },
  },
}
```

##### Session tracking

Helicone's [Sessions](https://docs.helicone.ai/features/sessions) feature lets you group related LLM requests together. Use the [opencode-helicone-session](https://github.com/H2Shami/opencode-helicone-session) plugin to automatically log each OpenCode conversation as a session in Helicone.

```bash
npm install -g opencode-helicone-session
```

Add it to your config.

```json title="opencode.json"
{
  "plugin": ["opencode-helicone-session"]
}
```

The plugin injects `Helicone-Session-Id` and `Helicone-Session-Name` headers into your requests. In Helicone's Sessions page, you'll see each OpenCode conversation listed as a separate session.

##### Common Helicone headers

| Header                     | Description                                                   |
| -------------------------- | ------------------------------------------------------------- |
| `Helicone-Cache-Enabled`   | Enable response caching (`true`/`false`)                      |
| `Helicone-User-Id`         | Track metrics by user                                         |
| `Helicone-Property-[Name]` | Add custom properties (e.g., `Helicone-Property-Environment`) |
| `Helicone-Prompt-Id`       | Associate requests with prompt versions                       |

See the [Helicone Header Directory](https://docs.helicone.ai/helicone-headers/header-directory) for all available headers.

---
### llama.cpp

You can configure opencode to use local models through [llama.cpp's](https://github.com/ggml-org/llama.cpp) `llama-server` utility.
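Make sure `llama-server` is running first. A typical invocation might look like this; the model path and context size are placeholders for your own setup:

```bash
# Serve a local GGUF model with an OpenAI-compatible API on port 8080
llama-server -m ./qwen3-coder-30b-a3b.gguf --port 8080 -c 128000
```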
```json title="opencode.json" "llama.cpp" {5, 6, 8, 10-15}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "llama.cpp": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "llama-server (local)",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1"
      },
      "models": {
        "qwen3-coder:a3b": {
          "name": "Qwen3-Coder: a3b-30b (local)",
          "limit": {
            "context": 128000,
            "output": 65536
          }
        }
      }
    }
  }
}
```

In this example:

- `llama.cpp` is the custom provider ID. This can be any string you want.
- `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
- `name` is the display name for the provider in the UI.
- `options.baseURL` is the endpoint for the local server.
- `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.

---
### IO.NET

IO.NET offers 17 models optimized for various use cases.

1. Head over to the [IO.NET console](https://ai.io.net/), create an account, and generate an API key.

2. Run the `/connect` command and search for **IO.NET**.

   ```txt
   /connect
   ```

3. Enter your IO.NET API key.

   ```txt
   ┌ API key
   └ enter
   ```

4. Run the `/models` command to select a model.

   ```txt
   /models
   ```

---
### LM Studio
You can configure opencode to use local models through LM Studio.
```json title="opencode.json" "lmstudio" {5, 6, 8, 10-14}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (local)",
      "options": {
        "baseURL": "http://127.0.0.1:1234/v1"
      },
      "models": {
        "google/gemma-3n-e4b": {
          "name": "Gemma 3n-e4b (local)"
        }
      }
    }
  }
}
```
In this example:
- `lmstudio` is the custom provider ID. This can be any string you want.
- `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
- `name` is the display name for the provider in the UI.
- `options.baseURL` is the endpoint for the local server.
- `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
---
### Moonshot AI
To use Kimi K2 from Moonshot AI:
1. Head over to the [Moonshot AI console](https://platform.moonshot.ai/console), create an account, and click **Create API key**.
2. Run the `/connect` command and search for **Moonshot AI**.
```txt
/connect
```
3. Enter your Moonshot API key.
```txt
┌ API key
└ enter
```
4. Run the `/models` command to select _Kimi K2_.
```txt
/models
```
---
### MiniMax
1. Head over to the [MiniMax API Console](https://platform.minimax.io/login), create an account, and generate an API key.
2. Run the `/connect` command and search for **MiniMax**.
```txt
/connect
```
3. Enter your MiniMax API key.
```txt
┌ API key
└ enter
```
4. Run the `/models` command to select a model like _M2.1_.
```txt
/models
```
---
### Nebius Token Factory
1. Head over to the [Nebius Token Factory console](https://tokenfactory.nebius.com/), create an account, and click **Add Key**.
2. Run the `/connect` command and search for **Nebius Token Factory**.
```txt
/connect
```
3. Enter your Nebius Token Factory API key.
```txt
┌ API key
└ enter
```
4. Run the `/models` command to select a model like _Kimi K2 Instruct_.
```txt
/models
```
---
### Ollama
You can configure opencode to use local models through Ollama.
```json title="opencode.json" "ollama" {5, 6, 8, 10-14}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "llama2": {
          "name": "Llama 2"
        }
      }
    }
  }
}
```
In this example:
- `ollama` is the custom provider ID. This can be any string you want.
- `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
- `name` is the display name for the provider in the UI.
- `options.baseURL` is the endpoint for the local server.
- `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
:::tip
If tool calls aren't working, try increasing `num_ctx` in Ollama. Start around 16k-32k.
:::
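One way to raise `num_ctx` is to bake it into a derived model with a Modelfile. A sketch, assuming the base `llama2` model from the config above is already pulled (the `llama2-32k` name and the 32768 value are just placeholders):
```txt title="Modelfile"
FROM llama2
PARAMETER num_ctx 32768
```
Run `ollama create llama2-32k -f Modelfile`, then use `llama2-32k` as the model ID in your opencode config.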
---
### Ollama Cloud
To use Ollama Cloud with OpenCode:
1. Head over to [https://ollama.com/](https://ollama.com/) and sign in or create an account.
2. Navigate to **Settings** > **Keys** and click **Add API Key** to generate a new API key.
3. Copy the API key for use in OpenCode.
4. Run the `/connect` command and search for **Ollama Cloud**.
```txt
/connect
```
5. Enter your Ollama Cloud API key.
```txt
┌ API key
└ enter
```
6. **Important**: Before using cloud models in OpenCode, you must pull the model information locally:
```bash
ollama pull gpt-oss:20b-cloud
```
7. Run the `/models` command to select your Ollama Cloud model.
```txt
/models
```
---
### OpenAI
We recommend signing up for [ChatGPT Plus or Pro](https://chatgpt.com/pricing).
1. Once you've signed up, run the `/connect` command and select OpenAI.
```txt
/connect
```
2. Here you can select the **ChatGPT Plus/Pro** option and it'll open your browser and ask you to authenticate.
```txt
┌ Select auth method
│ ChatGPT Plus/Pro
│ Manually enter API Key
```
3. Now all the OpenAI models should be available when you use the `/models` command.
```txt
/models
```
##### Using API keys
If you already have an API key, you can select **Manually enter API Key** and paste it in your terminal.
---
### OpenCode Zen
OpenCode Zen is a list of tested and verified models provided by the OpenCode team. [Learn more](/docs/zen).
1. Sign in to **<a href={console}>OpenCode Zen</a>** and click **Create API Key**.
2. Run the `/connect` command and search for **OpenCode Zen**.
```txt
/connect
```
3. Enter your OpenCode API key.
```txt
┌ API key
└ enter
```
4. Run the `/models` command to select a model like _Qwen 3 Coder 480B_.
```txt
/models
```
---
### OpenRouter
1. Head over to the [OpenRouter dashboard](https://openrouter.ai/settings/keys), click **Create API Key**, and copy the key.
2. Run the `/connect` command and search for OpenRouter.
```txt
/connect
```
3. Enter the API key for the provider.
```txt
┌ API key
└ enter
```
4. Many OpenRouter models are preloaded by default. Run the `/models` command to select the one you want.
```txt
/models
```
You can also add additional models through your opencode config.
```json title="opencode.json" {6}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openrouter": {
      "models": {
        "somecoolnewmodel": {}
      }
    }
  }
}
```
5. You can also customize them through your opencode config. Here's an example of pinning a model to specific upstream providers.
```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openrouter": {
      "models": {
        "moonshotai/kimi-k2": {
          "options": {
            "provider": {
              "order": ["baseten"],
              "allow_fallbacks": false
            }
          }
        }
      }
    }
  }
}
```
---
### SAP AI Core
SAP AI Core provides access to 40+ models from OpenAI, Anthropic, Google, Amazon, Meta, Mistral, and AI21 through a unified platform.
1. Go to your [SAP BTP Cockpit](https://account.hana.ondemand.com/), navigate to your SAP AI Core service instance, and create a service key.
:::tip
The service key is a JSON object containing `clientid`, `clientsecret`, `url`, and `serviceurls.AI_API_URL`. You can find your AI Core instance under **Services** > **Instances and Subscriptions** in the BTP Cockpit.
:::
2. Run the `/connect` command and search for **SAP AI Core**.
```txt
/connect
```
3. Enter your service key JSON.
```txt
┌ Service key
└ enter
```
Or set the `AICORE_SERVICE_KEY` environment variable:
```bash
AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}' opencode
```
Or add it to your bash profile:
```bash title="~/.bash_profile"
export AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}'
```
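Because the service key has to be valid JSON with exactly these fields, a quick local check can save a failed `/connect`. A minimal sketch (not part of opencode) that verifies the fields named in the tip above:
```python
import json
import os

REQUIRED_FIELDS = ["clientid", "clientsecret", "url"]

def missing_service_key_fields(raw: str) -> list[str]:
    """Return the names of any required SAP AI Core service key fields that are absent."""
    key = json.loads(raw)
    missing = [field for field in REQUIRED_FIELDS if field not in key]
    if "AI_API_URL" not in key.get("serviceurls", {}):
        missing.append("serviceurls.AI_API_URL")
    return missing

# Check the key from the environment before launching opencode
raw = os.environ.get("AICORE_SERVICE_KEY", "{}")
print(missing_service_key_fields(raw))
```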
4. Optionally set deployment ID and resource group:
```bash
AICORE_DEPLOYMENT_ID=your-deployment-id AICORE_RESOURCE_GROUP=your-resource-group opencode
```
:::note
These settings are optional and should be configured according to your SAP AI Core setup.
:::
5. Run the `/models` command to select from 40+ available models.
```txt
/models
```
---
### OVHcloud AI Endpoints
1. Head over to the [OVHcloud panel](https://ovh.com/manager). Navigate to the **Public Cloud** section, then **AI & Machine Learning** > **AI Endpoints**, and in the **API Keys** tab, click **Create a new API key**.
2. Run the `/connect` command and search for **OVHcloud AI Endpoints**.
```txt
/connect
```
3. Enter your OVHcloud AI Endpoints API key.
```txt
┌ API key
└ enter
```
4. Run the `/models` command to select a model like _gpt-oss-120b_.
```txt
/models
```
---
### Scaleway
To use [Scaleway Generative APIs](https://www.scaleway.com/en/docs/generative-apis/) with opencode:
1. Head over to the [Scaleway Console IAM settings](https://console.scaleway.com/iam/api-keys) to generate a new API key.
2. Run the `/connect` command and search for **Scaleway**.
```txt
/connect
```
3. Enter your Scaleway API key.
```txt
┌ API key
└ enter
```
4. Run the `/models` command to select a model like _devstral-2-123b-instruct-2512_ or _gpt-oss-120b_.
```txt
/models
```
---
### Together AI
1. Head over to the [Together AI console](https://api.together.ai), create an account, and click **Add Key**.
2. Run the `/connect` command and search for **Together AI**.
```txt
/connect
```
3. Enter your Together AI API key.
```txt
┌ API key
└ enter
```
4. Run the `/models` command to select a model like _Kimi K2 Instruct_.
```txt
/models
```
---
### Venice AI
1. Head over to the [Venice AI console](https://venice.ai), create an account, and generate an API key.
2. Run the `/connect` command and search for **Venice AI**.
```txt
/connect
```
3. Enter your Venice AI API key.
```txt
┌ API key
└ enter
```
4. Run the `/models` command to select a model like _Llama 3.3 70B_.
```txt
/models
```
---
### Vercel AI Gateway
Vercel AI Gateway lets you access models from OpenAI, Anthropic, Google, xAI, and more through a unified endpoint. Models are offered at list price with no markup.
1. Head over to the [Vercel dashboard](https://vercel.com/), navigate to the **AI Gateway** tab, and click **API keys** to create a new API key.
2. Run the `/connect` command and search for **Vercel AI Gateway**.
```txt
/connect
```
3. Enter your Vercel AI Gateway API key.
```txt
┌ API key
└ enter
```
4. Run the `/models` command to select a model.
```txt
/models
```
You can also customize models through your opencode config. Here's an example of specifying provider routing order.
```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "vercel": {
      "models": {
        "anthropic/claude-sonnet-4": {
          "options": {
            "order": ["anthropic", "vertex"]
          }
        }
      }
    }
  }
}
```
Some useful routing options:
| Option | Description |
| ------------------- | ---------------------------------------------------- |
| `order` | Provider sequence to try |
| `only` | Restrict to specific providers |
| `zeroDataRetention` | Only use providers with zero data retention policies |
---
### xAI
1. Head over to the [xAI console](https://console.x.ai/), create an account, and generate an API key.
2. Run the `/connect` command and search for **xAI**.
```txt
/connect
```
3. Enter your xAI API key.
```txt
┌ API key
└ enter
```
4. Run the `/models` command to select a model like _Grok Beta_.
```txt
/models
```
---
### Z.AI
1. Head over to the [Z.AI API console](https://z.ai/manage-apikey/apikey-list), create an account, and click **Create a new API key**.
2. Run the `/connect` command and search for **Z.AI**.
```txt
/connect
```
If you are subscribed to the **GLM Coding Plan**, select **Z.AI Coding Plan**.
3. Enter your Z.AI API key.
```txt
┌ API key
└ enter
```
4. Run the `/models` command to select a model like _GLM-4.7_.
```txt
/models
```
---
### ZenMux
1. Head over to the [ZenMux dashboard](https://zenmux.ai/settings/keys), click **Create API Key**, and copy the key.
2. Run the `/connect` command and search for ZenMux.
```txt
/connect
```
3. Enter the API key for the provider.
```txt
┌ API key
└ enter
```
4. Many ZenMux models are preloaded by default. Run the `/models` command to select the one you want.
```txt
/models
```
You can also add additional models through your opencode config.
```json title="opencode.json" {6}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "zenmux": {
      "models": {
        "somecoolnewmodel": {}
      }
    }
  }
}
```
---
## Custom provider
To add any **OpenAI-compatible** provider that's not listed in the `/connect` command:
:::tip
You can use any OpenAI-compatible provider with opencode. Most modern AI providers offer OpenAI-compatible APIs.
:::
1. Run the `/connect` command and scroll down to **Other**.
```bash
$ /connect
┌ Add credential
◆ Select provider
│ ...
│ ● Other
```
2. Enter a unique ID for the provider.
```bash
$ /connect
┌ Add credential
◇ Enter provider id
│ myprovider
```
:::note
Choose a memorable ID; you'll use this in your config file.
:::
3. Enter your API key for the provider.
```bash
$ /connect
┌ Add credential
▲ This only stores a credential for myprovider - you will need to configure it in opencode.json, check the docs for examples.
◇ Enter your API key
│ sk-...
```
4. Create or update your `opencode.json` file in your project directory:
```json title="opencode.json" ""myprovider"" {5-15}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "myprovider": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "My AI Provider Display Name",
      "options": {
        "baseURL": "https://api.myprovider.com/v1"
      },
      "models": {
        "my-model-name": {
          "name": "My Model Display Name"
        }
      }
    }
  }
}
```
Here are the configuration options:
- **npm**: AI SDK package to use, `@ai-sdk/openai-compatible` for OpenAI-compatible providers.
- **name**: Display name in the UI.
- **models**: Available models.
- **options.baseURL**: API endpoint URL.
- **options.apiKey**: Optionally set the API key, if not using auth.
- **options.headers**: Optionally set custom headers.
More on the advanced options in the example below.
5. Run the `/models` command and your custom provider and models will appear in the selection list.
---
##### Example
Here's an example setting the `apiKey`, `headers`, and model `limit` options.
```json title="opencode.json" {9,11,17-20}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "myprovider": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "My AI Provider Display Name",
      "options": {
        "baseURL": "https://api.myprovider.com/v1",
        "apiKey": "{env:ANTHROPIC_API_KEY}",
        "headers": {
          "Authorization": "Bearer custom-token"
        }
      },
      "models": {
        "my-model-name": {
          "name": "My Model Display Name",
          "limit": {
            "context": 200000,
            "output": 65536
          }
        }
      }
    }
  }
}
```
Configuration details:
- **apiKey**: Set using `env` variable syntax, [learn more](/docs/config#env-vars).
- **headers**: Custom headers sent with each request.
- **limit.context**: Maximum input tokens the model accepts.
- **limit.output**: Maximum tokens the model can generate.
The `limit` fields allow OpenCode to understand how much context you have left. Standard providers pull these from models.dev automatically.
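As a rough illustration of what these limits enable (this is a sketch, not opencode's actual accounting), the remaining input budget is the context limit minus the tokens already used, while reserving room for the response:
```python
def remaining_context(context_limit: int, used_tokens: int, output_limit: int) -> int:
    """Tokens still available for input, reserving room for the model's output."""
    return max(context_limit - used_tokens - output_limit, 0)

# With the limits from the example above: 200k context, 64k output
print(remaining_context(200_000, 100_000, 65_536))  # → 34464
```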
---
## Troubleshooting
If you are having trouble configuring a provider, check the following:
1. **Check the auth setup**: Run `opencode auth list` to see if the credentials for the provider are added to your config. This doesn't apply to providers like Amazon Bedrock, which rely on environment variables for their auth.
2. For custom providers, check your opencode config and make sure that:
- The provider ID used in the `/connect` command matches the ID in your opencode config.
- The right npm package is used for the provider. For example, use `@ai-sdk/cerebras` for Cerebras, and `@ai-sdk/openai-compatible` for all other OpenAI-compatible providers.
- The correct API endpoint is set in the `options.baseURL` field.
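For the `options.baseURL` check in particular, a common mistake is dropping the `/v1` suffix that most OpenAI-compatible APIs expect. A small standalone sketch (not part of opencode) for sanity-checking a URL before it goes in your config:
```python
from urllib.parse import urlparse

def base_url_problems(url: str) -> list[str]:
    """Flag common mistakes in an OpenAI-compatible baseURL."""
    parts = urlparse(url)
    problems = []
    if parts.scheme not in ("http", "https"):
        problems.append("scheme should be http or https")
    if not parts.netloc:
        problems.append("missing host")
    if not parts.path.rstrip("/").endswith("/v1"):
        problems.append("most OpenAI-compatible endpoints end in /v1")
    return problems

print(base_url_problems("https://api.myprovider.com/v1"))  # → []
print(base_url_problems("api.myprovider.com"))
```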