- ---
- title: Providers
- description: Using any LLM provider in OpenCode.
- ---
- import config from "../../../config.mjs"
- export const console = config.console
- OpenCode uses the [AI SDK](https://ai-sdk.dev/) and [Models.dev](https://models.dev) to support **75+ LLM providers**, including locally run models.
- To add a provider you need to:
- 1. Add the API keys for the provider using the `/connect` command.
- 2. Configure the provider in your OpenCode config.
- ---
- ### Credentials
- When you add a provider's API keys with the `/connect` command, they are stored
- in `~/.local/share/opencode/auth.json`.
- ---
- ### Config
- You can customize the providers through the `provider` section in your OpenCode
- config.
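- For example, a provider entry can set a display name and define the models you want listed. This is a minimal sketch; the model ID is taken from the Helicone example later on this page and is illustrative:
- ```json title="opencode.json"
- {
-   "$schema": "https://opencode.ai/config.json",
-   "provider": {
-     "anthropic": {
-       "name": "Anthropic",
-       "models": {
-         "claude-sonnet-4-20250514": {
-           "name": "Claude Sonnet 4"
-         }
-       }
-     }
-   }
- }
- ```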
- ---
- #### Base URL
- You can customize the base URL for any provider by setting the `baseURL` option. This is useful when using proxy services or custom endpoints.
- ```json title="opencode.json" {6}
- {
- "$schema": "https://opencode.ai/config.json",
- "provider": {
- "anthropic": {
- "options": {
- "baseURL": "https://api.anthropic.com/v1"
- }
- }
- }
- }
- ```
- ---
- ## OpenCode Zen
- OpenCode Zen is a list of models provided by the OpenCode team that have been
- tested and verified to work well with OpenCode. [Learn more](/docs/zen).
- :::tip
- If you are new, we recommend starting with OpenCode Zen.
- :::
- 1. Run the `/connect` command in the TUI, select opencode, and head to [opencode.ai/auth](https://opencode.ai/auth).
- ```txt
- /connect
- ```
- 2. Sign in, add your billing details, and copy your API key.
- 3. Paste your API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run `/models` in the TUI to see the list of models we recommend.
- ```txt
- /models
- ```
- It works like any other provider in OpenCode and is completely optional to use.
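- Once you're connected, you can optionally set a Zen model as your default using the top-level `model` key in your config. A sketch; the model ID below is a placeholder, so use one of the IDs shown by `/models`:
- ```json title="opencode.json"
- {
-   "$schema": "https://opencode.ai/config.json",
-   "model": "opencode/your-model-id"
- }
- ```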
- ---
- ## Directory
- Let's look at some of the providers in detail.
- :::note
- Don't see a provider here? Submit a PR.
- :::
- ---
- ### 302.AI
- 1. Head over to the [302.AI console](https://302.ai/), create an account, and generate an API key.
- 2. Run the `/connect` command and search for **302.AI**.
- ```txt
- /connect
- ```
- 3. Enter your 302.AI API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model.
- ```txt
- /models
- ```
- ---
- ### Amazon Bedrock
- To use Amazon Bedrock with OpenCode:
- 1. Head over to the **Model catalog** in the Amazon Bedrock console and request
- access to the models you want.
- :::tip
- You need to have access to the model you want in Amazon Bedrock.
- :::
- 2. **Configure authentication** using one of the following methods:
- #### Environment Variables (Quick Start)
- Set one of these environment variables while running opencode:
- ```bash
- # Option 1: Using AWS access keys
- AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY opencode
- # Option 2: Using named AWS profile
- AWS_PROFILE=my-profile opencode
- # Option 3: Using Bedrock bearer token
- AWS_BEARER_TOKEN_BEDROCK=XXX opencode
- ```
- Or add them to your bash profile:
- ```bash title="~/.bash_profile"
- export AWS_PROFILE=my-dev-profile
- export AWS_REGION=us-east-1
- ```
- #### Configuration File (Recommended)
- For project-specific or persistent configuration, use `opencode.json`:
- ```json title="opencode.json"
- {
- "$schema": "https://opencode.ai/config.json",
- "provider": {
- "amazon-bedrock": {
- "options": {
- "region": "us-east-1",
- "profile": "my-aws-profile"
- }
- }
- }
- }
- ```
- **Available options:**
- - `region` - AWS region (e.g., `us-east-1`, `eu-west-1`)
- - `profile` - AWS named profile from `~/.aws/credentials`
- - `endpoint` - Custom endpoint URL for VPC endpoints (alias for generic `baseURL` option)
- :::tip
- Configuration file options take precedence over environment variables.
- :::
- #### Advanced: VPC Endpoints
- If you're using VPC endpoints for Bedrock:
- ```json title="opencode.json"
- {
- "$schema": "https://opencode.ai/config.json",
- "provider": {
- "amazon-bedrock": {
- "options": {
- "region": "us-east-1",
- "profile": "production",
- "endpoint": "https://bedrock-runtime.us-east-1.vpce-xxxxx.amazonaws.com"
- }
- }
- }
- }
- ```
- :::note
- The `endpoint` option is an alias for the generic `baseURL` option, using AWS-specific terminology. If both `endpoint` and `baseURL` are specified, `endpoint` takes precedence.
- :::
- #### Authentication Methods
- - **`AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`**: Create an IAM user and generate access keys in the AWS Console
- - **`AWS_PROFILE`**: Use named profiles from `~/.aws/credentials`. First configure with `aws configure --profile my-profile` or `aws sso login`
- - **`AWS_BEARER_TOKEN_BEDROCK`**: Generate long-term API keys from the Amazon Bedrock console
- - **`AWS_WEB_IDENTITY_TOKEN_FILE` / `AWS_ROLE_ARN`**: For EKS IRSA (IAM Roles for Service Accounts) or other Kubernetes environments with OIDC federation. These environment variables are automatically injected by Kubernetes when using service account annotations.
- #### Authentication Precedence
- Amazon Bedrock uses the following authentication priority:
- 1. **Bearer Token** - `AWS_BEARER_TOKEN_BEDROCK` environment variable or token from `/connect` command
- 2. **AWS Credential Chain** - Profile, access keys, shared credentials, IAM roles, Web Identity Tokens (EKS IRSA), instance metadata
- :::note
- When a bearer token is set (via `/connect` or `AWS_BEARER_TOKEN_BEDROCK`), it takes precedence over all AWS credential methods including configured profiles.
- :::
- 3. Run the `/models` command to select the model you want.
- ```txt
- /models
- ```
- ---
- ### Anthropic
- We recommend signing up for [Claude Pro](https://www.anthropic.com/news/claude-pro) or [Max](https://www.anthropic.com/max).
- 1. Once you've signed up, run the `/connect` command and select Anthropic.
- ```txt
- /connect
- ```
- 2. Here you can select the **Claude Pro/Max** option and it'll open your browser
- and ask you to authenticate.
- ```txt
- ┌ Select auth method
- │
- │ Claude Pro/Max
- │ Create an API Key
- │ Manually enter API Key
- └
- ```
- 3. Now all the Anthropic models should be available when you use the `/models` command.
- ```txt
- /models
- ```
- ##### Using API keys
- You can also select **Create an API Key** if you don't have a Pro/Max subscription. It'll also open your browser, ask you to log in to Anthropic, and give you a code to paste in your terminal.
- Or if you already have an API key, you can select **Manually enter API Key** and paste it in your terminal.
- ---
- ### Azure OpenAI
- :::note
- If you encounter "I'm sorry, but I cannot assist with that request" errors, try changing the content filter from **DefaultV2** to **Default** in your Azure resource.
- :::
- 1. Head over to the [Azure portal](https://portal.azure.com/) and create an **Azure OpenAI** resource. You'll need:
- - **Resource name**: This becomes part of your API endpoint (`https://RESOURCE_NAME.openai.azure.com/`)
- - **API key**: Either `KEY 1` or `KEY 2` from your resource
- 2. Go to [Azure AI Foundry](https://ai.azure.com/) and deploy a model.
- :::note
- The deployment name must match the model name for opencode to work properly.
- :::
- 3. Run the `/connect` command and search for **Azure**.
- ```txt
- /connect
- ```
- 4. Enter your API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 5. Set your resource name as an environment variable:
- ```bash
- AZURE_RESOURCE_NAME=XXX opencode
- ```
- Or add it to your bash profile:
- ```bash title="~/.bash_profile"
- export AZURE_RESOURCE_NAME=XXX
- ```
- 6. Run the `/models` command to select your deployed model.
- ```txt
- /models
- ```
- ---
- ### Azure Cognitive Services
- 1. Head over to the [Azure portal](https://portal.azure.com/) and create an **Azure OpenAI** resource. You'll need:
- - **Resource name**: This becomes part of your API endpoint (`https://AZURE_COGNITIVE_SERVICES_RESOURCE_NAME.cognitiveservices.azure.com/`)
- - **API key**: Either `KEY 1` or `KEY 2` from your resource
- 2. Go to [Azure AI Foundry](https://ai.azure.com/) and deploy a model.
- :::note
- The deployment name must match the model name for opencode to work properly.
- :::
- 3. Run the `/connect` command and search for **Azure Cognitive Services**.
- ```txt
- /connect
- ```
- 4. Enter your API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 5. Set your resource name as an environment variable:
- ```bash
- AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX opencode
- ```
- Or add it to your bash profile:
- ```bash title="~/.bash_profile"
- export AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX
- ```
- 6. Run the `/models` command to select your deployed model.
- ```txt
- /models
- ```
- ---
- ### Baseten
- 1. Head over to [Baseten](https://app.baseten.co/), create an account, and generate an API key.
- 2. Run the `/connect` command and search for **Baseten**.
- ```txt
- /connect
- ```
- 3. Enter your Baseten API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model.
- ```txt
- /models
- ```
- ---
- ### Cerebras
- 1. Head over to the [Cerebras console](https://inference.cerebras.ai/), create an account, and generate an API key.
- 2. Run the `/connect` command and search for **Cerebras**.
- ```txt
- /connect
- ```
- 3. Enter your Cerebras API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model like _Qwen 3 Coder 480B_.
- ```txt
- /models
- ```
- ---
- ### Cloudflare AI Gateway
- Cloudflare AI Gateway lets you access models from OpenAI, Anthropic, Workers AI, and more through a unified endpoint. With [Unified Billing](https://developers.cloudflare.com/ai-gateway/features/unified-billing/) you don't need separate API keys for each provider.
- 1. Head over to the [Cloudflare dashboard](https://dash.cloudflare.com/), navigate to **AI** > **AI Gateway**, and create a new gateway.
- 2. Set your Account ID and Gateway ID as environment variables.
- ```bash title="~/.bash_profile"
- export CLOUDFLARE_ACCOUNT_ID=your-32-character-account-id
- export CLOUDFLARE_GATEWAY_ID=your-gateway-id
- ```
- 3. Run the `/connect` command and search for **Cloudflare AI Gateway**.
- ```txt
- /connect
- ```
- 4. Enter your Cloudflare API token.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- Or set it as an environment variable.
- ```bash title="~/.bash_profile"
- export CLOUDFLARE_API_TOKEN=your-api-token
- ```
- 5. Run the `/models` command to select a model.
- ```txt
- /models
- ```
- You can also add models through your opencode config.
- ```json title="opencode.json"
- {
- "$schema": "https://opencode.ai/config.json",
- "provider": {
- "cloudflare-ai-gateway": {
- "models": {
- "openai/gpt-4o": {},
- "anthropic/claude-sonnet-4": {}
- }
- }
- }
- }
- ```
- ---
- ### Cortecs
- 1. Head over to the [Cortecs console](https://cortecs.ai/), create an account, and generate an API key.
- 2. Run the `/connect` command and search for **Cortecs**.
- ```txt
- /connect
- ```
- 3. Enter your Cortecs API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model like _Kimi K2 Instruct_.
- ```txt
- /models
- ```
- ---
- ### DeepSeek
- 1. Head over to the [DeepSeek console](https://platform.deepseek.com/), create an account, and click **Create new API key**.
- 2. Run the `/connect` command and search for **DeepSeek**.
- ```txt
- /connect
- ```
- 3. Enter your DeepSeek API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a DeepSeek model like _DeepSeek Reasoner_.
- ```txt
- /models
- ```
- ---
- ### Deep Infra
- 1. Head over to the [Deep Infra dashboard](https://deepinfra.com/dash), create an account, and generate an API key.
- 2. Run the `/connect` command and search for **Deep Infra**.
- ```txt
- /connect
- ```
- 3. Enter your Deep Infra API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model.
- ```txt
- /models
- ```
- ---
- ### Firmware
- 1. Head over to the [Firmware dashboard](https://app.firmware.ai/signup), create an account, and generate an API key.
- 2. Run the `/connect` command and search for **Firmware**.
- ```txt
- /connect
- ```
- 3. Enter your Firmware API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model.
- ```txt
- /models
- ```
- ---
- ### Fireworks AI
- 1. Head over to the [Fireworks AI console](https://app.fireworks.ai/), create an account, and click **Create API Key**.
- 2. Run the `/connect` command and search for **Fireworks AI**.
- ```txt
- /connect
- ```
- 3. Enter your Fireworks AI API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model like _Kimi K2 Instruct_.
- ```txt
- /models
- ```
- ---
- ### GitLab Duo
- GitLab Duo provides AI-powered agentic chat with native tool calling capabilities through GitLab's Anthropic proxy.
- 1. Run the `/connect` command and select GitLab.
- ```txt
- /connect
- ```
- 2. Choose your authentication method:
- ```txt
- ┌ Select auth method
- │
- │ OAuth (Recommended)
- │ Personal Access Token
- └
- ```
- #### Using OAuth (Recommended)
- Select **OAuth** and your browser will open for authorization.
- #### Using Personal Access Token
- 1. Go to [GitLab User Settings > Access Tokens](https://gitlab.com/-/user_settings/personal_access_tokens)
- 2. Click **Add new token**
- 3. Name: `OpenCode`, Scopes: `api`
- 4. Copy the token (starts with `glpat-`)
- 5. Enter it in the terminal
- 3. Run the `/models` command to see available models.
- ```txt
- /models
- ```
- Three Claude-based models are available:
- - **duo-chat-haiku-4-5** (Default) - Fast responses for quick tasks
- - **duo-chat-sonnet-4-5** - Balanced performance for most workflows
- - **duo-chat-opus-4-5** - Most capable for complex analysis
- ##### Self-Hosted GitLab
- For self-hosted GitLab instances:
- ```bash
- GITLAB_INSTANCE_URL=https://gitlab.company.com GITLAB_TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx opencode
- ```
- Or add to your bash profile:
- ```bash title="~/.bash_profile"
- export GITLAB_INSTANCE_URL=https://gitlab.company.com
- export GITLAB_TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx
- ```
- ##### Configuration
- Customize through `opencode.json`:
- ```json title="opencode.json"
- {
- "$schema": "https://opencode.ai/config.json",
- "provider": {
- "gitlab": {
- "options": {
- "instanceUrl": "https://gitlab.com",
- "featureFlags": {
- "duo_agent_platform_agentic_chat": true,
- "duo_agent_platform": true
- }
- }
- }
- }
- }
- ```
- ##### GitLab API Tools (Optional)
- To access GitLab tools (merge requests, issues, pipelines, CI/CD, etc.):
- ```json title="opencode.json"
- {
- "$schema": "https://opencode.ai/config.json",
- "plugin": ["@gitlab/opencode-gitlab-plugin"]
- }
- ```
- This plugin provides comprehensive GitLab repository management capabilities including MR reviews, issue tracking, pipeline monitoring, and more.
- ---
- ### GitHub Copilot
- To use your GitHub Copilot subscription with opencode:
- :::note
- Some models might require a [Pro+ subscription](https://github.com/features/copilot/plans).
- Some models need to be manually enabled in your [GitHub Copilot settings](https://docs.github.com/en/copilot/how-tos/use-ai-models/configure-access-to-ai-models#setup-for-individual-use).
- :::
- 1. Run the `/connect` command and search for GitHub Copilot.
- ```txt
- /connect
- ```
- 2. Navigate to [github.com/login/device](https://github.com/login/device) and enter the code.
- ```txt
- ┌ Login with GitHub Copilot
- │
- │ https://github.com/login/device
- │
- │ Enter code: 8F43-6FCF
- │
- └ Waiting for authorization...
- ```
- 3. Now run the `/models` command to select the model you want.
- ```txt
- /models
- ```
- ---
- ### Google Vertex AI
- To use Google Vertex AI with OpenCode:
- 1. Head over to the **Model Garden** in the Google Cloud Console and check the
- models available in your region.
- :::note
- You need to have a Google Cloud project with Vertex AI API enabled.
- :::
- 2. Set the required environment variables:
- - `GOOGLE_CLOUD_PROJECT`: Your Google Cloud project ID
- - `VERTEX_LOCATION` (optional): The region for Vertex AI (defaults to `global`)
- - Authentication (choose one):
- - `GOOGLE_APPLICATION_CREDENTIALS`: Path to your service account JSON key file
- - Authenticate using gcloud CLI: `gcloud auth application-default login`
- Set them while running opencode.
- ```bash
- GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json GOOGLE_CLOUD_PROJECT=your-project-id opencode
- ```
- Or add them to your bash profile.
- ```bash title="~/.bash_profile"
- export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
- export GOOGLE_CLOUD_PROJECT=your-project-id
- export VERTEX_LOCATION=global
- ```
- :::tip
- The `global` region improves availability and reduces errors at no extra cost. Use regional endpoints (e.g., `us-central1`) for data residency requirements. [Learn more](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-partner-models#regional_and_global_endpoints)
- :::
- 3. Run the `/models` command to select the model you want.
- ```txt
- /models
- ```
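- If you'd rather keep project settings in your config than in environment variables, you can try the provider options block. This is a sketch that assumes the `google-vertex` provider ID and that it accepts `project` and `location` options; verify these names against the config schema:
- ```json title="opencode.json"
- {
-   "$schema": "https://opencode.ai/config.json",
-   "provider": {
-     "google-vertex": {
-       "options": {
-         "project": "your-project-id",
-         "location": "global"
-       }
-     }
-   }
- }
- ```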
- ---
- ### Groq
- 1. Head over to the [Groq console](https://console.groq.com/), click **Create API Key**, and copy the key.
- 2. Run the `/connect` command and search for Groq.
- ```txt
- /connect
- ```
- 3. Enter the API key for the provider.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select the one you want.
- ```txt
- /models
- ```
- ---
- ### Hugging Face
- [Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers) provides access to open models supported by 17+ providers.
- 1. Head over to [Hugging Face settings](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained) to create a token with permission to make calls to Inference Providers.
- 2. Run the `/connect` command and search for **Hugging Face**.
- ```txt
- /connect
- ```
- 3. Enter your Hugging Face token.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model like _Kimi-K2-Instruct_ or _GLM-4.6_.
- ```txt
- /models
- ```
- ---
- ### Helicone
- [Helicone](https://helicone.ai) is an LLM observability platform that provides logging, monitoring, and analytics for your AI applications. The Helicone AI Gateway routes your requests to the appropriate provider automatically based on the model.
- 1. Head over to [Helicone](https://helicone.ai), create an account, and generate an API key from your dashboard.
- 2. Run the `/connect` command and search for **Helicone**.
- ```txt
- /connect
- ```
- 3. Enter your Helicone API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model.
- ```txt
- /models
- ```
- For more providers and advanced features like caching and rate limiting, check the [Helicone documentation](https://docs.helicone.ai).
- #### Optional Configs
- In the event you see a feature or model from Helicone that isn't configured automatically through opencode, you can always configure it yourself.
- Here's [Helicone's Model Directory](https://helicone.ai/models), you'll need this to grab the IDs of the models you want to add.
- ```jsonc title="~/.config/opencode/opencode.jsonc"
- {
- "$schema": "https://opencode.ai/config.json",
- "provider": {
- "helicone": {
- "npm": "@ai-sdk/openai-compatible",
- "name": "Helicone",
- "options": {
- "baseURL": "https://ai-gateway.helicone.ai",
- },
- "models": {
- "gpt-4o": {
- // Model ID (from Helicone's model directory page)
- "name": "GPT-4o", // Your own custom name for the model
- },
- "claude-sonnet-4-20250514": {
- "name": "Claude Sonnet 4",
- },
- },
- },
- },
- }
- ```
- #### Custom Headers
- Helicone supports custom headers for features like caching, user tracking, and session management. Add them to your provider config using `options.headers`:
- ```jsonc title="~/.config/opencode/opencode.jsonc"
- {
- "$schema": "https://opencode.ai/config.json",
- "provider": {
- "helicone": {
- "npm": "@ai-sdk/openai-compatible",
- "name": "Helicone",
- "options": {
- "baseURL": "https://ai-gateway.helicone.ai",
- "headers": {
- "Helicone-Cache-Enabled": "true",
- "Helicone-User-Id": "opencode",
- },
- },
- },
- },
- }
- ```
- ##### Session tracking
- Helicone's [Sessions](https://docs.helicone.ai/features/sessions) feature lets you group related LLM requests together. Use the [opencode-helicone-session](https://github.com/H2Shami/opencode-helicone-session) plugin to automatically log each OpenCode conversation as a session in Helicone.
- ```bash
- npm install -g opencode-helicone-session
- ```
- Add it to your config.
- ```json title="opencode.json"
- {
- "plugin": ["opencode-helicone-session"]
- }
- ```
- The plugin injects `Helicone-Session-Id` and `Helicone-Session-Name` headers into your requests. In Helicone's Sessions page, you'll see each OpenCode conversation listed as a separate session.
- ##### Common Helicone headers
- | Header | Description |
- | -------------------------- | ------------------------------------------------------------- |
- | `Helicone-Cache-Enabled` | Enable response caching (`true`/`false`) |
- | `Helicone-User-Id` | Track metrics by user |
- | `Helicone-Property-[Name]` | Add custom properties (e.g., `Helicone-Property-Environment`) |
- | `Helicone-Prompt-Id` | Associate requests with prompt versions |
- See the [Helicone Header Directory](https://docs.helicone.ai/helicone-headers/header-directory) for all available headers.
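- For instance, to enable caching and tag requests with a custom property (the `Environment` property name and its value here are examples):
- ```jsonc title="~/.config/opencode/opencode.jsonc"
- {
-   "$schema": "https://opencode.ai/config.json",
-   "provider": {
-     "helicone": {
-       "npm": "@ai-sdk/openai-compatible",
-       "name": "Helicone",
-       "options": {
-         "baseURL": "https://ai-gateway.helicone.ai",
-         "headers": {
-           "Helicone-Cache-Enabled": "true",
-           // Example custom property; shows up as a filterable field in Helicone
-           "Helicone-Property-Environment": "development",
-         },
-       },
-     },
-   },
- }
- ```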
- ---
- ### llama.cpp
- You can configure opencode to use local models through [llama.cpp's](https://github.com/ggml-org/llama.cpp) llama-server utility.
- ```json title="opencode.json" "llama.cpp" {5, 6, 8, 10-15}
- {
- "$schema": "https://opencode.ai/config.json",
- "provider": {
- "llama.cpp": {
- "npm": "@ai-sdk/openai-compatible",
- "name": "llama-server (local)",
- "options": {
- "baseURL": "http://127.0.0.1:8080/v1"
- },
- "models": {
- "qwen3-coder:a3b": {
- "name": "Qwen3-Coder: a3b-30b (local)",
- "limit": {
- "context": 128000,
- "output": 65536
- }
- }
- }
- }
- }
- }
- ```
- In this example:
- - `llama.cpp` is the custom provider ID. This can be any string you want.
- - `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
- - `name` is the display name for the provider in the UI.
- - `options.baseURL` is the endpoint for the local server.
- - `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
- ---
- ### IO.NET
- IO.NET offers 17 models optimized for various use cases:
- 1. Head over to the [IO.NET console](https://ai.io.net/), create an account, and generate an API key.
- 2. Run the `/connect` command and search for **IO.NET**.
- ```txt
- /connect
- ```
- 3. Enter your IO.NET API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model.
- ```txt
- /models
- ```
- ---
- ### LM Studio
- You can configure opencode to use local models through LM Studio.
- ```json title="opencode.json" "lmstudio" {5, 6, 8, 10-14}
- {
- "$schema": "https://opencode.ai/config.json",
- "provider": {
- "lmstudio": {
- "npm": "@ai-sdk/openai-compatible",
- "name": "LM Studio (local)",
- "options": {
- "baseURL": "http://127.0.0.1:1234/v1"
- },
- "models": {
- "google/gemma-3n-e4b": {
- "name": "Gemma 3n-e4b (local)"
- }
- }
- }
- }
- }
- ```
- In this example:
- - `lmstudio` is the custom provider ID. This can be any string you want.
- - `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
- - `name` is the display name for the provider in the UI.
- - `options.baseURL` is the endpoint for the local server.
- - `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
- ---
- ### Moonshot AI
- To use Kimi K2 from Moonshot AI:
- 1. Head over to the [Moonshot AI console](https://platform.moonshot.ai/console), create an account, and click **Create API key**.
- 2. Run the `/connect` command and search for **Moonshot AI**.
- ```txt
- /connect
- ```
- 3. Enter your Moonshot API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select _Kimi K2_.
- ```txt
- /models
- ```
- ---
- ### MiniMax
- 1. Head over to the [MiniMax API Console](https://platform.minimax.io/login), create an account, and generate an API key.
- 2. Run the `/connect` command and search for **MiniMax**.
- ```txt
- /connect
- ```
- 3. Enter your MiniMax API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model like _M2.1_.
- ```txt
- /models
- ```
- ---
- ### Nebius Token Factory
- 1. Head over to the [Nebius Token Factory console](https://tokenfactory.nebius.com/), create an account, and click **Add Key**.
- 2. Run the `/connect` command and search for **Nebius Token Factory**.
- ```txt
- /connect
- ```
- 3. Enter your Nebius Token Factory API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model like _Kimi K2 Instruct_.
- ```txt
- /models
- ```
- ---
- ### Ollama
- You can configure opencode to use local models through Ollama.
- ```json title="opencode.json" "ollama" {5, 6, 8, 10-14}
- {
- "$schema": "https://opencode.ai/config.json",
- "provider": {
- "ollama": {
- "npm": "@ai-sdk/openai-compatible",
- "name": "Ollama (local)",
- "options": {
- "baseURL": "http://localhost:11434/v1"
- },
- "models": {
- "llama2": {
- "name": "Llama 2"
- }
- }
- }
- }
- }
- ```
- In this example:
- - `ollama` is the custom provider ID. This can be any string you want.
- - `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
- - `name` is the display name for the provider in the UI.
- - `options.baseURL` is the endpoint for the local server.
- - `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
- :::tip
- If tool calls aren't working, try increasing `num_ctx` in Ollama. Start around 16k to 32k tokens.
- :::
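- One way to apply this tip is to create a model variant with a larger context window using a Modelfile. This is a sketch; the base model and variant names are illustrative.
- ```txt title="Modelfile"
- FROM llama2
- PARAMETER num_ctx 32768
- ```
- Build the variant with `ollama create llama2-32k -f Modelfile`, then use `llama2-32k` as the model ID in your opencode config.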
- ---
- ### Ollama Cloud
- To use Ollama Cloud with opencode:
- 1. Head over to [https://ollama.com/](https://ollama.com/) and sign in or create an account.
- 2. Navigate to **Settings** > **Keys** and click **Add API Key** to generate a new API key.
- 3. Copy the API key for use in opencode.
- 4. Run the `/connect` command and search for **Ollama Cloud**.
- ```txt
- /connect
- ```
- 5. Enter your Ollama Cloud API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 6. **Important**: Before using cloud models in opencode, you must pull the model information locally:
- ```bash
- ollama pull gpt-oss:20b-cloud
- ```
- 7. Run the `/models` command to select your Ollama Cloud model.
- ```txt
- /models
- ```
- ---
- ### OpenAI
- We recommend signing up for [ChatGPT Plus or Pro](https://chatgpt.com/pricing).
- 1. Once you've signed up, run the `/connect` command and select OpenAI.
- ```txt
- /connect
- ```
- 2. Select the **ChatGPT Plus/Pro** option. This will open your browser and ask you to authenticate.
- ```txt
- ┌ Select auth method
- │
- │ ChatGPT Plus/Pro
- │ Manually enter API Key
- └
- ```
- 3. Now all the OpenAI models should be available when you use the `/models` command.
- ```txt
- /models
- ```
- ##### Using API keys
- If you already have an API key, you can select **Manually enter API Key** and paste it in your terminal.
- ---
- ### OpenCode Zen
- OpenCode Zen is a list of tested and verified models provided by the OpenCode team. [Learn more](/docs/zen).
- 1. Sign in to **<a href={console}>OpenCode Zen</a>** and click **Create API Key**.
- 2. Run the `/connect` command and search for **OpenCode Zen**.
- ```txt
- /connect
- ```
- 3. Enter your OpenCode Zen API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model like _Qwen 3 Coder 480B_.
- ```txt
- /models
- ```
- ---
- ### OpenRouter
- 1. Head over to the [OpenRouter dashboard](https://openrouter.ai/settings/keys), click **Create API Key**, and copy the key.
- 2. Run the `/connect` command and search for OpenRouter.
- ```txt
- /connect
- ```
- 3. Enter the API key for the provider.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Many OpenRouter models are preloaded by default. Run the `/models` command to select the one you want.
- ```txt
- /models
- ```
- You can also add additional models through your opencode config.
- ```json title="opencode.json" {6}
- {
-   "$schema": "https://opencode.ai/config.json",
-   "provider": {
-     "openrouter": {
-       "models": {
-         "somecoolnewmodel": {}
-       }
-     }
-   }
- }
- ```
- 5. You can also customize models through your opencode config. Here's an example of pinning the upstream provider routing for a model.
- ```json title="opencode.json"
- {
-   "$schema": "https://opencode.ai/config.json",
-   "provider": {
-     "openrouter": {
-       "models": {
-         "moonshotai/kimi-k2": {
-           "options": {
-             "provider": {
-               "order": ["baseten"],
-               "allow_fallbacks": false
-             }
-           }
-         }
-       }
-     }
-   }
- }
- ```
- ---
- ### SAP AI Core
- SAP AI Core provides access to 40+ models from OpenAI, Anthropic, Google, Amazon, Meta, Mistral, and AI21 through a unified platform.
- 1. Go to your [SAP BTP Cockpit](https://account.hana.ondemand.com/), navigate to your SAP AI Core service instance, and create a service key.
- :::tip
- The service key is a JSON object containing `clientid`, `clientsecret`, `url`, and `serviceurls.AI_API_URL`. You can find your AI Core instance under **Services** > **Instances and Subscriptions** in the BTP Cockpit.
- :::
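- For reference, the service key JSON has roughly this shape; all values below are placeholders.
- ```json
- {
-   "clientid": "sb-your-client-id",
-   "clientsecret": "your-client-secret",
-   "url": "https://your-subaccount.authentication.eu10.hana.ondemand.com",
-   "serviceurls": {
-     "AI_API_URL": "https://api.ai.ml.hana.ondemand.com"
-   }
- }
- ```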
- 2. Run the `/connect` command and search for **SAP AI Core**.
- ```txt
- /connect
- ```
- 3. Enter your service key JSON.
- ```txt
- ┌ Service key
- │
- │
- └ enter
- ```
- Or set the `AICORE_SERVICE_KEY` environment variable:
- ```bash
- AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}' opencode
- ```
- Or add it to your bash profile:
- ```bash title="~/.bash_profile"
- export AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}'
- ```
- 4. Optionally set deployment ID and resource group:
- ```bash
- AICORE_DEPLOYMENT_ID=your-deployment-id AICORE_RESOURCE_GROUP=your-resource-group opencode
- ```
- :::note
- These settings are optional and should be configured according to your SAP AI Core setup.
- :::
- 5. Run the `/models` command to select from 40+ available models.
- ```txt
- /models
- ```
- ---
- ### OVHcloud AI Endpoints
- 1. Head over to the [OVHcloud panel](https://ovh.com/manager). Navigate to **Public Cloud** > **AI & Machine Learning** > **AI Endpoints**, open the **API Keys** tab, and click **Create a new API key**.
- 2. Run the `/connect` command and search for **OVHcloud AI Endpoints**.
- ```txt
- /connect
- ```
- 3. Enter your OVHcloud AI Endpoints API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model like _gpt-oss-120b_.
- ```txt
- /models
- ```
- ---
- ### Scaleway
- To use [Scaleway Generative APIs](https://www.scaleway.com/en/docs/generative-apis/) with opencode:
- 1. Head over to the [Scaleway Console IAM settings](https://console.scaleway.com/iam/api-keys) to generate a new API key.
- 2. Run the `/connect` command and search for **Scaleway**.
- ```txt
- /connect
- ```
- 3. Enter your Scaleway API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model like _devstral-2-123b-instruct-2512_ or _gpt-oss-120b_.
- ```txt
- /models
- ```
- ---
- ### Together AI
- 1. Head over to the [Together AI console](https://api.together.ai), create an account, and click **Add Key**.
- 2. Run the `/connect` command and search for **Together AI**.
- ```txt
- /connect
- ```
- 3. Enter your Together AI API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model like _Kimi K2 Instruct_.
- ```txt
- /models
- ```
- ---
- ### Venice AI
- 1. Head over to the [Venice AI console](https://venice.ai), create an account, and generate an API key.
- 2. Run the `/connect` command and search for **Venice AI**.
- ```txt
- /connect
- ```
- 3. Enter your Venice AI API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model like _Llama 3.3 70B_.
- ```txt
- /models
- ```
- ---
- ### Vercel AI Gateway
- Vercel AI Gateway lets you access models from OpenAI, Anthropic, Google, xAI, and more through a unified endpoint. Models are offered at list price with no markup.
- 1. Head over to the [Vercel dashboard](https://vercel.com/), navigate to the **AI Gateway** tab, and click **API keys** to create a new API key.
- 2. Run the `/connect` command and search for **Vercel AI Gateway**.
- ```txt
- /connect
- ```
- 3. Enter your Vercel AI Gateway API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model.
- ```txt
- /models
- ```
- You can also customize models through your opencode config. Here's an example of specifying provider routing order.
- ```json title="opencode.json"
- {
-   "$schema": "https://opencode.ai/config.json",
-   "provider": {
-     "vercel": {
-       "models": {
-         "anthropic/claude-sonnet-4": {
-           "options": {
-             "order": ["anthropic", "vertex"]
-           }
-         }
-       }
-     }
-   }
- }
- ```
- Some useful routing options:
- | Option | Description |
- | ------------------- | ---------------------------------------------------- |
- | `order` | Provider sequence to try |
- | `only` | Restrict to specific providers |
- | `zeroDataRetention` | Only use providers with zero data retention policies |
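- For example, to route a model only through providers with zero data retention policies, you might use a config like this; the model and provider IDs are illustrative.
- ```json title="opencode.json"
- {
-   "$schema": "https://opencode.ai/config.json",
-   "provider": {
-     "vercel": {
-       "models": {
-         "anthropic/claude-sonnet-4": {
-           "options": {
-             "only": ["anthropic"],
-             "zeroDataRetention": true
-           }
-         }
-       }
-     }
-   }
- }
- ```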
- ---
- ### xAI
- 1. Head over to the [xAI console](https://console.x.ai/), create an account, and generate an API key.
- 2. Run the `/connect` command and search for **xAI**.
- ```txt
- /connect
- ```
- 3. Enter your xAI API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model like _Grok Beta_.
- ```txt
- /models
- ```
- ---
- ### Z.AI
- 1. Head over to the [Z.AI API console](https://z.ai/manage-apikey/apikey-list), create an account, and click **Create a new API key**.
- 2. Run the `/connect` command and search for **Z.AI**.
- ```txt
- /connect
- ```
- If you are subscribed to the **GLM Coding Plan**, select **Z.AI Coding Plan**.
- 3. Enter your Z.AI API key.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Run the `/models` command to select a model like _GLM-4.7_.
- ```txt
- /models
- ```
- ---
- ### ZenMux
- 1. Head over to the [ZenMux dashboard](https://zenmux.ai/settings/keys), click **Create API Key**, and copy the key.
- 2. Run the `/connect` command and search for ZenMux.
- ```txt
- /connect
- ```
- 3. Enter the API key for the provider.
- ```txt
- ┌ API key
- │
- │
- └ enter
- ```
- 4. Many ZenMux models are preloaded by default. Run the `/models` command to select the one you want.
- ```txt
- /models
- ```
- You can also add additional models through your opencode config.
- ```json title="opencode.json" {6}
- {
-   "$schema": "https://opencode.ai/config.json",
-   "provider": {
-     "zenmux": {
-       "models": {
-         "somecoolnewmodel": {}
-       }
-     }
-   }
- }
- ```
- ---
- ## Custom provider
- To add any **OpenAI-compatible** provider that's not listed in the `/connect` command:
- :::tip
- You can use any OpenAI-compatible provider with opencode. Most modern AI providers offer OpenAI-compatible APIs.
- :::
- 1. Run the `/connect` command and scroll down to **Other**.
- ```bash
- $ /connect
- ┌ Add credential
- │
- ◆ Select provider
- │ ...
- │ ● Other
- └
- ```
- 2. Enter a unique ID for the provider.
- ```bash
- $ /connect
- ┌ Add credential
- │
- ◇ Enter provider id
- │ myprovider
- └
- ```
- :::note
- Choose a memorable ID; you'll use it in your config file.
- :::
- 3. Enter your API key for the provider.
- ```bash
- $ /connect
- ┌ Add credential
- │
- ▲ This only stores a credential for myprovider - you will need to configure it in opencode.json, check the docs for examples.
- │
- ◇ Enter your API key
- │ sk-...
- └
- ```
- 4. Create or update your `opencode.json` file in your project directory:
- ```json title="opencode.json" ""myprovider"" {5-15}
- {
-   "$schema": "https://opencode.ai/config.json",
-   "provider": {
-     "myprovider": {
-       "npm": "@ai-sdk/openai-compatible",
-       "name": "My AI Provider Display Name",
-       "options": {
-         "baseURL": "https://api.myprovider.com/v1"
-       },
-       "models": {
-         "my-model-name": {
-           "name": "My Model Display Name"
-         }
-       }
-     }
-   }
- }
- ```
- Here are the configuration options:
- - **npm**: The AI SDK package to use; `@ai-sdk/openai-compatible` works with any OpenAI-compatible provider.
- - **name**: The display name shown in the UI.
- - **models**: The models available from this provider.
- - **options.baseURL**: The API endpoint URL.
- - **options.apiKey**: Optionally set the API key, if you're not storing a credential with `/connect`.
- - **options.headers**: Optionally set custom headers.
- More on the advanced options in the example below.
- 5. Run the `/models` command and your custom provider and models will appear in the selection list.
- ---
- ##### Example
- Here's an example setting the `apiKey`, `headers`, and model `limit` options.
- ```json title="opencode.json" {9,11,17-20}
- {
-   "$schema": "https://opencode.ai/config.json",
-   "provider": {
-     "myprovider": {
-       "npm": "@ai-sdk/openai-compatible",
-       "name": "My AI Provider Display Name",
-       "options": {
-         "baseURL": "https://api.myprovider.com/v1",
-         "apiKey": "{env:MY_PROVIDER_API_KEY}",
-         "headers": {
-           "Authorization": "Bearer custom-token"
-         }
-       },
-       "models": {
-         "my-model-name": {
-           "name": "My Model Display Name",
-           "limit": {
-             "context": 200000,
-             "output": 65536
-           }
-         }
-       }
-     }
-   }
- }
- ```
- Configuration details:
- - **apiKey**: Set using `env` variable syntax, [learn more](/docs/config#env-vars).
- - **headers**: Custom headers sent with each request.
- - **limit.context**: Maximum input tokens the model accepts.
- - **limit.output**: Maximum tokens the model can generate.
- The `limit` fields allow opencode to track how much context you have left. Standard providers pull these from models.dev automatically.
- ---
- ## Troubleshooting
- If you are having trouble with configuring a provider, check the following:
- 1. **Check the auth setup**: Run `opencode auth list` to see if the credentials for the provider are added to your config. This doesn't apply to providers like Amazon Bedrock that rely on environment variables for their auth.
- 2. For custom providers, check your opencode config and make sure that:
- - The provider ID used in the `/connect` command matches the ID in your opencode config.
- - The right npm package is used for the provider. For example, use `@ai-sdk/cerebras` for Cerebras, and `@ai-sdk/openai-compatible` for all other OpenAI-compatible providers.
- - The correct API endpoint is set in the `options.baseURL` field.