
docs: include "limit" example (#3925)

Co-authored-by: opencode-agent[bot] <opencode-agent[bot]@users.noreply.github.com>
Co-authored-by: rekram1-node <[email protected]>
Matthew Fitzpatrick 3 months ago
commit c9dfe6d964
1 changed file with 15 additions and 5 deletions

packages/web/src/content/docs/providers.mdx (+15 −5)

@@ -482,7 +482,6 @@ To use Google Vertex AI with OpenCode:
 
 4. Run the `/models` command to select a model like _Kimi-K2-Instruct_ or _GLM-4.6_.
 
-
 ---
 
 ### LM Studio
@@ -935,9 +934,9 @@ You can use any OpenAI-compatible provider with opencode. Most modern AI provide
 
 ##### Example
 
-Here's an example setting the `apiKey` and `headers` options.
+Here's an example setting the `apiKey`, `headers`, and model `limit` options.
 
-```json title="opencode.json" {9,11}
+```json title="opencode.json" {9,11,17-20}
 {
   "$schema": "https://opencode.ai/config.json",
   "provider": {
@@ -953,7 +952,11 @@ Here's an example setting the `apiKey` and `headers` options.
       },
       "models": {
         "my-model-name": {
-          "name": "My Model Display Name"
+          "name": "My Model Display Name",
+          "limit": {
+            "context": 200000,
+            "output": 65536
+          }
         }
       }
     }
@@ -961,7 +964,14 @@ Here's an example setting the `apiKey` and `headers` options.
 }
 ```
 
-We are setting the `apiKey` using the `env` variable syntax, [learn more](/docs/config#env-vars).
+Configuration details:
+
+- **apiKey**: Set using `env` variable syntax, [learn more](/docs/config#env-vars).
+- **headers**: Custom headers sent with each request.
+- **limit.context**: Maximum input tokens the model accepts.
+- **limit.output**: Maximum tokens the model can generate.
+
+The `limit` fields allow OpenCode to understand how much context you have left. Standard providers pull these from models.dev automatically.
 
 ---
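
For reference, the added `limit` block from the `+` lines above slots into a provider's `models` entry like this. This is a minimal sketch assembled from the hunks only: the provider id `myprovider` is a placeholder, and the `apiKey`/`headers` options shown earlier in the example are omitted because those lines fall outside the diff context.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "myprovider": {
      "models": {
        "my-model-name": {
          "name": "My Model Display Name",
          "limit": {
            "context": 200000,
            "output": 65536
          }
        }
      }
    }
  }
}
```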