@@ -652,6 +652,44 @@ The `global` region improves availability and reduces errors at no extra cost. U
---
+### llama.cpp
+
+You can configure opencode to use local models through [llama.cpp's](https://github.com/ggml-org/llama.cpp) `llama-server` utility.
+
+```json title="opencode.json" "llama.cpp" {5, 6, 8, 10-14}
+{
+  "$schema": "https://opencode.ai/config.json",
+  "provider": {
+    "llama.cpp": {
+      "npm": "@ai-sdk/openai-compatible",
+      "name": "llama-server (local)",
+      "options": {
+        "baseURL": "http://127.0.0.1:8080/v1"
+      },
+      "models": {
+        "qwen3-coder:a3b": {
+          "name": "Qwen3-Coder: a3b-30b (local)"
+        }
+      },
+      "limit": {
+        "context": 128000,
+        "output": 65536
+      }
+    }
+  }
+}
+```
+
+In this example:
+
+- `llama.cpp` is the custom provider ID. This can be any string you want.
+- `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used since `llama-server` exposes an OpenAI-compatible API.
+- `name` is the display name for the provider in the UI.
+- `options.baseURL` is the endpoint for the local server. By default, `llama-server` listens on `http://127.0.0.1:8080`.
+- `models` is a map of model IDs to their configurations. Each model's `name` is displayed in the model selection list; see the snippet after this list for making one the default.
+- `limit` declares the model's context window and maximum output tokens.
+
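+If you also want opencode to use this model by default, you can point the top-level `model` setting at it. The sketch below assumes the usual `provider_id/model_id` format and reuses the provider and model IDs from the example above:
+
+```json title="opencode.json"
+{
+  "$schema": "https://opencode.ai/config.json",
+  "model": "llama.cpp/qwen3-coder:a3b"
+}
+```
+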
+---
+
### IO.NET

IO.NET offers 17 models optimized for various use cases: