OpenCode
⚠️ Notice: We are in the process of a complete overhaul on the `dontlook` branch; it should be released mid-June. The README below covers the current version.
A powerful terminal-based AI assistant for developers, providing intelligent coding assistance directly in your terminal.
OpenCode is a Go-based CLI application that brings AI assistance to your terminal. It provides a TUI (Terminal User Interface) for interacting with various AI models to help with coding tasks, debugging, and more.
# Install the latest version
curl -fsSL https://opencode.ai/install | bash
# Install a specific version
curl -fsSL https://opencode.ai/install | VERSION=0.1.0 bash
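# Using Homebrew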
brew install sst/tap/opencode
# Using yay
yay -S opencode-bin
# Using paru
paru -S opencode-bin
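# Using Go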
go install github.com/sst/opencode@latest
OpenCode looks for configuration in the following locations:
- `$HOME/.opencode.json`
- `$XDG_CONFIG_HOME/opencode/.opencode.json`
- `./.opencode.json` (local directory)

You can configure OpenCode using environment variables (a shell example follows the table below):
| Environment Variable | Purpose |
|---|---|
| `ANTHROPIC_API_KEY` | For Claude models |
| `OPENAI_API_KEY` | For OpenAI models |
| `GEMINI_API_KEY` | For Google Gemini models |
| `VERTEXAI_PROJECT` | For Google Cloud VertexAI (Gemini) |
| `VERTEXAI_LOCATION` | For Google Cloud VertexAI (Gemini) |
| `GROQ_API_KEY` | For Groq models |
| `AWS_ACCESS_KEY_ID` | For AWS Bedrock (Claude) |
| `AWS_SECRET_ACCESS_KEY` | For AWS Bedrock (Claude) |
| `AWS_REGION` | For AWS Bedrock (Claude) |
| `AZURE_OPENAI_ENDPOINT` | For Azure OpenAI models |
| `AZURE_OPENAI_API_KEY` | For Azure OpenAI models (optional when using Entra ID) |
| `AZURE_OPENAI_API_VERSION` | For Azure OpenAI models |
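For example, you can export the relevant key in your shell before launching OpenCode; the key value below is only a placeholder:

# Provide an API key for the current shell session (placeholder value)
export ANTHROPIC_API_KEY="sk-ant-..."
opencode

A full example `.opencode.json` follows.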
{
"data": {
"directory": ".opencode"
},
"providers": {
"openai": {
"apiKey": "your-api-key",
"disabled": false
},
"anthropic": {
"apiKey": "your-api-key",
"disabled": false
},
"groq": {
"apiKey": "your-api-key",
"disabled": false
},
"openrouter": {
"apiKey": "your-api-key",
"disabled": false
}
},
"agents": {
"primary": {
"model": "claude-3.7-sonnet",
"maxTokens": 5000
},
"task": {
"model": "claude-3.7-sonnet",
"maxTokens": 5000
},
"title": {
"model": "claude-3.7-sonnet",
"maxTokens": 80
}
},
"mcpServers": {
"example": {
"type": "stdio",
"command": "path/to/mcp-server",
"env": [],
"args": []
}
},
"lsp": {
"go": {
"disabled": false,
"command": "gopls"
}
},
"shell": {
"path": "/bin/zsh",
"args": ["-l"]
},
"debug": false,
"debugLSP": false
}
OpenCode supports a variety of AI models from different providers.

To use Bedrock models with OpenCode you need:

- AWS credentials in your environment (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION`); a sample environment setup is shown after the configuration below.
- A correct configuration file. You don't need the `providers` key; instead, prefix each agent's model with `bedrock.` followed by a valid model name. For now, only Claude 3.7 is supported.
{
"agents": {
"primary": {
"model": "bedrock.claude-3.7-sonnet",
"maxTokens": 5000,
"reasoningEffort": ""
},
"task": {
"model": "bedrock.claude-3.7-sonnet",
"maxTokens": 5000,
"reasoningEffort": ""
},
"title": {
"model": "bedrock.claude-3.7-sonnet",
"maxTokens": 80,
"reasoningEffort": ""
}
}
}
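As a quick reference, a minimal Bedrock environment might look like the following sketch; all values are placeholders:

# Example Bedrock credentials (placeholders only)
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
opencode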
# Start OpenCode
opencode
# Start with debug logging
opencode -d
# Start with a specific working directory
opencode -c /path/to/project
You can run OpenCode in non-interactive mode by passing a prompt directly as a command-line argument or by piping text into the command. This is useful for scripting, automation, or when you want a quick answer without launching the full TUI.
# Run a single prompt and print the AI's response to the terminal
opencode -p "Explain the use of context in Go"
# Pipe input to OpenCode (equivalent to using -p flag)
echo "Explain the use of context in Go" | opencode
# Get response in JSON format
opencode -p "Explain the use of context in Go" -f json
# Or with piped input
echo "Explain the use of context in Go" | opencode -f json
# Run without showing the spinner
opencode -p "Explain the use of context in Go" -q
# Or with piped input
echo "Explain the use of context in Go" | opencode -q
# Enable verbose logging to stderr
opencode -p "Explain the use of context in Go" --verbose
# Or with piped input
echo "Explain the use of context in Go" | opencode --verbose
# Restrict the agent to only use specific tools
opencode -p "Explain the use of context in Go" --allowedTools=view,ls,glob
# Or with piped input
echo "Explain the use of context in Go" | opencode --allowedTools=view,ls,glob
# Prevent the agent from using specific tools
opencode -p "Explain the use of context in Go" --excludedTools=bash,edit
# Or with piped input
echo "Explain the use of context in Go" | opencode --excludedTools=bash,edit
In this mode, OpenCode will process your prompt, print the result to standard output, and then exit. All permissions are auto-approved for the session.
You can control which tools the AI assistant has access to in non-interactive mode:
- `--allowedTools`: Comma-separated list of tools that the agent is allowed to use. Only these tools will be available.
- `--excludedTools`: Comma-separated list of tools that the agent is not allowed to use. All other tools will be available.

These flags are mutually exclusive: you can use either `--allowedTools` or `--excludedTools`, but not both at the same time.
OpenCode supports the following output formats in non-interactive mode:
| Format | Description |
|---|---|
| `text` | Plain text output (default) |
| `json` | Output wrapped in a JSON object |
The output format is implemented as a strongly-typed OutputFormat in the codebase, ensuring type safety and validation when processing outputs.
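Because the JSON format wraps the response in a JSON object, it can be piped into other tooling. As a rough sketch (assuming `jq` is installed; the exact shape of the JSON object is not documented here):

# Pretty-print the JSON output with jq (assumes jq is installed)
opencode -p "Explain the use of context in Go" -f json -q | jq .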
| Flag | Short | Description |
|---|---|---|
| `--help` | `-h` | Display help information |
| `--debug` | `-d` | Enable debug mode |
| `--cwd` | `-c` | Set current working directory |
| `--prompt` | `-p` | Run a single prompt in non-interactive mode |
| `--output-format` | `-f` | Output format for non-interactive mode (text, json) |
| `--quiet` | `-q` | Hide spinner in non-interactive mode |
| `--verbose` | | Display logs to stderr in non-interactive mode |
| `--allowedTools` | | Restrict the agent to only use specified tools |
| `--excludedTools` | | Prevent the agent from using specified tools |
Global shortcuts:

| Shortcut | Action |
|---|---|
| `Ctrl+C` | Quit application |
| `Ctrl+?` | Toggle help dialog |
| `?` | Toggle help dialog (when not in editing mode) |
| `Ctrl+L` | View logs |
| `Ctrl+A` | Switch session |
| `Ctrl+K` | Command dialog |
| `Ctrl+O` | Toggle model selection dialog |
| `Esc` | Close current overlay/dialog or return to previous mode |
Chat page:

| Shortcut | Action |
|---|---|
| `Ctrl+N` | Create new session |
| `Ctrl+X` | Cancel current operation/generation |
| `i` | Focus editor (when not in writing mode) |
| `Esc` | Exit writing mode and focus messages |
Editor:

| Shortcut | Action |
|---|---|
| `Ctrl+S` | Send message (when editor is focused) |
| `Enter` or `Ctrl+S` | Send message (when editor is not focused) |
| `Ctrl+E` | Open external editor |
| `Esc` | Blur editor and focus messages |
Session dialog:

| Shortcut | Action |
|---|---|
| `↑` or `k` | Previous session |
| `↓` or `j` | Next session |
| `Enter` | Select session |
| `Esc` | Close dialog |
Model selection dialog:

| Shortcut | Action |
|---|---|
| `↑` or `k` | Move up |
| `↓` or `j` | Move down |
| `←` or `h` | Previous provider |
| `→` or `l` | Next provider |
| `Esc` | Close dialog |
Permission dialog:

| Shortcut | Action |
|---|---|
| `←` or `left` | Switch options left |
| `→` or `right` or `tab` | Switch options right |
| `Enter` or `space` | Confirm selection |
| `a` | Allow permission |
| `A` | Allow permission for session |
| `d` | Deny permission |
Logs page:

| Shortcut | Action |
|---|---|
| `Backspace` or `q` | Return to chat page |
OpenCode's AI assistant has access to various tools to help with coding tasks:
| Tool | Description | Parameters |
|---|---|---|
| `glob` | Find files by pattern | `pattern` (required), `path` (optional) |
| `grep` | Search file contents | `pattern` (required), `path` (optional), `include` (optional), `literal_text` (optional) |
| `ls` | List directory contents | `path` (optional), `ignore` (optional array of patterns) |
| `view` | View file contents | `file_path` (required), `offset` (optional), `limit` (optional) |
| `write` | Write to files | `file_path` (required), `content` (required) |
| `edit` | Edit files | Various parameters for file editing |
| `patch` | Apply patches to files | `file_path` (required), `diff` (required) |
| `diagnostics` | Get diagnostics information | `file_path` (optional) |
| Tool | Description | Parameters |
|---|---|---|
| `bash` | Execute shell commands | `command` (required), `timeout` (optional) |
| `fetch` | Fetch data from URLs | `url` (required), `format` (required), `timeout` (optional) |
| `agent` | Run sub-tasks with the AI agent | `prompt` (required) |
OpenCode allows you to configure the shell used by the bash tool. By default, it uses:
- The `$SHELL` environment variable (if available)
- `/bin/bash` if `$SHELL` is not set

To configure a custom shell, add a `shell` section to your `.opencode.json` configuration file:
{
"shell": {
"path": "/bin/zsh",
"args": ["-l"]
}
}
You can specify any shell executable and custom arguments:
{
"shell": {
"path": "/usr/bin/fish",
"args": []
}
}
OpenCode is built with a modular architecture.
OpenCode supports custom commands that can be created by users to quickly send predefined prompts to the AI assistant.
Custom commands are predefined prompts stored as Markdown files in one of three locations:
User Commands (prefixed with `user:`):

- `$XDG_CONFIG_HOME/opencode/commands/` (typically `~/.config/opencode/commands/` on Linux/macOS), or
- `$HOME/.opencode/commands/`

Project Commands (prefixed with `project:`):

- `<PROJECT DIR>/.opencode/commands/`
Each .md file in these directories becomes a custom command. The file name (without extension) becomes the command ID.
For example, creating a file at ~/.config/opencode/commands/prime-context.md with content:
RUN git ls-files
READ README.md
This creates a command called user:prime-context.
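If you prefer to create the file from the shell, a sketch like the following (using the user commands directory shown above) produces the same command:

# Create the example user command from the shell
mkdir -p ~/.config/opencode/commands
cat > ~/.config/opencode/commands/prime-context.md <<'EOF'
RUN git ls-files
READ README.md
EOF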
OpenCode supports named arguments in custom commands using placeholders in the format $NAME (where NAME consists of uppercase letters, numbers, and underscores, and must start with a letter).
For example:
# Fetch Context for Issue $ISSUE_NUMBER
RUN gh issue view $ISSUE_NUMBER --json title,body,comments
RUN git grep --author="$AUTHOR_NAME" -n .
RUN grep -R "$SEARCH_PATTERN" $DIRECTORY
When you run a command with arguments, OpenCode will prompt you to enter a value for each unique placeholder, so a single command file can be reused with different inputs each time.
You can organize commands in subdirectories:
~/.config/opencode/commands/git/commit.md
This creates a command with ID user:git:commit.
To run a custom command:

1. Press `Ctrl+K` to open the command dialog.
2. Select the command you want to run (prefixed with `user:` or `project:`).

The content of the command file will be sent as a message to the AI assistant.
OpenCode implements the Model Context Protocol (MCP) to extend its capabilities through external tools. MCP provides a standardized way for the AI assistant to interact with external services and tools.
MCP servers are defined in the configuration file under the mcpServers section:
{
"mcpServers": {
"example": {
"type": "stdio",
"command": "path/to/mcp-server",
"env": [],
"args": []
},
"web-example": {
"type": "sse",
"url": "https://example.com/mcp",
"headers": {
"Authorization": "Bearer token"
}
}
}
}
Once configured, MCP tools are automatically available to the AI assistant alongside built-in tools. They follow the same permission model as other tools, requiring user approval before execution.
OpenCode integrates with Language Server Protocol to provide code intelligence features across multiple programming languages.
Language servers are configured in the configuration file under the lsp section:
{
"lsp": {
"go": {
"disabled": false,
"command": "gopls"
},
"typescript": {
"disabled": false,
"command": "typescript-language-server",
"args": ["--stdio"]
}
}
}
The AI assistant can access LSP features through the diagnostics tool, allowing it to see errors and warnings reported by the language server.
While the LSP client implementation supports the full LSP protocol (including completions, hover, definition, etc.), currently only diagnostics are exposed to the AI assistant.
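The language servers themselves are installed separately from OpenCode. As an example (assuming Go and npm are available on your system), the servers referenced in the configuration above can be installed like this:

# Install gopls (Go) and typescript-language-server (TypeScript)
go install golang.org/x/tools/gopls@latest
npm install -g typescript-language-server typescript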
# Clone the repository
git clone https://github.com/sst/opencode.git
cd opencode
# Build
go build -o opencode
# Run
./opencode
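If you want to run the tests locally, the standard Go tooling should work; this is a sketch, so check the repository for any project-specific test setup:

# Run the test suite
go test ./...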
OpenCode gratefully acknowledges the contributions and support of the project's key contributors.
Special thanks to the broader open source community whose tools and libraries have made this project possible.
OpenCode is licensed under the MIT License. See the LICENSE file for details.
Contributions are welcome! Here's how you can contribute:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push the branch (`git push origin feature/amazing-feature`)
5. Open a pull request

Please make sure to update tests as appropriate and follow the existing code style.