opencode is an OSS terminal coding agent with pluggable providers. Because it supports any OpenAI-compatible or Anthropic-compatible endpoint, you can point it at the LangWatch AI Gateway and pick up budgets, guardrails, policy-rule filters, and trace propagation without changing the agent itself.
Setup
opencode reads `~/.config/opencode/opencode.json` (or a project-local `opencode.json`). Configure the gateway as a custom provider:
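A minimal sketch of that provider entry, assuming opencode's custom-provider schema (`npm` / `options` / `models`); the base URL and the model ID are placeholders, so substitute your gateway host and a model enabled on the VK:

```json
{
  "provider": {
    "langwatch-gateway": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LangWatch Gateway",
      "options": {
        "baseURL": "https://your-gateway-host/v1",
        "apiKey": "{env:LANGWATCH_VK}"
      },
      "models": {
        "gpt-4o": { "name": "GPT-4o (via gateway)" }
      }
    }
  }
}
```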
After saving the config, relaunch opencode and pick the gateway model from the model picker (Ctrl-X M).
Using Anthropic-shape models via /v1/messages
If you want opencode to hit the Anthropic-native endpoint (which preserves `cache_control` blocks byte-for-byte), use the `@ai-sdk/anthropic` provider instead:
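The same idea with the Anthropic provider package; this is a sketch, with a placeholder base URL for the gateway's `/v1/messages`-compatible path and a placeholder Claude model ID:

```json
{
  "provider": {
    "langwatch-gateway-anthropic": {
      "npm": "@ai-sdk/anthropic",
      "name": "LangWatch Gateway (Anthropic)",
      "options": {
        "baseURL": "https://your-gateway-host/v1",
        "apiKey": "{env:LANGWATCH_VK}"
      },
      "models": {
        "claude-sonnet-4-5": {}
      }
    }
  }
}
```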
The gateway serves both `/v1/chat/completions` and `/v1/messages` on the same VK. Pick the shape opencode knows how to emit for your model family.
Trace propagation
opencode passes request-level headers through its provider config. To make gateway spans nest under the opencode trace, forward the trace context as a request header from the provider config (sketch below). When `OPENCODE_TRACEPARENT` is set by the shell wrapper, the gateway parents its span under that trace and the cost is attributed to the opencode session, with no double counting. See SDKs → trace propagation for the header shape.
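A sketch of the header plumbing, extending the provider entry from Setup; it assumes the AI SDK provider packages accept a `headers` map in `options` and that opencode's `{env:...}` substitution applies there too. The W3C `traceparent` header is used for illustration; check the SDK trace-propagation page for the exact header the gateway expects.

```json
{
  "provider": {
    "langwatch-gateway": {
      "options": {
        "headers": {
          "traceparent": "{env:OPENCODE_TRACEPARENT}"
        }
      }
    }
  }
}
```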
Governance recipes
Hackday mode — disable cache, warn on spend
Set a principal budget on the hackday VK: `window: day`, `limit: $20`, `on_breach: warn`. Engineers see a warning but aren't blocked mid-experiment.
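Illustratively, the VK config might look like the sketch below. The nesting and field names (`limit_usd`, `cache`) mirror the terms used above but are assumptions; check the Budgets docs for the exact schema.

```json
{
  "cache": false,
  "budget": {
    "principal": {
      "window": "day",
      "limit_usd": 20,
      "on_breach": "warn"
    }
  }
}
```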
Block dangerous shell tools
Set `policy_rules.tools: ["bash", "shell", "exec"]` on the VK. If opencode requests a tool with one of those names, the gateway returns `403 tool_not_allowed` before the call ever leaves the gateway. See Policy Rules.
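As a sketch, assuming `policy_rules` sits at the top level of the VK config, the rule is just the list of blocked tool names:

```json
{
  "policy_rules": {
    "tools": ["bash", "shell", "exec"]
  }
}
```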
Troubleshooting
- Model picker shows only stock providers — make sure `opencode.json` is in `~/.config/opencode/` or `$PWD`; opencode logs the loaded config path on startup.
- `401 invalid_api_key` — `{env:LANGWATCH_VK}` didn't expand. Export the env var before launching opencode.
- Streaming hangs mid-response — check `X-LangWatch-Request-Id` against the LangWatch trace; the gateway emits a terminal `event: error` on mid-stream failures (it won't silently switch providers). See Streaming.
- `opencode run` hangs indefinitely on a custom provider (1.14.x) — confirmed regression in opencode 1.14.22: `opencode models` correctly lists the custom provider's models, but `opencode run -m custom-provider/model` only emits the proxy-startup log line and then hangs (no instance creation, no HTTP call to the gateway, exit 124 at any timeout). Reproduces with both the cwd `opencode.json` and the global `~/.config/opencode/opencode.json`, with and without `npm: "@ai-sdk/openai-compatible"`, and with and without the package pre-installed at `~/.config/opencode/node_modules/@ai-sdk/openai-compatible/`. Workarounds: pin opencode `1.13.x` (pre-bug), or run `opencode --attach <server-url>` against a long-running `opencode serve` instance instead of `opencode run`. Track upstream `opencode-ai/opencode#5674` for the fix.
Working configuration verified end-to-end
These steps were last validated against opencode `1.13.x` plus the LangWatch AI Gateway `feat/ai-gateway` tip. A successful run shows `langwatch.virtual_key_id`, `gen_ai.usage.*`, captured cost, and the full session attributed to the `langwatch-gateway` provider in the LangWatch trace.