

opencode is an OSS terminal coding agent with pluggable providers. Because it supports any OpenAI-compatible or Anthropic-compatible endpoint, you can point it at the LangWatch AI Gateway and pick up budgets, guardrails, policy-rule filters, and trace propagation without changing the agent itself.

Setup

opencode reads ~/.config/opencode/opencode.json (or the project-local opencode.json). Configure the gateway as a custom provider:
{
  "provider": {
    "langwatch-gateway": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LangWatch Gateway",
      "options": {
        "baseURL": "https://gateway.langwatch.ai/v1",
        "apiKey": "{env:LANGWATCH_VK}"
      },
      "models": {
        "gpt-5-mini": { "name": "GPT-5 mini (via gateway)" },
        "claude-haiku-4-5-20251001": { "name": "Claude Haiku 4.5 (via gateway)" },
        "claude-sonnet-4-6": { "name": "Claude Sonnet 4.6 (via gateway)" }
      }
    }
  }
}
Then set your virtual key and run opencode:
export LANGWATCH_VK="lw_vk_live_01HZX..."
opencode
Pick the gateway provider in the model picker (Ctrl-X M).
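Before launching the TUI, you can sanity-check the virtual key against the gateway directly. A minimal sketch, assuming the OpenAI-compatible /v1/chat/completions shape and the model id from the config above (the function name is illustrative, not part of any tooling):

```shell
# Smoke-test the VK against the gateway before starting opencode.
# Endpoint and model id are taken from the provider config above.
gateway_smoke_test() {
  curl -sS "https://gateway.langwatch.ai/v1/chat/completions" \
    -H "Authorization: Bearer $LANGWATCH_VK" \
    -H "Content-Type: application/json" \
    -d '{"model":"gpt-5-mini","messages":[{"role":"user","content":"ping"}],"max_tokens":8}'
}
# gateway_smoke_test   # run after exporting LANGWATCH_VK
```

If this call fails, opencode will fail the same way; fixing it here is faster than debugging through the agent.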

Using Anthropic-shape models via /v1/messages

If you want opencode to hit the Anthropic-native endpoint (preserves cache_control blocks byte-for-byte), use the @ai-sdk/anthropic provider instead:
{
  "provider": {
    "langwatch-gateway-anthropic": {
      "npm": "@ai-sdk/anthropic",
      "name": "LangWatch Gateway (Anthropic)",
      "options": {
        "baseURL": "https://gateway.langwatch.ai",
        "apiKey": "{env:LANGWATCH_VK}"
      },
      "models": {
        "claude-sonnet-4-6": { "name": "Claude Sonnet 4.6 (cache-aware)" }
      }
    }
  }
}
The gateway serves both /v1/chat/completions and /v1/messages on the same VK. Pick the shape opencode knows how to emit for your model family.
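The Anthropic-native shape can be exercised the same way. A hedged sketch, assuming the gateway mirrors Anthropic's /v1/messages contract (x-api-key auth and the anthropic-version header); it may also accept Authorization: Bearer for the same VK, so check your gateway's auth docs:

```shell
# Sketch of a direct /v1/messages call against the gateway.
# Auth and version headers are assumed to mirror Anthropic's API.
messages_smoke_test() {
  curl -sS "https://gateway.langwatch.ai/v1/messages" \
    -H "x-api-key: $LANGWATCH_VK" \
    -H "anthropic-version: 2023-06-01" \
    -H "Content-Type: application/json" \
    -d '{"model":"claude-sonnet-4-6","max_tokens":32,"messages":[{"role":"user","content":"ping"}]}'
}
# messages_smoke_test   # run after exporting LANGWATCH_VK
```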

Trace propagation

opencode passes request-level headers through its provider config. To make gateway spans nest under the opencode trace:
{
  "provider": {
    "langwatch-gateway": {
      "options": {
        "headers": {
          "traceparent": "{env:OPENCODE_TRACEPARENT}",
          "X-LangWatch-Thread-Id": "{env:OPENCODE_SESSION_ID}"
        }
      }
    }
  }
}
If OPENCODE_TRACEPARENT is set by the shell wrapper, the gateway parents its span under that trace and the cost is attributed to the opencode session — no double counting. See SDKs → trace propagation for the header shape.
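A shell wrapper that mints those env vars could look like the sketch below. The traceparent follows the W3C format (`version-traceid-spanid-flags`); the env var names match the headers config above, but the wrapper itself is an assumption, not shipped tooling:

```shell
# Mint a W3C traceparent and a session id, then hand off to opencode
# so the {env:...} placeholders in the provider headers resolve.
trace_id=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')   # 32 hex chars
span_id=$(od -An -N8 -tx1 /dev/urandom | tr -d ' \n')     # 16 hex chars
export OPENCODE_TRACEPARENT="00-${trace_id}-${span_id}-01"
export OPENCODE_SESSION_ID="opencode-$$"
echo "$OPENCODE_TRACEPARENT"
# exec opencode "$@"   # uncomment in a real wrapper script
```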

Governance recipes

Hackday mode — disable cache, warn on spend

{
  "provider": {
    "langwatch-gateway": {
      "options": {
        "headers": { "X-LangWatch-Cache": "disable" }
      }
    }
  }
}
Pair with a principal budget on the hackday VK: window day, limit $20, on_breach: warn. Engineers see a warning but aren’t blocked mid-experiment.
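The budget itself lives in LangWatch, not in opencode.json. As a sketch only, the field names below are inferred from the description above, not a confirmed schema; consult the Budgets docs for the real shape:

```json
{
  "budget": {
    "principal": "hackday-vk",
    "window": "day",
    "limit_usd": 20,
    "on_breach": "warn"
  }
}
```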

Block dangerous shell tools

Set policy_rules.tools on the VK to ["bash", "shell", "exec"]. When opencode requests a tool with one of those names, the gateway rejects it with 403 tool_not_allowed before the request ever reaches the upstream provider. See Policy Rules.
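You can verify the rule is active without involving opencode at all. A hedged sketch, assuming the OpenAI-compatible tools array and the error name from the text above (the function name is illustrative):

```shell
# Send a request declaring a blocked tool; a VK with policy_rules.tools
# covering "bash" should answer 403 before reaching the provider.
check_tool_block() {
  curl -sS -o /dev/null -w '%{http_code}\n' \
    "https://gateway.langwatch.ai/v1/chat/completions" \
    -H "Authorization: Bearer $LANGWATCH_VK" \
    -H "Content-Type: application/json" \
    -d '{"model":"gpt-5-mini","messages":[{"role":"user","content":"ls"}],"tools":[{"type":"function","function":{"name":"bash","parameters":{"type":"object"}}}]}'
}
# check_tool_block   # run after exporting LANGWATCH_VK
```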

Troubleshooting

  • Model picker shows only stock providers — ensure opencode.json is in ~/.config/opencode/ or $PWD. opencode logs the loaded config path on startup.
  • 401 invalid_api_key — {env:LANGWATCH_VK} didn’t expand. Export the env var before launching opencode.
  • Streaming hangs mid-response — check X-LangWatch-Request-Id against the LangWatch trace; gateway emits a terminal event: error on mid-stream failures (it won’t silently switch providers). See Streaming.
  • opencode run hangs indefinitely on custom provider (1.14.x) — confirmed regression in opencode 1.14.22: opencode models correctly lists the custom provider’s models, but opencode run -m custom-provider/model only emits the proxy-startup log line then hangs (no instance creation, no HTTP call to the gateway, exit 124 at any timeout). Reproduces across cwd opencode.json and global ~/.config/opencode/opencode.json, with and without npm: "@ai-sdk/openai-compatible", with and without the package pre-installed at ~/.config/opencode/node_modules/@ai-sdk/openai-compatible/. Workarounds: pin opencode 1.13.x (pre-bug); or run via opencode --attach <server-url> against a long-running opencode serve instance instead of opencode run. Track upstream opencode-ai/opencode#5674 for fix.

Working configuration verified end-to-end

These steps were last validated against opencode 1.13.x + LangWatch AI Gateway feat/ai-gateway tip:
# 1. Mint a VK in LangWatch UI bound to your OpenAI provider credential
export LANGWATCH_VK=lw_vk_live_

# 2. Configure opencode (one-shot)
mkdir -p ~/.config/opencode
cat > ~/.config/opencode/opencode.json <<'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "langwatch": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LangWatch Gateway",
      "options": {
        "baseURL": "https://gateway.langwatch.ai/v1",
        "apiKey": "{env:LANGWATCH_VK}"
      },
      "models": {
        "gpt-4o-mini": { "name": "GPT-4o mini (via gateway)" }
      }
    }
  }
}
EOF

# 3. Run a task (or open the TUI without -m)
opencode run -m langwatch/gpt-4o-mini "Reply with the word: ok"
Trace lands in LangWatch with langwatch.virtual_key_id, gen_ai.usage.*, captured cost, and the full session attributed to the langwatch provider.