⚠️ Experimental — OpenClaw’s diagnostics-otel plugin is under active development. The instrumentation spec is stabilizing but may change. Follow the discussion at PR #11100 for the latest.

The diagnostics-otel plugin emits traces compliant with the OpenTelemetry GenAI Semantic Conventions, which LangWatch can ingest natively.
## Setup
### Get your LangWatch API Key
Go to app.langwatch.ai/authorize to create your account and project, then grab your API key.
### Configure OpenClaw
Add the following to your `~/.openclaw/openclaw.json`:
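The config block itself is missing from this draft. A sketch of what it might look like, using the option names from the Configuration Reference below — the LangWatch endpoint URL, the `Authorization` header format, the nested-object shape of `diagnostics.otel`, and the `plugins` entry are all assumptions, so check the LangWatch and OpenClaw docs for the real values:

```json
{
  "plugins": ["diagnostics-otel"],
  "diagnostics": {
    "otel": {
      "enabled": true,
      "endpoint": "https://app.langwatch.ai/api/otel",
      "headers": { "Authorization": "Bearer YOUR_LANGWATCH_API_KEY" },
      "traces": true,
      "serviceName": "openclaw",
      "captureContent": false
    }
  }
}
```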
If you already have an `openclaw.json`, just merge the `diagnostics` and `plugins` sections into your existing config.

## What You Get
Each agent turn produces a span tree like this:

- Model info — which model was requested and which was used
- Token usage — prompt tokens, completion tokens, cache read/write breakdown
- Latency — duration of each LLM call and tool execution
- Cost — calculated from token usage
- Content — full input/output messages when `captureContent` is enabled
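The tree diagram referenced above is not reproduced in this draft. An illustrative shape — span names here are assumptions, not OpenClaw's actual span names — might look like:

```
agent.turn
├── chat (gen_ai.operation.name = "chat", token usage, latency, cost)
└── tool.execute (tool name, duration, input/output when captureContent is on)
```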
## Content Capture
The `captureContent` flag controls whether message content is included in traces.
When enabled, traces include:
- `gen_ai.input.messages` — the full prompt sent to the model
- `gen_ai.output.messages` — the model’s response
- `gen_ai.system_instructions` — the system prompt
- `gen_ai.request.tools` — tool definitions available to the model
- Tool input/output on execution spans
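As a rough illustration, the GenAI conventions define a structured role/parts format for captured messages. A sketch of what the two message attributes might contain — the exact shape follows the (still-incubating) semantic conventions, and the values are invented:

```json
{
  "gen_ai.input.messages": [
    { "role": "system", "parts": [{ "type": "text", "content": "You are a helpful agent." }] },
    { "role": "user", "parts": [{ "type": "text", "content": "Summarize this repo." }] }
  ],
  "gen_ai.output.messages": [
    { "role": "assistant", "parts": [{ "type": "text", "content": "This repo contains..." }] }
  ]
}
```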
## Configuration Reference
| Option | Description | Default |
|---|---|---|
| `diagnostics.otel.enabled` | Turn OTEL export on/off | `false` |
| `diagnostics.otel.endpoint` | OTLP endpoint URL | — |
| `diagnostics.otel.traces` | Export traces | `true` |
| `diagnostics.otel.metrics` | Export metrics | `true` |
| `diagnostics.otel.logs` | Export logs | `false` |
| `diagnostics.otel.headers` | Auth headers (include your API key) | `{}` |
| `diagnostics.otel.serviceName` | Service name in traces | `"openclaw"` |
| `diagnostics.otel.sampleRate` | Sampling rate (0.0–1.0) | `1.0` |
| `diagnostics.otel.captureContent` | Include message content in traces | `false` |
Use `sampleRate` to control costs. A rate of `0.1` samples 10% of traces.
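For example, sampling could be configured like this — assuming, as above, that the dotted option names map to a nested `diagnostics.otel` object in `openclaw.json`:

```json
{
  "diagnostics": {
    "otel": {
      "sampleRate": 0.1
    }
  }
}
```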
## GenAI Semantic Conventions
The plugin emits traces compliant with the OTEL GenAI semantic conventions:

| Attribute | Description |
|---|---|
| `gen_ai.operation.name` | `"chat"` for LLM inference spans |
| `gen_ai.system` | Provider identifier (e.g. `"anthropic"`, `"openai"`) |
| `gen_ai.request.model` | Model requested |
| `gen_ai.response.model` | Model actually used |
| `gen_ai.usage.input_tokens` | Total input tokens (including cached) |
| `gen_ai.usage.output_tokens` | Completion tokens |
| `gen_ai.usage.cache_read_input_tokens` | Tokens served from cache |
| `gen_ai.usage.cache_creation_input_tokens` | Tokens written to cache |
LLM inference spans use `SPAN_KIND_CLIENT` per the GenAI spec (outbound RPCs to model providers).
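Putting the table together, the attributes on a single LLM inference span might look like this — model name and token counts are illustrative, not real output:

```json
{
  "gen_ai.operation.name": "chat",
  "gen_ai.system": "anthropic",
  "gen_ai.request.model": "example-model",
  "gen_ai.response.model": "example-model",
  "gen_ai.usage.input_tokens": 1200,
  "gen_ai.usage.output_tokens": 350,
  "gen_ai.usage.cache_read_input_tokens": 900,
  "gen_ai.usage.cache_creation_input_tokens": 0
}
```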
For more information, check out the OpenClaw documentation and the diagnostics-otel discussion.