Instrumenting Your OpenClaw Agent with LangWatch via OpenTelemetry

Rogerio Chaves

Feb 3, 2026

**Experimental — Feb 2026**

This post documents work-in-progress on OpenClaw's @openclaw/diagnostics-otel plugin. The project is moving fast — the instrumentation spec changed significantly just this week. We believe the current design will be the final shape going forward, but check PR #11100 for the latest discussion and spec details.

OpenClaw now ships a built-in OpenTelemetry exporter (diagnostics-otel) that emits traces compliant with the OpenTelemetry GenAI Semantic Conventions. If you're running an OpenClaw agent, whether as a personal assistant on your laptop or handling production workflows like code reviews, monitoring, and incident response, you can send structured traces to LangWatch with zero custom instrumentation.

This post walks through the setup, the GenAI spec compliance details, and how to get it running.

Why Instrument Your Agent?

We've been running OpenClaw internally at LangWatch as a team assistant, helping us debug production issues fast, review GitHub PRs, and monitor alerts. It's become genuinely valuable. But with that power comes responsibility: as your clawdbot becomes a serious productivity partner, you want to know what it's actually doing.

Which requests worked and which didn't. Tool calls that failed silently. Risky behaviors you want to catch early. The silly prompt that burned 200k tokens for nothing. Cost visibility. The goal is to shape your clawdbot into a reliable copilot you actually trust, and for that, you need observability.

Setup

Enable it in ~/.openclaw/openclaw.json:

{
  "diagnostics": {
    "enabled": true,
    "otel": {
      "enabled": true,
      "endpoint": "https://app.langwatch.ai/api/otel/v1/traces",
      "traces": true,
      "metrics": true,
      "headers": {
        "X-Auth-Token": "sk-lw-YOUR_API_KEY_HERE"
      },
      "serviceName": "my-clawdbot",
      "sampleRate": 1,
      "captureContent": true
    }
  },
  "plugins": {
    "allow": ["diagnostics-otel"],
    "entries": {
      "diagnostics-otel": {
        "enabled": true
      }
    }
  }
}


Get your API key at app.langwatch.ai/authorize.
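If you'd rather generate this config programmatically (for example, to keep the API key out of version control), here's a minimal sketch using only the Python standard library. The key names mirror the config block shown above; the `LANGWATCH_API_KEY` environment variable is an assumption of this sketch, not something OpenClaw reads itself.

```python
import json
import os
import pathlib

# Mirror of the diagnostics-otel config block from this post.
# The API key is pulled from an env var (sketch convention) so the
# placeholder never lands in a dotfile you might commit somewhere.
config = {
    "diagnostics": {
        "enabled": True,
        "otel": {
            "enabled": True,
            "endpoint": "https://app.langwatch.ai/api/otel/v1/traces",
            "traces": True,
            "metrics": True,
            "headers": {
                "X-Auth-Token": os.environ.get(
                    "LANGWATCH_API_KEY", "sk-lw-YOUR_API_KEY_HERE"
                )
            },
            "serviceName": "my-clawdbot",
            "sampleRate": 1,
            "captureContent": True,
        },
    },
    "plugins": {
        "allow": ["diagnostics-otel"],
        "entries": {"diagnostics-otel": {"enabled": True}},
    },
}

# Write it where OpenClaw expects its config.
path = pathlib.Path.home() / ".openclaw" / "openclaw.json"
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(config, indent=2))
```

Note this overwrites the whole file; if you already have an openclaw.json, load it first and merge the diagnostics and plugins keys instead.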

Restart your gateway:

openclaw gateway restart

Now open the LangWatch dashboard and talk to your clawdbot. Traces start flowing immediately. Each agent turn produces a span tree: the agent run is the root, each LLM span is a child of the run, and tool spans are children of the LLM call that invoked them.

Content Capture

The captureContent flag controls whether message content is included in traces. When enabled, you get:

  • gen_ai.input.messages — the full prompt sent to the model (JSON array of chat messages)

  • gen_ai.output.messages — the model's response

  • gen_ai.system_instructions — the system prompt

  • gen_ai.request.tools — tool definitions available to the model

  • Tool input/output on tool.execution spans

When disabled, you still get the full trace structure, token counts, latency, and model metadata — just no message content. Useful if you're tracing a shared agent and want observability without logging sensitive conversations.
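To make the captureContent switch concrete, here's an illustrative sketch of what the content attributes on an LLM span look like. The attribute names are from the list above; the message values and the part-based message shape are assumptions for illustration, not captured from a real trace.

```python
import json

# With captureContent: true, content attributes carry JSON-encoded
# message arrays (values below are made up for illustration).
span_attributes = {
    "gen_ai.system_instructions": "You are a helpful team assistant.",
    "gen_ai.input.messages": json.dumps(
        [{"role": "user", "parts": [{"type": "text", "content": "Review this PR"}]}]
    ),
    "gen_ai.output.messages": json.dumps(
        [{"role": "assistant", "parts": [{"type": "text", "content": "Looks good, two nits..."}]}]
    ),
}

# With captureContent: false, the span keeps its structure and
# metadata but these content attributes are simply absent:
redacted = {
    k: v
    for k, v in span_attributes.items()
    if not k.startswith(("gen_ai.input", "gen_ai.output", "gen_ai.system"))
}
```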

GenAI Semantic Conventions Compliance

This is the part we're most excited about contributing to the community.

The OTEL GenAI semantic conventions define a standard vocabulary for instrumenting LLM applications. The diagnostics-otel plugin now aligns with this spec across LLM call spans, tool execution spans, token usage, and content capture.
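As a quick orientation, here's a sketch of the kind of attributes a spec-compliant LLM span carries. The attribute names come from the OTEL GenAI semantic conventions; the model name and token counts are made-up placeholders.

```python
# Attribute names per the OTEL GenAI semantic conventions;
# values are illustrative placeholders, not a real trace.
llm_span_attributes = {
    "gen_ai.operation.name": "chat",
    "gen_ai.request.model": "example-model",
    "gen_ai.usage.input_tokens": 1200,
    "gen_ai.usage.output_tokens": 85,
}

# Token totals for cost visibility are a simple sum of the two
# usage attributes on each LLM span.
total_tokens = (
    llm_span_attributes["gen_ai.usage.input_tokens"]
    + llm_span_attributes["gen_ai.usage.output_tokens"]
)
```

The spec also prescribes span naming of the form "{operation} {model}" (e.g. "chat example-model"), which is what makes traces from different agents comparable in any OTEL-compatible backend.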

Configuration Reference

| Option | Description | Default |
| --- | --- | --- |
| diagnostics.otel.enabled | Turn OTEL export on/off | false |
| diagnostics.otel.endpoint | OTLP endpoint URL | (none) |
| diagnostics.otel.traces | Export traces | true |
| diagnostics.otel.metrics | Export metrics | true |
| diagnostics.otel.logs | Export logs | false |
| diagnostics.otel.headers | Auth headers | {} |
| diagnostics.otel.serviceName | Service name in traces | "openclaw" |
| diagnostics.otel.sampleRate | Sampling rate (0.0–1.0) | 1.0 |
| diagnostics.otel.captureContent | Include message content in traces | false |

Where This Is Heading

The current implementation covers the core loop: LLM calls, tool executions, token usage, and content capture. Follow the discussion on PR #11100 — it's shaping what comes next.

Ship agents with confidence, not crossed fingers

Get up and running with LangWatch in as little as 5 minutes.
