This page focuses on configuration. For a complete, runnable example, see the full working example repository: Spring AI + LangWatch (OpenTelemetry) example.
Prerequisites
- Java 17 or later
- An OpenAI API key (if you use the OpenAI provider via Spring AI)
- A LangWatch API key
Setup
Set required environment variables
Export your provider and LangWatch API keys as environment variables that your app can read.
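For example (a minimal sketch; the exact variable names are assumptions and must match whatever your application.yaml references):

```bash
# Key for the model provider used via Spring AI (OpenAI in this example).
export OPENAI_API_KEY="sk-..."

# Key used to authenticate the OTLP exporter against LangWatch; the variable
# name is an assumption and must match the placeholder in application.yaml.
export LANGWATCH_API_KEY="your-langwatch-key"
```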
Configure the OpenTelemetry exporter to LangWatch
Configure OpenTelemetry and Spring AI in your src/main/resources/application.yaml so your app captures traces and sends them directly to LangWatch.
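A minimal sketch of what that configuration might look like, assuming the OpenTelemetry Spring Boot starter and the Spring AI OpenAI starter are on the classpath. The `otel.exporter.otlp.*` keys below match the ones referenced in the Troubleshooting section; the Spring AI observation property names are assumptions, so check the example repository for the exact keys your version uses:

```yaml
# src/main/resources/application.yaml (sketch)
otel:
  exporter:
    otlp:
      # LangWatch cloud OTLP endpoint; adjust for self-hosted deployments.
      endpoint: https://app.langwatch.ai/api/otel
      protocol: http/protobuf
      # Map-style headers assume the OpenTelemetry Spring Boot starter's
      # property binding.
      headers:
        Authorization: "Bearer ${LANGWATCH_API_KEY}"

spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}
    chat:
      observations:
        # Assumed property names for capturing prompt/completion content;
        # verify against your Spring AI version.
        include-prompt: true
        include-completion: true
```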
What gets traced
- HTTP requests handled by your Spring Boot application
- AI model calls performed via Spring AI (e.g., OpenAI); a minimal example follows this list
- Prompt and completion content, when content capture is enabled
- Performance metrics and errors/exceptions
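As a concrete illustration, a controller like the following (a hypothetical sketch, not code from the example repository) produces both an HTTP server span and a Spring AI chat span per request:

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ChatController {

    private final ChatClient chatClient;

    // Spring AI auto-configures a ChatClient.Builder when a chat model
    // (e.g., OpenAI) is configured in application.yaml.
    ChatController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    @GetMapping("/chat")
    String chat(@RequestParam String message) {
        // Each call through ChatClient is observed by Spring AI's
        // instrumentation and exported to LangWatch as a span.
        return chatClient.prompt()
                .user(message)
                .call()
                .content();
    }
}
```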
Monitoring
Once configured:
- Visit your LangWatch dashboard to explore spans and AI-specific attributes
- Analyze model performance, usage, and costs
- Investigate failures with full trace context
Troubleshooting
I don't see any traces in LangWatch
- Authorization header: Ensure `Authorization: Bearer <your-langwatch-key>` is set under `otel.exporter.otlp.headers`.
- Endpoint URL: Confirm `otel.exporter.otlp.endpoint` resolves to your LangWatch endpoint (for cloud: `https://app.langwatch.ai/api/otel`) and the protocol is `http/protobuf`.
- Network egress: Verify your environment can reach LangWatch (egress/proxy/firewall settings).
Spring AI calls aren't producing spans
- Provider configuration: Ensure your Spring AI provider (e.g., OpenAI) is properly configured and invoked by your code.
- Sampling: Check your OpenTelemetry sampling configuration if you've customized it; overly aggressive sampling can drop spans. See the snippet after this list.
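To rule sampling out while debugging, you can temporarily force the standard always-on sampler via OpenTelemetry's autoconfiguration property:

```yaml
otel:
  traces:
    # Record every trace while debugging missing spans; revert afterwards.
    sampler: always_on
```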
For a complete implementation showing controllers, Spring AI configuration, and OpenTelemetry setup, see the full working example repository.