Integrate LangWatch into your Python application to start observing your LLM interactions. This guide covers the setup and basic usage of the LangWatch Python SDK.
Get your LangWatch API Key
First, you need a LangWatch API key. Sign up at app.langwatch.ai and find your API key in your project settings. The SDK will automatically use the `LANGWATCH_API_KEY` environment variable if it is set.
Start Instrumenting
First, ensure you have the SDK installed:

Capturing Messages
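The install command below assumes the SDK is published on PyPI under the package name `langwatch`:

```shell
pip install langwatch
```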
- Each message triggering your LLM pipeline as a whole is captured with a Trace.
- A Trace contains multiple Spans, which are the steps inside your pipeline.
- Traces can be grouped together on the LangWatch Dashboard by having the same `thread_id` in their metadata, making the individual messages become part of a conversation.
- It is also recommended to provide the `user_id` metadata to track user analytics.
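Setting these metadata fields might look like the following sketch. It assumes the current trace object exposes an `update()` method accepting a `metadata` dict; check the SDK reference for the exact signature:

```python
import langwatch


@langwatch.trace()
def handle_message(user_message: str, thread_id: str, user_id: str) -> str:
    # Group this trace into a conversation and attribute it to a user
    # by attaching thread_id and user_id to the trace metadata.
    langwatch.get_current_trace().update(
        metadata={"thread_id": thread_id, "user_id": user_id}
    )
    # ... run your LLM pipeline here ...
    return "response"
```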
Creating a Trace
To capture an end-to-end operation, like processing a user message, you can wrap the main function or entry point with the `@langwatch.trace()` decorator. This automatically creates a root span for the entire operation. Within the decorated function, the current trace can be accessed via `langwatch.get_current_trace()`.
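A minimal sketch of the decorator in use. The handler name and message flow are illustrative, and `langwatch.setup()` is assumed to pick up `LANGWATCH_API_KEY` from the environment:

```python
import langwatch

langwatch.setup()  # reads LANGWATCH_API_KEY from the environment


@langwatch.trace()  # creates a root span covering the whole operation
def handle_message(user_message: str) -> str:
    # ... call your LLM pipeline here ...
    return "response"
```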
Capturing a Span
To instrument specific parts of your pipeline within a trace (like an LLM operation, RAG retrieval, or external API call), use the `@langwatch.span()` decorator.

The `@langwatch.span()` decorator automatically captures the decorated function's arguments as the span's input and its return value as the output. This behavior can be controlled via the `capture_input` and `capture_output` arguments (both default to `True`).

Spans created inside a function decorated with `@langwatch.trace()` will automatically be nested under the main trace span. You can add additional type, name, metadata, and events, or override the automatic input/output using decorator arguments or the `update()` method on the span object obtained via `langwatch.get_current_span()`.
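A sketch of a span nested under a trace. The function names, return values, and the `type="rag"` argument are illustrative assumptions; consult the SDK reference for the supported span types:

```python
import langwatch


@langwatch.span(type="rag")  # type and name arguments are optional
def retrieve_documents(query: str) -> list[str]:
    # Arguments are captured as the span's input,
    # and the return value as its output.
    return ["doc1", "doc2"]


@langwatch.trace()
def handle_message(user_message: str) -> str:
    docs = retrieve_documents(user_message)  # nested under the trace's root span
    return f"answered using {len(docs)} documents"
```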
For detailed guidance on manually creating traces and spans using context managers or direct start/end calls, see the Manual Instrumentation Tutorial.
Full Setup
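For explicit configuration instead of environment variables, a setup call might look like the sketch below. The keyword argument names are assumptions based on the options described next; verify them against the SDK's API reference:

```python
import langwatch

langwatch.setup(
    api_key="your-api-key",  # or set LANGWATCH_API_KEY instead
    debug=False,             # assumed flag; mirrors LANGWATCH_DEBUG
    disable_sending=False,   # assumed flag; True is useful in tests
)
```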
Options
- API key: defaults to the `LANGWATCH_API_KEY` environment variable.
- Endpoint: defaults to the `LANGWATCH_ENDPOINT` environment variable or `https://app.langwatch.ai`.
- Instrumentors: a list of instrumentors (e.g. `OpenAIInstrumentor`, `LangChainInstrumentor`) to capture data from supported libraries.
- Tracer provider: an existing OpenTelemetry `TracerProvider`. If provided, LangWatch will use it (adding its exporter) instead of creating a new one. If not provided, LangWatch checks the global provider or creates a new one.
- Debug: defaults to `False`, or checks if the `LANGWATCH_DEBUG` environment variable is set to `"true"`.
- Disable sending: if `True`, disables sending traces to the LangWatch server. Useful for testing or development.
- Flush on exit: if `True` (the default), the tracer provider will attempt to flush all pending spans when the program exits via `atexit`.
- Ignore global tracer provider warning: if `True`, suppresses the warning message logged when an existing global `TracerProvider` is detected and LangWatch attaches its exporter to it instead of overriding it.

Integrations
LangWatch offers seamless integrations with a variety of popular Python libraries and frameworks. These integrations provide automatic instrumentation, capturing relevant data from your LLM applications with minimal setup. Below is a list of currently supported integrations. Click on each to learn more about specific setup instructions and available features:

- Agno
- AWS Bedrock
- Azure AI
- Crew AI
- DSPy
- Haystack
- Langchain
- LangGraph
- LiteLLM
- OpenAI
- OpenAI Agents
- OpenAI Azure
- Pydantic AI
- Other Frameworks
FAQ: Frequently Asked Questions
How do I track LLM costs and token usage?
How do I capture RAG (Retrieval Augmented Generation) contexts?
How can I make input and output of the trace more human readable to better read the conversation?
How do I add custom metadata and user information to traces?
How can I capture a whole conversation?

Set the same `thread_id` metadata on each trace. See the Tracking Conversations tutorial for a full example.

How do I capture evaluations and guardrails tracing data?
How can I manually instrument my application for more fine-grained control?
How do I integrate with existing OpenTelemetry setups?