LangWatch integrates with OpenAI Agents through OpenInference instrumentation to monitor agent execution, LLM calls, and tool usage.

Installation

pip install langwatch openai-agents openinference-instrumentation-openai-agents
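
LangWatch reads its API key from the environment. Before running the example below, export the variable in your shell (the value here is a placeholder, not a real key):

```shell
# Placeholder value — substitute your actual LangWatch API key.
export LANGWATCH_API_KEY="your-langwatch-api-key"
```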

Usage

The LangWatch API key is read by default from the LANGWATCH_API_KEY environment variable.
Enable the OpenInference instrumentation for OpenAI Agents by passing an OpenAIAgentsInstrumentor instance to langwatch.setup():

import asyncio

import langwatch
from agents import Agent, Runner
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

# Register the instrumentor so agent spans are exported to LangWatch.
langwatch.setup(instrumentors=[OpenAIAgentsInstrumentor()])

agent = Agent(name="ExampleAgent", instructions="You are a helpful assistant.")


@langwatch.trace(name="OpenAI Agent Run with OpenInference")
async def run_agent_with_openinference(prompt: str):
    result = await Runner.run(agent, prompt)
    return result.final_output


async def main():
    user_query = "Tell me a joke"
    response = await run_agent_with_openinference(user_query)
    print(f"User: {user_query}")
    print(f"AI: {response}")


if __name__ == "__main__":
    asyncio.run(main())

The OpenAIAgentsInstrumentor automatically captures agent activity, LLM calls, and tool usage. Wrap your entry point with @langwatch.trace() to create a parent trace under which the instrumented agent operations are nested.
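
Because the SDK expects LANGWATCH_API_KEY in the environment, a small pre-flight check can turn a silent misconfiguration into an immediate, descriptive error. This helper is a minimal sketch (require_api_key is a hypothetical name, not part of the LangWatch SDK):

```python
import os


def require_api_key(var: str = "LANGWATCH_API_KEY") -> str:
    """Return the API key from the environment, failing fast if it is unset."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set; export it before running the agent example."
        )
    return key
```

Calling require_api_key() before langwatch.setup() makes a missing key fail at startup rather than when the first trace is exported.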