# Azure OpenAI

> Use the LangWatch Azure OpenAI guide to instrument LLM calls, trace interactions, and support AI agent test workflows.

<div className="not-prose" style={{display: "flex", gap: "8px", padding: "0"}}>
  <div>
    <a href="https://github.com/langwatch/langwatch/tree/main/typescript-sdk" target="_blank">
      <img src="https://img.shields.io/badge/repo-langwatch-blue?style=flat&logo=Github" noZoom alt="LangWatch TypeScript Repo" />
    </a>
  </div>

  <div>
    <a href="https://www.npmjs.com/package/langwatch" target="_blank">
      <img src="https://img.shields.io/npm/v/langwatch?color=007EC6" noZoom alt="LangWatch TypeScript SDK version" />
    </a>
  </div>
</div>

The LangWatch library is the easiest way to integrate your TypeScript application with LangWatch. Messages are synced in the background, so the SDK doesn't intercept or block your LLM calls.

<Note>Protip: want to get started even faster? Copy our <a href="/llms.txt" target="_blank">llms.txt</a> and ask an AI to do this integration for you.</Note>

#### Prerequisites

* Obtain your `LANGWATCH_API_KEY` from the [LangWatch dashboard](https://app.langwatch.ai/).

#### Installation

```sh theme={null}
npm install langwatch
```

#### Configuration

Ensure `LANGWATCH_API_KEY` is set:

<Tabs>
  <Tab title="Environment variable">
    ```bash .env theme={null}
    LANGWATCH_API_KEY='your_api_key_here'
    ```
  </Tab>

  <Tab title="Client parameters">
    ```typescript theme={null}
    import { LangWatch } from 'langwatch';

    const langwatch = new LangWatch({
      apiKey: 'your_api_key_here',
    });
    ```
  </Tab>
</Tabs>

## Basic Concepts

* Each message triggering your LLM pipeline as a whole is captured with a [Trace](/concepts#traces).
* A [Trace](/concepts#traces) contains multiple [Spans](/concepts#spans), which are the steps inside your pipeline.
  * A span can be an LLM call, a database query for a RAG retrieval, or a simple function transformation.
  * Different types of [Spans](/concepts#spans) capture different parameters.
  * [Spans](/concepts#spans) can be nested to capture the pipeline structure.
* [Traces](/concepts#traces) can be grouped together on the LangWatch Dashboard by having the same [`thread_id`](/concepts#threads) in their metadata, turning the individual messages into a conversation.
  * It is also recommended to provide the [`user_id`](/concepts#user-id) metadata to track user analytics (see the sketch below).
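
For example, with the client-based API used later in this guide, you can attach this metadata when creating the trace. A minimal sketch, assuming the `threadId`/`userId` metadata keys of the TypeScript SDK:

```typescript theme={null}
import { LangWatch } from "langwatch";

const langwatch = new LangWatch();

// Group this trace into a conversation and attribute it to a user
const trace = langwatch.getTrace({
  metadata: { threadId: "conversation-123", userId: "user-456" },
});
```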

## Integration

<Info>
  The LangWatch API key is configured by default via the `LANGWATCH_API_KEY` environment variable.
</Info>

Start by setting up observability and initializing the LangWatch tracer:

```typescript theme={null}
import { setupObservability } from "langwatch/observability/node";
import { getLangWatchTracer } from "langwatch";

// Setup observability first
setupObservability();

const tracer = getLangWatchTracer("my-service");
```

Then, to capture your LLM calls, use the `withActiveSpan` method to create an LLM span with automatic lifecycle management:

```typescript theme={null}
import OpenAI, { AzureOpenAI } from "openai";

// Model to be used (for Azure, this is your deployment name) and messages sent to the LLM
const model = "gpt-5-mini";
const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
  { role: "system", content: "You are a helpful assistant." },
  {
    role: "user",
    content: "Write a tweet-size vegetarian lasagna recipe for 4 people.",
  },
];

const openai = new AzureOpenAI({
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-02-01",
  endpoint: process.env.AZURE_OPENAI_ENDPOINT,
});

// Use withActiveSpan for automatic error handling and span cleanup
const result = await tracer.withActiveSpan("llm-call", async (span) => {
  // Set span type and input
  span.setType("llm");
  span.setInput("chat_messages", messages);
  span.setRequestModel(model);

  // Make the Azure OpenAI call
  const chatCompletion = await openai.chat.completions.create({
    messages: messages,
    model: model,
  });

  // Set output and metrics
  span.setOutput("chat_messages", [chatCompletion.choices[0]!.message]);
  span.setMetrics({
    promptTokens: chatCompletion.usage?.prompt_tokens,
    completionTokens: chatCompletion.usage?.completion_tokens,
  });

  return chatCompletion;
});
```

The `withActiveSpan` method automatically:

* Creates the span with the specified name
* Handles errors and sets the appropriate span status (see the example below)
* Ends the span when the function completes
* Returns the result of your async function
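
For instance, if your callback throws, the error is recorded on the span and then rethrown, so your existing error handling keeps working. A minimal sketch:

```typescript theme={null}
try {
  await tracer.withActiveSpan("llm-call", async (span) => {
    span.setType("llm");
    throw new Error("model unavailable");
  });
} catch (error) {
  // The span was already marked with an error status and ended; handle as usual
  console.error(error);
}
```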

## Community Auto-Instrumentation

For automatic instrumentation without manual span creation, you can use the [OpenInference instrumentation for OpenAI](https://github.com/Arize-ai/openinference/tree/main/js/packages/openinference-instrumentation-openai), which also works with Azure OpenAI:

<Steps>
  <Step title="Install the OpenInference instrumentation">
    ```bash theme={null}
    npm install @arizeai/openinference-instrumentation-openai
    ```
  </Step>

  <Step title="Register the instrumentation">
    ```typescript theme={null}
    import { NodeSDK } from "@opentelemetry/sdk-node";
    import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";
    import { setupObservability } from "langwatch/observability/node";

    // Setup observability with the instrumentation
    setupObservability({
      instrumentations: [new OpenAIInstrumentation()],
    });
    ```
  </Step>

  <Step title="Use Azure OpenAI normally">
    ```typescript theme={null}
    import { AzureOpenAI } from "openai";

    const openai = new AzureOpenAI({
      apiKey: process.env.AZURE_OPENAI_API_KEY,
      apiVersion: "2024-02-01",
      endpoint: process.env.AZURE_OPENAI_ENDPOINT,
    });

    // This call will be automatically instrumented
    const completion = await openai.chat.completions.create({
      model: "gpt-5-mini",
      messages: [{ role: "user", content: "Hello!" }],
    });
    ```
  </Step>
</Steps>

<Info>
  The OpenInference instrumentation automatically captures:

  * Input messages and model configuration
  * Output responses and token usage
  * Error handling and status codes
  * Request/response timing
  * Azure-specific configuration (endpoint, API version)
</Info>

<Warning>
  When using auto-instrumentation, you may need to configure data capture settings to control what information is sent to LangWatch.
</Warning>
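
If your SDK version exposes a data-capture option on `setupObservability` (the option name and values below are an assumption; check the SDK reference for your version), it could look like:

```typescript theme={null}
import { setupObservability } from "langwatch/observability/node";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

// Assumed `dataCapture` option: controls whether inputs/outputs are recorded
setupObservability({
  instrumentations: [new OpenAIInstrumentation()],
  dataCapture: "input", // e.g. record inputs but not model outputs
});
```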

<Note>
  On short-lived environments like Lambdas or Serverless Functions, be sure to call `await trace.sendSpans();` to wait for all pending requests to be sent before the runtime is destroyed.
</Note>
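
For example, in a Lambda-style handler using the client-based API (a sketch; the handler shape and the `langwatch` client are assumed from the examples below):

```typescript theme={null}
export const handler = async (event: unknown) => {
  const trace = langwatch.getTrace();

  // ... run your LLM pipeline, creating spans on `trace` ...

  // Flush before the serverless runtime is frozen or destroyed
  await trace.sendSpans();

  return { statusCode: 200 };
};
```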

## Capture a RAG Span

Apart from LLM spans, another commonly used span type is the RAG span. It captures the contexts retrieved by a RAG step that will be used by the LLM, and enables a whole new set of RAG-based features and quality evaluations on LangWatch.

To capture a RAG step, start a RAG span inside the trace, giving it the input query being used:

```typescript theme={null}
const ragSpan = trace.startRAGSpan({
  name: "my-vectordb-retrieval", // optional
  input: { type: "text", value: "search query" },
});

// proceed to do the retrieval normally
```

Then, after doing the retrieval, you can end the RAG span with the contexts that were retrieved and will be used by the LLM:

```typescript theme={null}
ragSpan.end({
  contexts: [
    {
      documentId: "doc1",
      content: "document chunk 1",
    },
    {
      documentId: "doc2",
      content: "document chunk 2",
    },
  ],
});
```

<Note>
  On LangChain.js, RAG spans are captured automatically by the LangWatch callback when using LangChain Retrievers, with `source` as the documentId.
</Note>

## Capture an Arbitrary Span

You can also use generic spans to capture any other type of operation along with its inputs and outputs, for example a function call:

```typescript theme={null}
// before the function starts
const span = trace.startSpan({
  name: "weather_function",
  input: {
    type: "json",
    value: {
      city: "Tokyo",
    },
  },
});

// ...after the function ends
span.end({
  output: {
    type: "json",
    value: {
      weather: "sunny",
    },
  },
});
```

You can also nest spans one inside the other, capturing your pipeline structure, for example:

```typescript theme={null}
const span = trace.startSpan({
  name: "pipeline",
});

const nestedSpan = span.startSpan({
  name: "nested_pipeline",
});

nestedSpan.end();

span.end();
```

Both LLM and RAG spans can be nested like any arbitrary span.
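
For example, a RAG retrieval nested inside a pipeline span (a sketch, assuming nested spans expose the same `startRAGSpan` helper as the trace):

```typescript theme={null}
const pipelineSpan = trace.startSpan({ name: "pipeline" });

// Nest the retrieval inside the pipeline span
const ragSpan = pipelineSpan.startRAGSpan({
  input: { type: "text", value: "search query" },
});
ragSpan.end({
  contexts: [{ documentId: "doc1", content: "document chunk 1" }],
});

pipelineSpan.end();
```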

## Capturing Exceptions

To also capture when your code throws an exception, wrap your code in a try/catch and update or end the span with the error:

```typescript theme={null}
try {
  throw new Error("unexpected error");
} catch (error) {
  span.end({
    error: error,
  });
}
```

## Capturing custom evaluation results

[LangWatch Evaluators](/evaluations/evaluators/list) can run automatically on your traces, but if you have an in-house custom evaluator, you can also capture the evaluation
results of your custom evaluator on the current trace or span by using the `.addEvaluation` method:

```typescript theme={null}
import { type LangWatchTrace } from "langwatch";

async function llmStep({ message, trace }: { message: string, trace: LangWatchTrace }): Promise<string> {
    const span = trace.startLLMSpan({ name: "llmStep" });

    // ... your existing code

    span.addEvaluation({
        name: "custom evaluation",
        passed: true,
        score: 0.5,
        label: "category_detected",
        details: "explanation of the evaluation results",
    });

    span.end();
    return message; // placeholder: return your actual LLM output here
}
```

The evaluation `name` is required and must be a string. The other fields are optional, but at least one of `passed`, `score`, or `label` must be provided.
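
For example, a minimal evaluation with just a score:

```typescript theme={null}
span.addEvaluation({
  name: "answer relevance",
  score: 0.9,
});
```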

## Related Documentation

For more advanced Azure AI integration patterns and best practices:

* **[Integration Guide](/integration/typescript/guide)** - Basic setup and core concepts
* **[Manual Instrumentation](/integration/typescript/tutorials/manual-instrumentation)** - Advanced span management for Azure AI calls
* **[Semantic Conventions](/integration/typescript/tutorials/semantic-conventions)** - Azure-specific attributes and conventions
* **[Debugging and Troubleshooting](/integration/typescript/tutorials/debugging-typescript)** - Debug Azure integration issues
* **[Capturing Metadata](/integration/typescript/tutorials/capturing-metadata)** - Adding custom metadata to Azure AI calls

<Tip>
  For production Azure AI applications, combine manual instrumentation with [Semantic Conventions](/integration/typescript/tutorials/semantic-conventions) for consistent observability and better analytics.
</Tip>
