
# Vercel AI SDK

> Integrate the Vercel AI SDK with LangWatch for TypeScript-based tracing, token tracking, and real-time agent testing.

<Tip>
  **Quick setup?** Instead of following these steps manually, [copy a prompt](/skills/code-prompts#instrument-my-code) into your coding agent and it will set this up for you automatically.
</Tip>

<div className="not-prose" style={{display: "flex", gap: "8px", padding: "0"}}>
  <div>
    <a href="https://github.com/langwatch/langwatch/tree/main/typescript-sdk" target="_blank">
      <img src="https://img.shields.io/badge/repo-langwatch-blue?style=flat&logo=Github" noZoom alt="LangWatch TypeScript Repo" />
    </a>
  </div>

  <div>
    <a href="https://www.npmjs.com/package/langwatch" target="_blank">
      <img src="https://img.shields.io/npm/v/langwatch?color=007EC6" noZoom alt="LangWatch TypeScript SDK version" />
    </a>
  </div>
</div>

The LangWatch library is the easiest way to integrate your TypeScript application with LangWatch. Messages are synced in the background, so your LLM calls are never intercepted or blocked.

<Note>Pro tip: want to get started even faster? Copy our <a href="/llms.txt" target="_blank">llms.txt</a> and ask an AI to do this integration for you.</Note>

#### Prerequisites

* Obtain your `LANGWATCH_API_KEY` from the [LangWatch dashboard](https://app.langwatch.ai/).

#### Installation

```sh theme={null}
npm install langwatch
```

#### Configuration

Ensure `LANGWATCH_API_KEY` is set:

<Tabs>
  <Tab title="Environment variable">
    ```bash .env theme={null}
    LANGWATCH_API_KEY='your_api_key_here'
    ```
  </Tab>

  <Tab title="Client parameters">
    ```typescript theme={null}
    import { LangWatch } from 'langwatch';

    const langwatch = new LangWatch({
      apiKey: 'your_api_key_here',
    });
    ```
  </Tab>
</Tabs>

## Basic Concepts

* Each message that triggers your LLM pipeline is captured as a whole in a [Trace](/concepts#traces).
* A [Trace](/concepts#traces) contains multiple [Spans](/concepts#spans), which are the steps inside your pipeline.
  * A span can be an LLM call, a database query for a RAG retrieval, or a simple function transformation.
  * Different types of [Spans](/concepts#spans) capture different parameters.
  * [Spans](/concepts#spans) can be nested to capture the pipeline structure.
* [Traces](/concepts#traces) can be grouped together on the LangWatch dashboard by giving them the same [`thread_id`](/concepts#threads) in their metadata, turning individual messages into a conversation.
  * It is also recommended to provide the [`user_id`](/concepts#user-id) metadata to track user analytics.
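As a sketch of how this maps onto the Vercel AI SDK, the `thread_id` and `user_id` metadata can be passed through `experimental_telemetry.metadata`. Whether these exact key names flow through to LangWatch unchanged is an assumption here; verify them against the [Capturing Metadata and Attributes](/integration/typescript/tutorials/capturing-metadata) tutorial before relying on them.

```typescript
// Sketch: attaching thread/user metadata so LangWatch can group traces
// into conversations. The metadata keys ("thread_id", "user_id") mirror
// the concepts above, but treat them as assumptions -- check the metadata
// tutorial for the authoritative key names.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

async function reply(
  message: string,
  threadId: string,
  userId: string,
): Promise<string> {
  const response = await generateText({
    model: openai("gpt-5-mini"),
    prompt: message,
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        thread_id: threadId, // groups this trace into a conversation
        user_id: userId,     // enables per-user analytics
      },
    },
  });
  return response.text;
}
```

With this in place, every call made with the same `threadId` shows up as one conversation on the dashboard.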

## Installation

<CodeGroup>
  ```bash npm theme={null}
  npm i langwatch ai @ai-sdk/openai
  ```

  ```bash pnpm theme={null}
  pnpm add langwatch ai @ai-sdk/openai
  ```

  ```bash yarn theme={null}
  yarn add langwatch ai @ai-sdk/openai
  ```

  ```bash bun theme={null}
  bun add langwatch ai @ai-sdk/openai
  ```
</CodeGroup>

## Usage

<Info>
  The LangWatch API key is configured by default via the `LANGWATCH_API_KEY` environment variable.
</Info>

Set up observability and enable telemetry on your Vercel AI SDK calls:

```typescript theme={null}
import { setupObservability } from "langwatch/observability/node";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

setupObservability({ serviceName: "<project_name>" });

async function main(message: string): Promise<string> {
  const response = await generateText({
    model: openai("gpt-5-mini"),
    prompt: message,
    experimental_telemetry: { isEnabled: true },
  });
  return response.text;
}

console.log(await main("Hey, tell me a joke"));
```

The Vercel AI SDK automatically sends traces to LangWatch when `experimental_telemetry.isEnabled` is set to `true`. For Next.js applications, configure OpenTelemetry in your `instrumentation.ts` file using `LangWatchExporter`.
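For reference, a Next.js setup along these lines should work. This is a sketch that assumes the `@vercel/otel` package's `registerOTel` helper and a `LangWatchExporter` export from `langwatch`; the service name is a placeholder, and the exact import path should be checked against the current SDK documentation.

```typescript
// instrumentation.ts -- sketch of a Next.js OpenTelemetry setup.
// Assumes registerOTel from @vercel/otel and LangWatchExporter from
// langwatch; verify both against the current SDK docs.
import { registerOTel } from "@vercel/otel";
import { LangWatchExporter } from "langwatch";

export function register() {
  registerOTel({
    serviceName: "my-next-app", // hypothetical service name
    traceExporter: new LangWatchExporter(), // reads LANGWATCH_API_KEY from env
  });
}
```

Next.js calls `register()` once on server startup, so every `generateText` call with `experimental_telemetry.isEnabled` set then exports through this pipeline.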

## Related

* [Capturing RAG](/integration/typescript/tutorials/capturing-rag) - Learn how to capture RAG data from retrievers and tools
* [Capturing Metadata and Attributes](/integration/typescript/tutorials/capturing-metadata) - Add custom metadata and attributes to your traces and spans
* [Capturing Evaluations & Guardrails](/integration/python/tutorials/capturing-evaluations-guardrails) - Log evaluations and implement guardrails in your Vercel AI SDK applications
