# Individual Run View

The **Individual Run View** is where you can perform a detailed analysis of a single scenario run. You can access this view by clicking on a scenario from the **Batch Runs** page.

This page displays the full conversation log between the user and the agent.

<img src="https://mintcdn.com/langwatch/UFU4yqeW-QWPi3A0/images/simulations/individual-simulation-run-with-history.png?fit=max&auto=format&n=UFU4yqeW-QWPi3A0&q=85&s=48b778f73ff2d8d791b73196d2e23b3b" alt="Individual Simulation Run" width="100%" data-path="images/simulations/individual-simulation-run-with-history.png" />

A key feature of this page is the **Previous Runs** panel on the right. It shows the history for that specific scenario, identified by its `scenarioId`, allowing you to see how its behavior has changed over time across different batches. This is invaluable for tracking regressions or improvements.
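Conceptually, the Previous Runs panel groups run records by their `scenarioId` across batches. A minimal sketch of that grouping, using an illustrative record shape (the field names here are assumptions for this example, not the actual LangWatch API schema):

```python
from collections import defaultdict

# Hypothetical run records, as they might be returned by a runs API.
# "scenarioId", "batchId", and "status" are illustrative field names.
runs = [
    {"scenarioId": "checkout-flow", "batchId": "batch-1", "status": "FAILED"},
    {"scenarioId": "checkout-flow", "batchId": "batch-2", "status": "PASSED"},
    {"scenarioId": "refund-request", "batchId": "batch-1", "status": "PASSED"},
]

# Group runs by scenarioId to reconstruct the per-scenario history
# that the Previous Runs panel displays.
history = defaultdict(list)
for run in runs:
    history[run["scenarioId"]].append((run["batchId"], run["status"]))

print(history["checkout-flow"])
# A FAILED -> PASSED sequence across batches indicates an improvement;
# the reverse would indicate a regression.
```

Reading one scenario's history in batch order is exactly how you spot the regressions or improvements mentioned above.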

### Test Report

At the bottom of the conversation, you'll find the **Scenario Test Report**. This block provides a summary of the scenario's execution and its final outcome.

<img src="https://mintcdn.com/langwatch/UFU4yqeW-QWPi3A0/images/simulations/simulation-results.png?fit=max&auto=format&n=UFU4yqeW-QWPi3A0&q=85&s=8d0039eac95db2374678aa0c7c2900e0" alt="Scenario Test Report" width="100%" data-path="images/simulations/simulation-results.png" />

The report includes:

* **Status**: The final result of the run (e.g., PASSED, FAILED).
* **Success Criteria**: How many of the defined success criteria were met.
* **Duration**: The total time the scenario took to execute.
* **Met Criteria**: A list of the specific evaluation criteria that were satisfied.
* **Reasoning**: The explanation provided by the Judge Agent for its final verdict.
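The fields above can be modeled as a simple record. This is a hedged sketch of such a model; the class and attribute names are assumptions made for illustration and do not reflect the LangWatch SDK or API:

```python
from dataclasses import dataclass

# Illustrative model of the fields shown in the Scenario Test Report.
@dataclass
class ScenarioTestReport:
    status: str           # final result, e.g. "PASSED" or "FAILED"
    met_criteria: list    # criteria the Judge Agent marked as satisfied
    total_criteria: int   # total success criteria defined for the scenario
    duration_ms: int      # total execution time in milliseconds
    reasoning: str        # the Judge Agent's explanation of its verdict

    @property
    def passed(self) -> bool:
        # In this sketch, a run passes only when every criterion was met.
        return len(self.met_criteria) == self.total_criteria

report = ScenarioTestReport(
    status="PASSED",
    met_criteria=["agent greets the user", "agent resolves the refund"],
    total_criteria=2,
    duration_ms=4120,
    reasoning="Both success criteria were satisfied by the agent's responses.",
)
print(report.passed)
```

A structure like this makes it easy to assert on scenario outcomes in CI, mirroring what the report block shows in the UI.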
