# User Simulator Agent

## Overview
The User Simulator Agent is an LLM-powered agent that simulates realistic user behavior during scenario tests. Instead of writing scripted user messages, you describe the user's context and goals, and the simulator generates natural, contextually appropriate messages that drive the conversation forward.
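In the simplest case you don't script any user messages at all. The sketch below assumes a hypothetical `MyAgent` adapter wrapping the agent under test; the simulator itself runs with the library's defaults:

```python
import pytest
import scenario

@pytest.mark.asyncio
async def test_order_status_inquiry():
    # No scripted user messages: the simulator derives each user turn
    # from the scenario description and the conversation so far.
    result = await scenario.run(
        name="order status inquiry",
        description="User wants to know where their order is and has the order number ready.",
        agents=[
            MyAgent(),                      # hypothetical adapter for the agent under test
            scenario.UserSimulatorAgent(),  # falls back to the globally configured model
            scenario.JudgeAgent(criteria=["Agent provides the order status"]),
        ],
    )
    assert result.success
```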
## When to Use the User Simulator

The User Simulator is ideal for:

- **Automatic Testing**: Let the conversation unfold naturally without scripting every message
- **Diverse Scenarios**: Test how your agent handles different user personalities and communication styles
- **Edge Cases**: Explore unexpected user behaviors and responses
- **Multi-Turn Conversations**: Simulate realistic back-and-forth interactions
## How It Works

The user simulator:

- Reads the scenario description and conversation history
- Generates a natural user message based on context
- Adapts its communication style to match the described persona (see the sketch below)
- Responds realistically to the agent's messages
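Because the persona lives entirely in the description, changing a few sentences there changes who your agent is talking to. A hedged sketch (again with a hypothetical `MyAgent` adapter):

```python
import pytest
import scenario

@pytest.mark.asyncio
async def test_terse_power_user():
    # Persona-driven sketch: only the description differs from other tests,
    # and the simulator adapts its writing style to match it.
    result = await scenario.run(
        name="terse power user",
        description="""
            User is a senior developer in a hurry. They write short,
            abbreviation-heavy messages, skip pleasantries, and expect
            precise answers on the first try.
        """,
        agents=[
            MyAgent(),  # hypothetical adapter for the agent under test
            scenario.UserSimulatorAgent(model="openai/gpt-4o"),
            scenario.JudgeAgent(criteria=["Agent answers accurately and concisely"]),
        ],
        max_turns=6,
    )
    assert result.success
```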
## Use Case Example: Testing a Frustrated Customer

Let's test how a support agent handles an increasingly frustrated customer:
```python
import pytest

import scenario


class TechnicalSupportAgent(scenario.AgentAdapter):
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        # Your technical support agent implementation
        user_message = input.last_new_user_message_str()
        return await my_support_bot.process(user_message)


@pytest.mark.asyncio
async def test_frustrated_customer_handling():
    result = await scenario.run(
        name="frustrated customer with internet issues",
        description="""
            User is a non-technical person experiencing slow internet for 3 days.
            They've already tried calling support twice with no resolution.
            They're frustrated and tired of technical jargon. They just want
            their internet to work and are losing patience with troubleshooting steps.
        """,
        agents=[
            TechnicalSupportAgent(),
            scenario.UserSimulatorAgent(
                model="openai/gpt-4o",
                temperature=0.3,  # some variability for realistic frustration
            ),
            scenario.JudgeAgent(criteria=[
                "Agent acknowledges the customer's frustration empathetically",
                "Agent avoids excessive technical jargon",
                "Agent provides simple, clear instructions",
                "Agent offers escalation if troubleshooting doesn't work",
                "Agent remains professional despite customer frustration",
            ]),
        ],
        max_turns=10,
    )

    assert result.success
    print(f"Test completed with {len(result.messages)} messages")
```

## Configuration Reference
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `model` | `str` | No | Global config | LLM model identifier (e.g., `"openai/gpt-4o"`). |
| `temperature` | `float` | No | `0.0` | Sampling temperature (0.0-1.0). Higher values (0.3-0.7) produce more varied user messages. |
| `max_tokens` | `int` | No | Model default | Maximum tokens per user message. Keep it modest so messages stay naturally brief. |
| `system_prompt` | `str` | No | Built-in | Custom system prompt that overrides the default user-simulation behavior. |
| `api_base` | `str` | No | Global config | Base URL for custom API endpoints. |
| `api_key` | `str` | No | Environment | API key for the model provider. |
| `**extra_params` | `dict` | No | `{}` | Additional LiteLLM parameters (headers, timeout, client). |
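Putting several of these together: the sketch below tunes a simulator for short, varied messages against a self-hosted endpoint. The endpoint URL and the `MY_LLM_API_KEY` environment variable are placeholders, not values the library defines:

```python
import os

import scenario

# A configuration sketch combining parameters from the table above.
simulator = scenario.UserSimulatorAgent(
    model="openai/gpt-4o",
    temperature=0.5,  # more variation in phrasing between runs
    max_tokens=200,   # keeps simulated user messages naturally brief
    api_base="https://llm.internal.example.com/v1",  # placeholder endpoint
    api_key=os.environ["MY_LLM_API_KEY"],            # placeholder env var
)
```

Anything not covered by a named parameter can be passed through `**extra_params` directly to LiteLLM.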
## Next Steps
Explore related documentation:
- Judge Agent - Configure automated evaluation
- Core Concepts - Understand the simulation loop
- Writing Scenarios - Best practices for scenario design
- Scripted Simulations - Mix simulation with precise control
- Configuration - Set global defaults for all simulators
