User Simulator Agent

Overview

The User Simulator Agent is an LLM-powered agent that simulates realistic user behavior during scenario tests. Instead of writing scripted user messages, you describe the user's context and goals, and the simulator generates natural, contextually appropriate messages that drive the conversation forward.
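
For example, a minimal sketch — MyAgent is a hypothetical placeholder for your own scenario.AgentAdapter (a full adapter is shown in the example further down), and the description is the only "script" the simulator gets:

python
import scenario

# Minimal sketch — MyAgent is a hypothetical stand-in for your own
# scenario.AgentAdapter (a complete adapter appears in the example below).
# Note there are no scripted user messages: the simulator improvises them
# from the description alone.
async def run_minimal_scenario():
    result = await scenario.run(
        name="dentist appointment reschedule",
        description="User wants to reschedule tomorrow's dentist appointment to next week.",
        agents=[
            MyAgent(),
            scenario.UserSimulatorAgent(),  # model falls back to the global config
        ],
    )
    return result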

When to Use the User Simulator

The User Simulator is ideal for:

  • Automated Testing: Let conversations unfold naturally without scripting every message
  • Diverse Scenarios: Test how your agent handles different user personalities and communication styles
  • Edge Cases: Explore unexpected user behaviors and responses
  • Multi-Turn Conversations: Simulate realistic back-and-forth interactions

How It Works

The user simulator:

  1. Reads the scenario description and conversation history
  2. Generates a natural user message based on context
  3. Adapts its communication style to match the described persona (see the system_prompt sketch after this list)
  4. Responds realistically to the agent's messages
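
The persona adaptation in step 3 is driven by the scenario description, but the built-in behavior can also be replaced wholesale via the system_prompt parameter (see the Configuration Reference below). A sketch, with an illustrative prompt rather than the built-in one:

python
import scenario

# Sketch of overriding the default simulation behavior. The prompt text
# below is illustrative; per the Configuration Reference, it replaces the
# built-in system prompt entirely.
terse_user = scenario.UserSimulatorAgent(
    model="openai/gpt-4o",
    system_prompt="""
        You are simulating a terse, no-nonsense user. Keep messages to one
        or two short sentences, skip pleasantries, and push back if asked
        to repeat a step you have already tried.
    """,
)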

Use Case Example: Testing a Frustrated Customer

Let's test how a support agent handles an increasingly frustrated customer:

python
import pytest
import scenario
 
class TechnicalSupportAgent(scenario.AgentAdapter):
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        # Your technical support agent implementation
        user_message = input.last_new_user_message_str()
        return await my_support_bot.process(user_message)
 
@pytest.mark.asyncio
async def test_frustrated_customer_handling():
    result = await scenario.run(
        name="frustrated customer with internet issues",
        description="""
            User is a non-technical person experiencing slow internet for 3 days.
            They've already tried calling support twice with no resolution.
            They're frustrated and tired of technical jargon. They just want
            their internet to work and are losing patience with troubleshooting steps.
        """,
        agents=[
            TechnicalSupportAgent(),
            scenario.UserSimulatorAgent(
                model="openai/gpt-4o",
                temperature=0.3  # Some variability for realistic frustration
            ),
            scenario.JudgeAgent(criteria=[
                "Agent acknowledges the customer's frustration empathetically",
                "Agent avoids excessive technical jargon",
                "Agent provides simple, clear instructions",
                "Agent offers escalation if troubleshooting doesn't work",
                "Agent remains professional despite customer frustration"
            ])
        ],
        max_turns=10
    )
    
    assert result.success
    print(f"Test completed with {len(result.messages)} messages")

Configuration Reference

scenario.UserSimulatorAgent accepts the following parameters:

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| model | str | No | Global config | LLM model identifier (e.g., "openai/gpt-4o"). |
| temperature | float | No | 0.0 | Sampling temperature (0.0-1.0). Higher values (0.3-0.7) create more varied user messages. |
| max_tokens | int | No | Model default | Maximum tokens per user message; keep it modest so simulated messages stay naturally brief. |
| system_prompt | str | No | Built-in | Custom system prompt to override the default user simulation behavior. |
| api_base | str | No | Global config | Base URL for custom API endpoints. |
| api_key | str | No | Environment | API key for the model provider. |
| **extra_params | dict | No | {} | Additional LiteLLM parameters (headers, timeout, client). |
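
Putting several of these together — a configuration sketch in which every value is illustrative rather than a default (the trailing timeout shows an extra keyword argument forwarded to LiteLLM via **extra_params):

python
import scenario

# Illustrative configuration — all values here are examples, not defaults.
simulator = scenario.UserSimulatorAgent(
    model="openai/gpt-4o",
    temperature=0.5,        # more variation between simulated messages
    max_tokens=200,         # keeps simulated user messages naturally brief
    api_base="https://llm-proxy.example.com/v1",  # hypothetical proxy endpoint
    api_key="sk-...",       # normally picked up from the environment instead
    timeout=30,             # extra parameter passed through to LiteLLM
)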

Next Steps

Explore related documentation: