Launch Week - Day 4: Writing Tests with the Scenario MCP

@langwatch/mcp-server - v0.3.1 · Nov 6, 2025

Testing LLM agents has always been a bit too ✨ vibes ✨ based. Thankfully, the Scenario MCP is here to the rescue.

With today's release, the Scenario MCP enables you to generate high-quality tests, and it's as easy as pie. One prompt is all it takes, and the outcome is a strong testing foundation for your agents, keeping them on the straight and narrow as you build out new features.

This is only the start of continuous testing for AI agents, and it's stupidly simple to set up. We're excited to see where this journey takes us, and how you use it!

Setting it up

Step 1: Add the LangWatch MCP

Full instructions live at: https://docs.langwatch.ai/integration/mcp

{
  "mcpServers": {
    "langwatch": {
      "command": "npx",
      "args": [
        "-y",
        "@langwatch/mcp-server"
      ]
    }
  }
}
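This snippet goes into your MCP client's configuration file (for example, Claude Desktop's claude_desktop_config.json or Cursor's .cursor/mcp.json). If your client supports the standard env field, you can also pass credentials through it. A sketch, not gospel: the LANGWATCH_API_KEY variable name below is an assumption, so check the docs page above for the exact key your setup needs.

```json
{
  "mcpServers": {
    "langwatch": {
      "command": "npx",
      "args": ["-y", "@langwatch/mcp-server"],
      "env": {
        "LANGWATCH_API_KEY": "<your-api-key>"
      }
    }
  }
}
```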

Step 2: Ask your agent to write the tests

You can be as specific as you like, or keep it vague. Either way, you're ready to run your scenarios!
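For example, a prompt as simple as this is enough to get started (a made-up example; swap in the details of your own agent):

```
Write Scenario tests for my customer-support agent. Cover a happy path,
an ambiguous request, and a case where the user asks for something
out of scope.
```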

You can learn more about Scenario in the official docs, and see the LangWatch MCP documentation for details on the MCP server. You can also read the full blog post here.

Ship agents with confidence, not crossed fingers

Get up and running with LangWatch in as little as 5 minutes.