New Pricing: AI growth shouldn’t increase your bill

Manouk Draisma
Feb 20, 2026
How LangWatch fits in today’s LLM Ops landscape
As a company in a fast-moving, innovative space, we’ve revisited our pricing strategy a few times. After almost three years, it’s now clear to us where the industry is heading, and what the right long-term pricing strategy for LangWatch is.
In traditional observability, the main persona was the SRE and the core value was system health monitoring, so pricing was based mostly on trace volume, which made sense. LLM Ops platforms naturally followed the same model, charging per LLM trace as the primary line item, again treating system health as the primary value.
Tools like Arize AI, LangSmith, and Langfuse evolved alongside early AI teams. They’re decent products, and their growth makes sense. Trace-based pricing became the industry default because early LLM systems looked a lot like traditional distributed systems.
But AI systems didn’t stay simple for long.
Agents became more complex. Each user interaction triggered many internal steps. Experiments multiplied. Evaluation traffic grew fast. And because observability is mission-critical, usage couldn’t simply be reduced.
Success started to directly translate into rising platform costs.
This is where we see the market differently.
We don’t believe LLM Ops should penalize teams for building successful AI systems. If your product is working, your tooling should scale predictably, not exponentially.
So instead of anchoring pricing around raw trace volume, we anchored it around collaboration and dramatically reduced usage costs.
Contribution is where the value is for LLM Ops
We realized most AI teams have two kinds of people who want access to the LLM Ops platform: core contributors and external stakeholders. Core contributors are the ones creating experiments, debugging traces, managing prompts, and annotating data; they are part of the team. That team generally also has external stakeholders keen to see the reports, the graphs, and the business metrics.
That’s why LangWatch now charges per seat, at €29 ($34) per seat, with unlimited lite seats, so you can share results with stakeholders, leadership, and customers without worrying about extra costs.
Reflecting how complex agents have become, we also replaced trace-based with event-based pricing (each event is an operation inside an agent) and slashed our prices there: 200k free events on the growth plan, plus just $1 per 100k additional events, where others in the space typically charge $8–10 per 100k. That’s it: no hidden extra charges, very predictable pricing.
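To make the arithmetic concrete, here is a minimal Python sketch of how a monthly bill adds up under this model, using the numbers above. The `monthly_cost` helper is purely illustrative, not an official LangWatch calculator.

```python
# A minimal sketch of the new pricing arithmetic described above.
# Numbers come from this post; the function itself is illustrative.

FREE_EVENTS = 200_000   # included on the growth plan
PRICE_PER_100K = 1.0    # USD per additional 100k events
SEAT_PRICE = 34.0       # USD per core seat (€29); lite seats are free

def monthly_cost(core_seats: int, events: int) -> float:
    """Estimate a monthly bill: core seats plus metered events over the free tier."""
    billable = max(0, events - FREE_EVENTS)
    event_cost = (billable / 100_000) * PRICE_PER_100K
    return core_seats * SEAT_PRICE + event_cost

# Example: 4 core contributors processing 1.5M events per month
print(monthly_cost(4, 1_500_000))  # 4 * $34 + 13 * $1 = $149.0
```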
This new pricing also greatly simplifies our self-hosted offering, where you don’t pay for any events you process; after all, it’s your own infra. For the enterprise offering, the license requires a minimum number of seats to unlock enterprise features such as SSO and audit logs.
What this looks like in practice
A small AI team getting started: Imagine a team of 3–5 core contributors running around 100k traces per month as they iterate on prompts, run evaluations, and debug early agents. With LangWatch, that typically comes out to $80–150 per month, with unlimited lite seats so stakeholders can follow progress without friction. Under other LLM Ops vendors’ trace-based pricing, the same level of activity often exceeds $400 per month, long before the team has reached meaningful production scale, making early experimentation expensive precisely when teams should be exploring the most.
When the product starts working: As that same team scales to 1 million traces per month, which is common once an AI feature moves into production and real users begin interacting with it, LangWatch stays in the $200–300 per month range, keeping costs predictable and aligned with collaboration rather than raw volume. With trace-based models, however, pricing frequently climbs into the $600–$2,000+ range purely as a result of increased usage, even though nothing about the team or the way it collaborates has changed.
High-throughput AI systems: For agent-heavy systems processing tens of millions of traces per month, LangWatch stays in the few-thousand-dollar range thanks to low-cost event pricing, while trace-based LLM observability models often escalate into the tens or even hundreds of thousands of dollars per month at that scale, forcing teams to optimize their tooling bill instead of focusing on improving AI quality.
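For a rough sense of how these three scenarios pencil out, here is the same `monthly_cost` sketch from above applied to each one. The seat counts and the 5-events-per-trace multiplier are assumptions for illustration only; real agents can produce far more (or fewer) internal events per trace.

```python
# Rough estimates for the three scenarios above, reusing monthly_cost().
# The events-per-trace multiplier and seat counts are assumptions.
EVENTS_PER_TRACE = 5

for seats, traces in [(4, 100_000), (5, 1_000_000), (8, 30_000_000)]:
    cost = monthly_cost(seats, traces * EVENTS_PER_TRACE)
    print(f"{seats} seats, {traces:,} traces/mo -> ~${cost:,.0f}")
# -> ~$139, ~$218, ~$1,770
```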
Predictability at every stage
LLM Ops is no longer just about monitoring system health. LangWatch provides agent simulations and makes collaborating on evaluations easy by design. We’re excited that our new pricing strategy is aligned with our mission, so you can focus on improving your agents.
This creates predictability at every stage, from early prototypes to enterprise-scale agents.
Other platforms in the space continue to optimize primarily around tracing volume. That model made sense when LLM systems were simple pipelines. But modern agent systems are iterative, experimental, and collaborative by nature.
We believe the center of gravity in LLM Ops has shifted.
From monitoring → to improving AI quality.
From volume → to collaboration.
And that’s where LangWatch is built to scale.
Get started with LangWatch, and view our pricing to see how this would fit your company.

