Event Sourcing Engine: Fast is the only “mode”
3.0.0 · Apr 28, 2026
Overview
For the past couple of months we've been shovelling coal in the LangWatch engine room. We ripped out BullMQ, rebuilt the data pipeline on a purpose-built event sourcing architecture, and came out the other side with a system that processes traces, evaluations, and agent simulations as immutable events — folded, projected, and queryable the moment they land.
Everything is fast. Everything is accurate. Some platforms make you choose between the two: toggle a "fast mode" that skips data so the numbers load quicker, trading correctness for speed. We don't have a fast mode. We don't need one. When the architecture is right, that's not a trade-off you have to make.
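The core idea is simple: every trace, evaluation, and simulation event is stored as an immutable fact, and the state you query is a left-fold over that stream. This is a minimal illustrative sketch of the pattern, not LangWatch's actual code; the event kinds and counters are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    """An immutable fact: recorded once, never mutated afterwards."""
    seq: int                 # position in the stream
    kind: str                # e.g. "span_started", "eval_completed" (illustrative names)
    payload: dict = field(default_factory=dict)

def fold(events, state=None):
    """Left-fold the event stream into a queryable projection (current state)."""
    state = state or {"spans": 0, "evals": 0}
    for e in events:
        if e.kind == "span_started":
            state["spans"] += 1
        elif e.kind == "eval_completed":
            state["evals"] += 1
    return state

stream = [Event(1, "span_started"), Event(2, "eval_completed"), Event(3, "span_started")]
print(fold(stream))  # {'spans': 2, 'evals': 1}
```

Because the fold is deterministic and the events never change, the projection is queryable the instant the last event lands; there is no batch step to wait for.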
What You Can Do Now
Speed
Traces, evaluations, and experiment results show up the moment they land. No batching window, and no "refresh in 30 seconds."
Dashboards and analytics queries are faster across the board.
Real-time simulations
Scenario runs start faster and don't stall. Events stream in as they happen — you can watch a simulation unfold live.
Out-of-order events are automatically re-folded into the correct sequence. The engine handles it, you don't think about it.
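Re-folding out-of-order events works because immutability plus a deterministic fold means replay order can always be repaired. A toy sketch of the idea (hypothetical event shape, not the engine's real data model):

```python
def refold(events, fold_fn, initial):
    """Replay events in sequence order regardless of arrival order.

    Since events are immutable and the fold is deterministic, sorting by
    sequence number and re-folding reproduces the correct state exactly.
    """
    state = initial
    for e in sorted(events, key=lambda e: e["seq"]):
        state = fold_fn(state, e)
    return state

# Events arrived over the network out of order...
arrived = [{"seq": 3, "msg": "c"}, {"seq": 1, "msg": "a"}, {"seq": 2, "msg": "b"}]

# ...but the re-fold restores the intended sequence.
log = refold(arrived, lambda s, e: s + [e["msg"]], [])
print(log)  # ['a', 'b', 'c']
```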
Improved stability
Transient backend errors no longer block your data from processing; the pipeline recovers from them automatically.
Trace boundaries are properly scoped, so large agent runs don't produce unbounded traces that slow down your views.
Full pipeline observability on our side means we catch issues before they affect you.
Shipping faster
This architecture also raises our own shipping velocity. Every new capability we build is a new projection over the same event stream, which means features land faster and don't break existing ones.
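"New feature = new projection" is why additions are cheap: each feature is just another independent read over the same immutable stream. A hedged sketch with made-up event kinds and projection names, purely to show the shape:

```python
events = [
    {"kind": "trace_received", "latency_ms": 120},
    {"kind": "eval_completed", "passed": True},
    {"kind": "trace_received", "latency_ms": 340},
    {"kind": "eval_completed", "passed": False},
]

def trace_count(events):
    """An existing feature: a dashboard counter over the stream."""
    return sum(1 for e in events if e["kind"] == "trace_received")

def eval_pass_rate(events):
    """A 'new' feature: just another fold over the same events.
    Adding it touches nothing the first projection depends on."""
    evals = [e for e in events if e["kind"] == "eval_completed"]
    return sum(e["passed"] for e in evals) / len(evals)

print(trace_count(events), eval_pass_rate(events))  # 2 0.5
```

Because projections never write back to the stream, a bug in a new one can't corrupt the data the old ones read.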
Rollout
Live now. All LangWatch Cloud users are already on the new engine.
Notes
Every event is processed with full fidelity. There is no lossy fast mode, no sampling, no approximation.
Self-hosted users on 3.0 get the new engine automatically.