LangWatch Cloud
Self-Managed
Developer
Free
Get started with AI Agent monitoring, evaluation & agent simulations
All platform features
50,000 logs per month
14 days data access
2 users
3 scenarios, 3 simulations & 3 custom evaluations
Community Support (GitHub & Discord)
Growth
/core-seat/month
Evals, prompts, and agents in one place. CI/CD for engineers, collaboration for PMs.
All platform features
Everything in Developer
200,000 events included
+ €1 per 100k extra events
30 days data retention included
+ custom retention (€3/GB)
Unlimited lite-users
Unlimited eval scores, simulations & prompts
20 users included (volume discount available above 20)
Private Slack / Teams support
Enterprise / Regulated
Premium support with on-prem or hosted deployment for high-volume or privacy-sensitive data.
Custom
Alternative hosting options: hybrid, self-hosted, on-prem
Custom data retention
Custom SSO / RBAC
Audit logs
Uptime & Support SLA
ISO27001 reports, InfoSec/legal reviews
Custom Terms, DPA
Forward Deployed Engineer
Billing via AWS, Google, Azure Marketplace

"Our partnership with LangWatch enables us to integrate their powerful product into our GenAI solutions, allowing us to deliver safe, traceable, and optimized LLM-based products to our clients."
Rene Wilbers, Lead AI, Adesso
Plans: Developer / Growth / Enterprise
Observability
Trace & agent graph debugging
Thread tracking (conversations/sessions/users)
Multi-agent graphs
Cost and Token Tracking
Integrations with any framework
SDKs (Python, TypeScript)
OpenTelemetry (Java, Go, custom)
Custom metrics / dashboards
Included Usage: 50,000 logs / 200,000 logs / Custom
Additional Usage: not available / pay as you go (€0.0001 per event) / volume discount
Retention: 14 days / 30 days / Custom
Storage: GB included
Evaluations
Agent Simulations
Offline Evaluation (CI/CD, notebooks, and workflow experimentation)
Online, real-time evaluations
Evaluations via UI / platform
Write scenarios in code or on the platform
Custom Experiments (via SDK)
External Evaluation Pipelines
LLM-as-judge evals, code evals, session evals
Build your own Eval
Development
Prompt management, version control
Auto-building datasets (from real-time trace filters and automated LLM evaluations)
Replay prompt trace in playground
Multi-prompt comparison
Prompt learning (PL) optimization (DSPy)
Trace search
Prompt caching, playground views
LangWatch Safeguards
Jailbreaking / Prompt Injection
Business-sensitive evaluation
PII detection and auto-redaction
Competitor blocklist, off-topic evaluation
Content Moderation
Custom Guardrails
Learn: Monitor & Analyze
User-Analytics, Topic Detection, Sentiment Analysis, Feedback
Build custom graphs on any metric available in the platform
Functional KPI tracking, letting stakeholders visualize performance metrics in real time
Trend analysis and performance benchmarking
Detailed cost tracking, including per-request costs and overall operational expenses
Collaboration
Projects: 1 / Unlimited / Unlimited
Users: 2 / 20 (volume discount after) / Custom
API
Extensive Public API
Support
Community (GitHub, Discord)
Chat & Email
Private Slack/Teams Channel
Dedicated Solution Engineer
Architectural Guidance
Security & Compliance
Support SLA
SSO via Google, AzureAD, GitHub (Microsoft)
RBAC (Organisation, project, teams)
Enterprise SSO (Okta, AzureAD/EntraID)
SSO Enforcement
Data Retention Management
Audit Logs
Data Region: EU / EU / EU, US, CA, APAC
Payment Methods: Credit card / Credit card / Credit card, Invoice
Contract Duration: Monthly / Monthly or Yearly / Custom
Billing via AWS or Azure Marketplace
Contracts: Standard T&Cs (all plans)
GDPR
ISO27001 Reports
InfoSec / Legal Review


GDPR Compliance
Role-based access controls
Use custom models & integrate via API
FAQ