Get started with LangWatch Skills in seconds: Set up evals, scenario tests, and tracing just by asking your AI coding assistant.
Example response:

```json
[
  {
    "status": "processed",
    "score": 123,
    "passed": true,
    "label": "<string>",
    "details": "<string>",
    "cost": {
      "currency": "<string>",
      "amount": 123
    }
  }
]
```
Evaluates how pertinent the generated answer is to the given prompt. Higher scores indicate better relevancy.
Request parameters:
- API key for authentication
- Optional trace ID to associate this evaluation with a trace
Response (successful evaluation):
- `status`: one of `processed`, `skipped`, or `error`
- `score`: numeric score from the evaluation
- `passed`: whether the evaluation passed
- `label`: label assigned by the evaluation
- `details`: additional details about the evaluation
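Given the response shape above, a client might reduce each result to a one-line summary. The sample values below are illustrative, not real API output:

```python
import json

# Illustrative response body matching the documented shape.
SAMPLE_RESPONSE = json.dumps([{
    "status": "processed",
    "score": 0.92,
    "passed": True,
    "label": "relevant",
    "details": "The answer directly addresses the prompt.",
    "cost": {"currency": "USD", "amount": 0.0003},
}])

def summarize(response_body: str) -> list[str]:
    """Turn each evaluation result into a short status line."""
    lines = []
    for result in json.loads(response_body):
        if result["status"] == "processed":
            lines.append(f"score={result['score']} passed={result['passed']}")
        else:
            # "skipped" and "error" results carry the reason in `details`.
            lines.append(f"{result['status']}: {result.get('details', '')}")
    return lines

print(summarize(SAMPLE_RESPONSE))  # → ['score=0.92 passed=True']
```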