Experiment

```python
import langwatch

# Load a dataset from LangWatch and work with it as a pandas DataFrame
df = langwatch.datasets.get_dataset("dataset-id").to_pandas()

experiment = langwatch.experiment.init("my-experiment")

for index, row in experiment.loop(df.iterrows()):
    # your execution code here, producing the `output` SQL query

    experiment.evaluate(
        "ragas/sql_query_equivalence",
        index=index,
        data={
            "output": output,
            "expected_output": row["expected_output"],
            "expected_contexts": row["expected_contexts"],
        },
        settings={},
    )
```
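The `# your execution code here` step is where your own system produces the SQL query to evaluate. A minimal sketch of a filled-in loop, reusing `df` and `experiment` from the snippet above and assuming a hypothetical `generate_sql` helper and a `question` column in the dataset (neither is part of the LangWatch SDK):

```python
def generate_sql(question: str) -> str:
    # Replace this stub with a call to your own model or agent.
    return "SELECT COUNT(*) FROM orders WHERE status = 'shipped'"

for index, row in experiment.loop(df.iterrows()):
    # Generate the candidate SQL query for this dataset row
    output = generate_sql(row["question"])

    experiment.evaluate(
        "ragas/sql_query_equivalence",
        index=index,
        data={
            "output": output,
            "expected_output": row["expected_output"],
            "expected_contexts": row["expected_contexts"],
        },
        settings={},
    )
```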
[ { "status": "processed", "score": 123, "passed": true, "label": "<string>", "details": "<string>", "cost": { "currency": "<string>", "amount": 123 } } ]
Checks whether the SQL query is equivalent to a reference query by using an LLM to infer whether the two would produce the same results given the table schemas.
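For illustration, a hedged example of the `data` payload this evaluator reasons over: two queries that are written differently but should return the same rows, with the relevant table schema passed via `expected_contexts`. The queries and schema below are made up, not part of the API:

```python
data = {
    # Candidate query produced by your system
    "output": "SELECT name, email FROM users WHERE active = 1",
    # Reference query from the dataset
    "expected_output": "SELECT email, name FROM users WHERE active = true",
    # Table schemas the LLM uses to judge equivalence
    "expected_contexts": [
        "CREATE TABLE users (id INT, name TEXT, email TEXT, active BOOLEAN)"
    ],
}
```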
- API key for authentication
- `output`: The output/response text to evaluate
- `expected_output`: The expected output for comparison
- `expected_contexts`: The expected contexts for comparison
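The evaluator can also be called over the REST API directly. A minimal sketch using `requests`; the endpoint path and the `X-Auth-Token` header name are assumptions here, so verify both against the API reference:

```python
import os
import requests

# Assumed endpoint path for this evaluator; check the API reference
url = "https://app.langwatch.ai/api/evaluations/ragas/sql_query_equivalence/evaluate"

response = requests.post(
    url,
    headers={"X-Auth-Token": os.environ["LANGWATCH_API_KEY"]},
    json={
        "data": {
            "output": "SELECT name FROM users WHERE active = 1",
            "expected_output": "SELECT name FROM users WHERE active = true",
            "expected_contexts": [
                "CREATE TABLE users (id INT, name TEXT, active BOOLEAN)"
            ],
        },
        "settings": {},
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```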
Successful evaluation

- `status`: One of `processed`, `skipped`, or `error`
- `score`: Numeric score from the evaluation
- `passed`: Whether the evaluation passed
- `label`: Label assigned by the evaluation
- `details`: Additional details about the evaluation
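A small sketch of reading these fields from a parsed response, assuming it is the list of result objects shown in the example above; the handling logic itself is illustrative, not prescribed by the API:

```python
from typing import Any

def summarize_results(results: list[dict[str, Any]]) -> None:
    # Each entry mirrors the response fields documented above
    for result in results:
        if result["status"] != "processed":
            # "skipped" or "error" results carry no usable score
            print(f"Evaluation not processed: {result['status']}")
            continue
        print(
            f"score={result['score']} passed={result['passed']} "
            f"label={result.get('label')} details={result.get('details')}"
        )
```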