Ship reliable, testable agents – not guesses. Better Agents adds simulations, evaluations, and standards on top of any framework. Explore Better Agents
Python
Experiment
import langwatchdf = langwatch.datasets.get_dataset("dataset-id").to_pandas()experiment = langwatch.experiment.init("my-experiment")for index, row in experiment.loop(df.iterrows()): # your execution code here experiment.evaluate( "azure/prompt_injection", index=index, data={ "input": row["input"], "contexts": row["contexts"], }, settings={} )
[ { "status": "processed", "score": 123, "passed": true, "label": "<string>", "details": "<string>", "cost": { "currency": "<string>", "amount": 123 } } ]
This evaluator checks for prompt injection attempt in the input and the contexts using Azure’s Content Safety API.
API key for authentication
The input text to evaluate
Array of context strings used for RAG evaluation
Successful evaluation
processed
skipped
error
Numeric score from the evaluation
Whether the evaluation passed
Label assigned by the evaluation
Additional details about the evaluation
Show child attributes
Was this page helpful?