This evaluator uses OpenAI’s moderation API to detect potentially harmful content in text,
including harassment, hate speech, self-harm, sexual content, and violence.
POST /openai/moderation/evaluate
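To run this evaluator over a dataset with the LangWatch Python SDK: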
```python
import langwatch

# Load your dataset from LangWatch as a pandas DataFrame
df = langwatch.datasets.get_dataset("dataset-id").to_pandas()

experiment = langwatch.experiment.init("my-experiment")

for index, row in experiment.loop(df.iterrows()):
    # your execution code here, producing `output` from row["input"]

    experiment.evaluate(
        "openai/moderation",
        index=index,
        data={
            "input": row["input"],
            "output": output,
        },
        settings={},
    )
```
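If you prefer to call the endpoint over HTTP instead of using the SDK, a minimal sketch follows. The base URL, the `X-Auth-Token` header, and the exact request body shape are assumptions inferred from the endpoint and SDK call above; check the LangWatch API reference for the authoritative values.

```python
# Minimal sketch of calling the evaluator endpoint directly over HTTP.
# Assumptions: the base URL, the X-Auth-Token auth header, and the
# request body shape are not confirmed by this page.
import os

import requests

response = requests.post(
    "https://app.langwatch.ai/api/evaluations/openai/moderation/evaluate",
    headers={"X-Auth-Token": os.environ["LANGWATCH_API_KEY"]},
    json={
        "data": {
            "input": "User message to check",
            "output": "Model response to check",
        },
        "settings": {},
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # expected to contain the moderation result
```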