This evaluator uses OpenAI’s moderation API to detect potentially harmful content in text,
including harassment, hate speech, self-harm, sexual content, and violence.
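As a rough illustration, a moderation-based check typically inspects the per-category scores returned by the API and flags any that cross a threshold. The sketch below shows that interpretation step with invented sample scores shaped like the API's `category_scores` field; the helper name and threshold are assumptions, not part of this evaluator's actual implementation. In a real call, the scores would come from `client.moderations.create(model="omni-moderation-latest", input=text)` using the official `openai` Python client.

```python
# Hedged sketch: interpreting moderation category scores.
# The category names mirror those used by OpenAI's moderation API;
# the sample score values below are invented for illustration.

def flagged_categories(category_scores: dict, threshold: float = 0.5) -> list:
    """Return the categories whose score meets or exceeds the threshold."""
    return sorted(c for c, s in category_scores.items() if s >= threshold)

# Example scores shaped like the API's `category_scores` field (values invented):
sample = {
    "harassment": 0.91,
    "hate": 0.12,
    "self-harm": 0.02,
    "sexual": 0.01,
    "violence": 0.64,
}

print(flagged_categories(sample))  # ['harassment', 'violence']
```

Thresholding per category (rather than relying only on the API's overall `flagged` boolean) lets an evaluator tune sensitivity independently for each harm type.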