Identify risks, harmful content, and compliance issues in your AI with Galileo’s safety and compliance metrics
Name | Description | When to Use | Example Use Case |
---|---|---|---|
PII / CPNI / PHI | Identifies personally identifiable or sensitive information in prompts and responses. | When handling potentially sensitive data or in regulated industries. | A healthcare chatbot that must detect and redact patient information in conversation logs. |
Prompt Injection | Detects attempts to manipulate the model through malicious prompts. | When allowing user input to be processed directly by your AI system. | A public-facing AI assistant that needs protection from users trying to bypass content filters or extract sensitive information. |
Sexism / Bias | Detects gender-based bias or discriminatory content. | When ensuring AI outputs are free from bias and discrimination. | A resume screening assistant that must evaluate job candidates without gender or demographic bias. |
Toxicity | Identifies harmful, offensive, or inappropriate content. | When monitoring AI outputs for harmful content or implementing content filtering. | A social media content moderation system that must detect and flag potentially harmful user-generated content. |