Understand your AI’s certainty in its responses with Galileo’s model confidence metrics
| Name | Description | When to Use | Example Use Case |
|---|---|---|---|
| Uncertainty | Measures the model's confidence in its generated response. | When you want to understand how certain the model is about its answers. | Flagging responses where the model is unsure, so a human can review them before they reach a user. |
| Prompt Perplexity | Evaluates how difficult or unusual the prompt is for the model to process. | When you want to identify prompts that may confuse the model or lead to lower-quality responses. | Detecting outlier prompts in a customer support chatbot to improve prompt engineering. |
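Both metrics above are rooted in token-level log-probabilities, which many LLM APIs return alongside generated text. As a conceptual illustration (not Galileo's actual implementation), the sketch below computes perplexity as the exponential of the mean negative log-probability, and a simple uncertainty score as the average probability mass the model did *not* assign to its chosen tokens. The function names and sample values are hypothetical:

```python
import math

def perplexity(token_logprobs):
    """Perplexity of a sequence given per-token log-probabilities (natural log).

    Lower perplexity means the text was more predictable to the model.
    """
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def mean_uncertainty(token_logprobs):
    """Average of 1 - p(token) over the sequence.

    Higher values mean the model was less confident in the tokens it chose.
    """
    return sum(1 - math.exp(lp) for lp in token_logprobs) / len(token_logprobs)

# Hypothetical log-probs, shaped like what an LLM API might return
confident_response = [-0.05, -0.10, -0.02, -0.08]   # model was sure of each token
unsure_response = [-1.20, -2.50, -0.90, -3.10]      # model hedged on each token

# A confident generation scores lower on both metrics
assert perplexity(confident_response) < perplexity(unsure_response)
assert mean_uncertainty(confident_response) < mean_uncertainty(unsure_response)
```

In practice you would apply a threshold to scores like these, routing high-uncertainty or high-perplexity cases to human review rather than straight to the user.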