Measure and analyze model confidence in AI outputs using Galileo’s Uncertainty Metric to identify potential hallucinations and improve response quality.
Uncertainty measures how much a model is deciding randomly between multiple ways of continuing the output, indicating the model’s confidence level in its responses.
Uncertainty is measured at both the token level (the model's confidence in each generated token) and the response level (the highest token-level uncertainty across the response).
Higher uncertainty scores indicate the model is less certain about its output, which often correlates with hallucinations and lower-quality responses.
Uncertainty is calculated using log probabilities from the model:
Token Analysis
For each token in the sequence, the model calculates its confidence in predicting that token based on all preceding tokens in the context.
Response Aggregation
The system identifies the highest uncertainty value across all tokens in the response to determine the overall response-level uncertainty.
Model Integration
The calculation leverages log probabilities from OpenAI’s Davinci models or Chat Completion models, available through both OpenAI and Azure platforms.
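As a concrete illustration of these steps, the sketch below requests per-token log probabilities from the OpenAI Chat Completions API and aggregates them by taking the maximum per-token uncertainty, mirroring the response-level aggregation described above. The 1 - exp(logprob) conversion is an illustrative assumption, not Galileo's exact token-level formula.

```python
# Sketch: derive a response-level uncertainty score from token log probabilities.
# Assumptions: the 1 - exp(logprob) token formula is illustrative only; Galileo's
# internal token-level calculation may differ. Requires the openai package (v1+).
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Who wrote the Epic of Gilgamesh?"}],
    logprobs=True,  # return per-token log probabilities with the response
)

# Token analysis: one log probability per generated token, conditioned on all
# preceding tokens in the context.
token_logprobs = completion.choices[0].logprobs.content
token_uncertainties = [1.0 - math.exp(t.logprob) for t in token_logprobs]

# Response aggregation: the highest token-level uncertainty becomes the
# response-level uncertainty.
response_uncertainty = max(token_uncertainties)
print(f"Response-level uncertainty: {response_uncertainty:.3f}")
```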
Uncertainty can only be calculated with LLM integrations that provide log probabilities:
OpenAI: pq.run(...), using the chosen model davinci-001
Azure OpenAI: pq.run(...), using the chosen model text-davinci-003 or text-curie-001, if available in your Azure deployment

To calculate the Uncertainty metric, we require having text-curie-001 or text-davinci-003 models available in your Azure environment to fetch log probabilities. For Galileo’s Guardrail metrics that rely on GPT calls (Factuality and Groundedness), we require using 0613 or above versions of gpt-3.5-turbo.
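For reference, this is roughly how a prompt run with the Uncertainty scorer enabled might look through the promptquality client. Only pq.run(...) itself appears on this page; the argument names (project_name, template, dataset, settings, scorers) and the Settings/Scorers helpers are assumptions based on typical promptquality usage, so check the signature in your installed version.

```python
# Minimal sketch, assuming a promptquality client configured for your Galileo
# deployment. Argument names and helper classes below are assumptions.
import promptquality as pq

pq.login("console.your-galileo-deployment.com")  # hypothetical console URL

template = "Answer the question concisely.\n\nQuestion: {question}"
dataset = {"question": ["Who wrote the Epic of Gilgamesh?"]}

pq.run(
    project_name="uncertainty-demo",
    template=template,
    dataset=dataset,
    settings=pq.Settings(model_alias="text-davinci-003"),  # a logprob-capable model
    scorers=[pq.Scorers.uncertainty],
)
```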
When responses show high uncertainty scores, your model is likely struggling with the content. To improve your system:
Identify uncertainty patterns: Analyze where uncertainty spikes occur within responses.
Enhance knowledge sources: Provide better context or retrieval results for topics with high uncertainty.
Refine prompts: Add more specific instructions or constraints for areas where the model shows uncertainty.
Consider model selection: Some models may be more confident in specific domains.
Track tokens and phrases that consistently trigger high uncertainty to identify knowledge gaps.
Set uncertainty thresholds to flag or reject responses that exceed acceptable uncertainty levels (a short sketch follows this list).
Evaluate how different models perform on the same inputs to identify which ones have lower uncertainty in your domain.
Use Uncertainty alongside Correctness metrics to identify correlations between model confidence and factual accuracy.
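Building on the thresholding practice above, here is a small sketch of gating scored responses on a maximum acceptable uncertainty. The 0.7 cutoff and the scored-response records are illustrative assumptions; tune the threshold by reviewing flagged examples in your own domain.

```python
# Sketch: flag responses whose uncertainty score exceeds a chosen threshold.
# The threshold value and the response records below are illustrative assumptions.
UNCERTAINTY_THRESHOLD = 0.7

scored_responses = [
    {"text": "Gilgamesh was a king of Uruk.", "uncertainty": 0.12},
    {"text": "The epic was first printed in 1492.", "uncertainty": 0.85},
]

for response in scored_responses:
    if response["uncertainty"] > UNCERTAINTY_THRESHOLD:
        # Route to a fallback: regenerate, ask for clarification, or send to review.
        print(f"FLAGGED (uncertainty={response['uncertainty']:.2f}): {response['text']}")
    else:
        print(f"OK      (uncertainty={response['uncertainty']:.2f}): {response['text']}")
```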
When analyzing Uncertainty, remember that some level of uncertainty is normal and even desirable in certain contexts. Very low uncertainty might indicate the model is being overly deterministic or repeating memorized patterns rather than reasoning about the content.