Reducing Hesitation and Uncertainty
Learn how to reduce hesitation and uncertainty in your AI models.
Some models struggle to confidently generate responses, leading to hesitation, incomplete answers, or repeated disclaimers.
For example, consider the kind of response a vague, open-ended prompt about climate change can produce:
Model Response: “Well, there are many aspects to climate change. Some people think it’s caused by humans, and others think it’s just natural. It’s hard to say exactly.”
What Went Wrong?
- The prompt did not provide enough context for confident decision-making
- The sampling settings allowed too much randomness in token selection
- The prompt was ambiguous about the kind of response it expected
How It Showed Up in Metrics:
- High Uncertainty: The model hesitated in its response
- High Prompt Perplexity: The model struggled with predicting the next token
- Mid-range Instruction Adherence: The model understood the instructions but lacked decisiveness
Improvements and Solutions
The following improvements show how we could change a simple prompt script like the example below:
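As a concrete reference point, here is a minimal sketch of such a script. It assumes the Hugging Face transformers library; the model name and prompt wording are illustrative placeholders rather than a prescribed setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model; substitute whichever model you are evaluating.
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A vague, open-ended prompt of the kind that tends to produce hedging.
prompt = "What do you think about climate change?"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```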
Provide Stronger Context in Prompts
Include explicit guiding statements, for example:
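Against the baseline sketch above, the change might look like this; the revised wording is illustrative, not a fixed template:

```python
# Before: vague, open-ended prompt from the baseline script.
prompt = "What do you think about climate change?"

# After: explicit guiding statements that pin down role, scope, and format.
prompt = (
    "You are a climate science communicator. Based on the current scientific "
    "consensus, explain the primary causes of climate change in 3-4 sentences. "
    "Answer directly and do not include hedging language or disclaimers."
)
```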
This should reduce the uncertainty and perplexity in your metrics on Galileo.
Adjust Model Sampling Parameters
Lower temperature to make the model more deterministic, for example:
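Assuming the transformers-based baseline sketch above, a lower temperature could be passed to generate() like this (0.2 is an illustrative value, not a universal recommendation):

```python
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.2,  # Lower temperature sharpens the token distribution.
)
```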
Use top-k sampling to limit options and prevent hesitation, for example:
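In the same sketch, top-k sampling restricts each step to the k most likely tokens; 20 is an illustrative value:

```python
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.2,
    top_k=20,  # Only the 20 most likely tokens are considered at each step.
)
```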
Lowering the temperature and decreasing top_k both generally increase prompt adherence.
Modify Prompt Structure
Use direct phrasing to force a single, clear response, for example:
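One way this might look against the baseline sketch, with the wording again illustrative:

```python
# Direct phrasing that asks for a single, definitive answer.
prompt = (
    "In one short paragraph, state the main driver of modern climate change "
    "and the key evidence for it. Give a single, clear answer; do not list "
    "competing viewpoints or add disclaimers."
)
```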
Apply Uncertainty-Based Filtering
Automatically reject responses with an Uncertainty score above a set threshold, as in the sketch below.
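A minimal sketch of such a filter, assuming your evaluation pipeline already produces an uncertainty score per response (for example, the Uncertainty metric reported by Galileo); the 0.6 threshold and the function name are hypothetical:

```python
from typing import Optional

UNCERTAINTY_THRESHOLD = 0.6  # Hypothetical cutoff; tune it against your own data.

def accept_response(response_text: str, uncertainty_score: float) -> Optional[str]:
    """Return the response if it is confident enough, otherwise None.

    `uncertainty_score` is assumed to come from your evaluation pipeline,
    e.g. the Uncertainty metric computed for this response.
    """
    if uncertainty_score > UNCERTAINTY_THRESHOLD:
        # Reject: the model hesitated too much; regenerate or route to a fallback.
        return None
    return response_text
```

In practice you would call something like `accept_response(text, score)` after generation and regenerate or escalate whenever it returns None.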