Conversation Quality is a binary metric that assesses whether a chatbot interaction left the user feeling satisfied and positive or frustrated and dissatisfied, based on tone, engagement, and overall experience.
## Conversation Quality at a glance
| Property | Description |
|---|---|
| Name | Conversation Quality |
| Category | Agentic AI |
| Can be applied to | Session |
| LLM-as-a-judge Support | ✅ |
| Luna Support | ❌ |
| Protect Runtime Protection | ❌ |
| Value Type | Boolean shown as a percentage confidence score |
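The metric is boolean per session but reported as a percentage across sessions. A minimal sketch of that aggregation, where the function names and the band cutoffs below 80% are illustrative assumptions (only the 80%-100% expected range is documented):

```python
def conversation_quality_score(session_judgments: list[bool]) -> float:
    """Aggregate per-session boolean judgments into a percentage.

    Each entry is a judge verdict: True if the session left the user
    satisfied and positive, False if it left them frustrated.
    """
    if not session_judgments:
        raise ValueError("no sessions to score")
    return 100.0 * sum(session_judgments) / len(session_judgments)


def quality_band(score: float) -> str:
    # Band cutoffs are illustrative assumptions, not documented thresholds.
    if score >= 80.0:
        return "Excellent"
    if score >= 60.0:
        return "Fair"
    return "Poor"


score = conversation_quality_score([True, True, True, False, True])
print(f"{score:.0f}% -> {quality_band(score)}")  # 80% -> Excellent
```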
## When to use this metric
- **Sentiment-critical chat tools:** Evaluating chatbot performance where capturing user sentiment matters most, such as customer support or counseling applications.
- **Customer satisfaction:** Monitoring and improving overall user satisfaction in conversational AI systems.
- **User experience quality:** Comparing different models or system versions based on experience quality rather than task completion.
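Because the metric is judged by an LLM (see the at-a-glance table), a session transcript must be framed as a yes/no quality question. The sketch below is a hypothetical illustration of such framing; the prompt wording and helper name are assumptions, not the product's actual judge implementation:

```python
def build_quality_judge_prompt(transcript: list[tuple[str, str]]) -> str:
    """Format a chat session for a yes/no quality judgment.

    transcript: (role, message) pairs, e.g. ("user", "Hi").
    The instruction wording is a hypothetical example, not the
    metric's actual judge prompt.
    """
    rendered = "\n".join(f"{role.upper()}: {msg}" for role, msg in transcript)
    return (
        "Review the chat session below. Based on tone, engagement, and "
        "overall experience, did the user end the session satisfied and "
        "positive rather than frustrated? Answer YES or NO.\n\n"
        + rendered
    )


prompt = build_quality_judge_prompt([
    ("user", "My order never arrived."),
    ("assistant", "I'm sorry about that. I've issued a refund."),
    ("user", "Thanks, that was fast!"),
])
print(prompt)
```

The returned string would then be sent to whichever judge model the evaluation pipeline uses.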
## Score interpretation

Expected score: 80%-100%.

- **Poor:** Many conversations indicate frustration, impatience, or dissatisfaction directed at the bot.
- **Fair:** Conversations fall between these two extremes, with mixed user sentiment.
- **Excellent:** Most conversations reflect positive user sentiment, polite engagement, and satisfaction.

## How to improve Conversation Quality scores
Some techniques to improve Conversation Quality scores are:

- Ensure bots provide clear, empathetic, and concise responses
- Detect and mitigate repeated clarification loops
- Train models to de-escalate external frustration effectively
- Log complete sessions to allow accurate tone assessment
Common causes of inaccurate scores include:

- Mislabeling external frustration as bot-directed
- Incomplete session logs
- Abrupt session truncation
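Two of the issues above, repeated clarification loops and abrupt session truncation, can be surfaced with simple heuristics before they drag the score down. This is a minimal sketch in which the phrase list and loop threshold are illustrative assumptions:

```python
# Heuristic checks for two failure modes noted above. The clarification
# phrases and the loop threshold are illustrative assumptions.
CLARIFICATION_PHRASES = (
    "could you clarify",
    "can you rephrase",
    "i didn't understand",
    "what do you mean",
)


def has_clarification_loop(bot_messages: list[str], threshold: int = 3) -> bool:
    """Flag sessions where the bot repeatedly asks the user to clarify."""
    count = sum(
        any(p in msg.lower() for p in CLARIFICATION_PHRASES)
        for msg in bot_messages
    )
    return count >= threshold


def is_abruptly_truncated(transcript: list[tuple[str, str]]) -> bool:
    """Flag logs that end on an unanswered user message."""
    return bool(transcript) and transcript[-1][0] == "user"


session = [
    ("user", "It still doesn't work."),
    ("assistant", "Could you clarify what error you see?"),
    ("user", "The same one!"),
]
bot_msgs = [m for role, m in session if role == "assistant"]
print(has_clarification_loop(bot_msgs))  # False: one request is below the threshold
print(is_abruptly_truncated(session))   # True: log ends on a user message
```

Flagged sessions can then be excluded from scoring or routed for manual review, so that logging gaps are not misread as user dissatisfaction.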