How do you capture the quality of a support interaction?
It’s a nuanced question. A short chat can still feel draining, while a longer, step-by-step exchange might actually reduce a customer’s effort if it guides them well or teaches them something new.
At Markprompt, we’re tackling this problem in multiple ways. One of them is using a Customer Effort Score (CES), powered by LLMs.
Pre-LLM, measuring effort demanded either manual review or crude metric combinations (a caricature of such a rubric is sketched after this list). Both approaches have obvious pitfalls:
- Fragmented analysis: Counting messages or flagging sentiment often misses the full story.
- Rigid models: Fixed scoring rubrics don’t adapt well to evolving customer behavior and can’t capture how different signals interact.
- Lack of “Why?”: Traditional scores rarely explain why an interaction felt easy or difficult, especially when the drivers are qualitative and intertwined.
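To see why these combinations fall short, here is a sketch of a fixed-rubric scorer of the kind described above. The weights and thresholds are made up for the example, and the sentiment flag is assumed to come from a separate classifier:

```ts
// A caricature of the pre-LLM approach: a fixed formula over surface metrics.
// All weights and thresholds below are illustrative, not a real rubric.
interface Message {
  role: 'customer' | 'agent';
  text: string;
  negativeSentiment: boolean; // assumed to come from a separate sentiment classifier
}

function heuristicEffortScore(transcript: Message[]): number {
  const customerMessages = transcript.filter((m) => m.role === 'customer');
  const negativeCount = customerMessages.filter((m) => m.negativeSentiment).length;

  // More back-and-forth and more negative messages count as "higher effort".
  // The rubric is rigid: it cannot tell a tedious loop from a helpful walkthrough.
  const lengthPenalty = Math.min(customerMessages.length / 10, 1);
  const sentimentPenalty = customerMessages.length
    ? negativeCount / customerMessages.length
    : 0;

  // Score from 1 (low effort) to 7 (high effort), CES-style.
  return 1 + 6 * (0.5 * lengthPenalty + 0.5 * sentimentPenalty);
}
```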
In contrast, using LLMs offers a much richer, more detailed perspective (a code sketch follows this list):
- Contextual understanding: Our LLMs read entire transcripts, evaluating multiple aspects—tone, complexity of the problem, time to the “aha” moment, and so on.
- Adaptive: The scoring logic can be tailored to the specific factors that matter to you and ignore those that don’t.
- Explanatory feedback: Instead of a single score, our LLMs also explain why they gave a particular assessment, highlighting, for instance, the main friction points that led to a high effort score. When tracked at scale, these insights become a clear roadmap for product improvements.
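To make this concrete, here is a minimal sketch of what LLM-based effort scoring can look like, assuming the OpenAI Node SDK and a JSON response format. The prompt wording, model name, and output schema are illustrative assumptions, not our production setup:

```ts
// Minimal sketch of LLM-based effort scoring. Prompt, model name, and schema
// are illustrative assumptions, not Markprompt's actual implementation.
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

interface EffortAssessment {
  score: number; // 1 (low effort) to 7 (high effort)
  frictionPoints: string[]; // why the interaction felt easy or hard
  summary: string;
}

async function scoreEffort(transcript: string): Promise<EffortAssessment> {
  const completion = await client.chat.completions.create({
    model: 'gpt-4o', // any capable chat model works here
    response_format: { type: 'json_object' },
    messages: [
      {
        role: 'system',
        content:
          'You rate customer effort in support transcripts. Consider tone, ' +
          'problem complexity, time to resolution, and how much work the ' +
          'customer had to do. Respond with JSON: {"score": 1-7, ' +
          '"frictionPoints": string[], "summary": string}.',
      },
      { role: 'user', content: transcript },
    ],
  });

  return JSON.parse(completion.choices[0].message.content ?? '{}');
}
```

Run at scale, the `frictionPoints` field is what turns individual scores into the roadmap of product improvements described above.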
In short, that’s how we as humans would look at it—if we only had the resources!