When Agents Choose Who Gets to Rate Them: The Hidden Problem with Post-Call Surveys

When agents get to decide which customers are invited to post-call surveys, your CSAT score can look sky-high while your real customer satisfaction quietly slips through the cracks.

MosaicVoice Team
Customer satisfaction scores (CSAT) are supposed to tell you how happy your customers are. Most contact centers rely on them heavily to measure performance, track trends, and reward good work. But what if the scores you’re looking at don’t actually reflect what your customers think?

At MosaicVoice, we’ve seen this problem come up again and again, especially when agents are expected to manually transfer calls to a telephonic post-call survey. On paper, it seems harmless. In practice, it opens the door to a lot of bias.

The Problem: Agents Decide Who Gets to Give Feedback

When agents have to manually start the survey, they get to choose who’s invited. If the call went smoothly, they’re more likely to transfer the customer. If the call was tough, they often won’t.

Over time, that small choice creates a big distortion. The happiest customers get counted. The unhappy ones quietly disappear from the data.

What We Found in the Data

We looked at thousands of calls from a MosaicVoice customer that sells a premium pet product. What we found was eye-opening.

Only About Half of Customers Were Even Asked

Surveys were offered on just 45% of calls. That means more than half of customers never had the chance to share how they felt.

Now, it’s not necessarily critical to get feedback from every single caller. With a large enough dataset, even a smaller percentage of surveys can give you a strong picture of performance. The key is that the sample needs to be representative. When the subset of surveyed calls is skewed toward only positive interactions, it stops reflecting the real customer experience.

Positive Calls Were Much More Likely to Get a Survey

When we looked deeper, the pattern became clear.

[Chart: share of calls transferred to a post-call survey, by call sentiment]
  • 53% of calls with positive sentiment were transferred to a survey.
  • Only 33% of calls with negative sentiment were.

That’s a 20-point difference, based entirely on how the agent felt about the call.
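
To make the distortion concrete, here's a quick back-of-the-envelope sketch in Python. The 53% and 33% offer rates are the ones from the data above; the 70/30 split between positive and negative calls is a made-up number purely for illustration, and we're treating a call's sentiment as a stand-in for how the customer would have rated the interaction.

# Back-of-the-envelope illustration of how a skewed offer rate inflates CSAT.
# The 53% / 33% offer rates come from the study above; the "true" mix of
# positive vs. negative calls is a hypothetical assumption for illustration.

true_positive_share = 0.70   # assumed: 70% of all calls are genuinely positive
true_negative_share = 0.30   # assumed: 30% are negative

offer_rate_positive = 0.53   # from the data: positive-sentiment calls offered a survey
offer_rate_negative = 0.33   # from the data: negative-sentiment calls offered a survey

# Share of all calls that end up in the survey pool, split by sentiment
surveyed_positive = true_positive_share * offer_rate_positive
surveyed_negative = true_negative_share * offer_rate_negative

measured_csat = surveyed_positive / (surveyed_positive + surveyed_negative)

print(f"True positive share:     {true_positive_share:.0%}")   # 70%
print(f"Measured positive share: {measured_csat:.0%}")         # 79%

Even with these modest assumptions, the survey pool looks about nine points happier than the calls actually were, before response bias on top of it makes things worse.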

A Real-World Example: One Agent, One Exception

In one part of the study, we looked at 10 agents, each with 10 calls. Green squares in the chart represent calls where the agent offered a survey. Red squares are calls where they didn’t.
[Chart: survey offered (green) vs. not offered (red) for 10 agents, 10 calls each]
Some agents were consistent and offered surveys almost every time. Others rarely did.

The most interesting case was one particular agent who never transferred customers to surveys (circled in red in the diagram). Not once.

Until one day, a customer said:

“You’ve been absolutely phenomenal. Do you have a supervisor I can compliment you to?”

The agent replied:

“Thank you. I appreciate that. I don’t have anyone I can directly transfer you to, but if you hold on after we hang up, there’s a one-question survey my supervisor sees.”

It was the only time that agent transferred a call to a survey.

When the customer gave praise, he wanted it to count.

That single moment captures the problem perfectly.

The Illusion of High CSAT

This particular company believed their CSAT was in the high 90s. On paper, they looked like a customer service powerhouse.
But once we dug in, we found that many calls never reached the survey. The actual satisfaction level was much lower. They were looking at a highly filtered version of reality.

This happens all the time. When only “good” calls get measured, the data stops being useful. Teams start celebrating numbers that don’t reflect what’s really happening.

Why It Happens

Agents are human. When their performance reviews or bonuses depend on CSAT, it’s natural for them to protect their scores. Nobody wants to send an angry customer to a survey.

Industry consultants have pointed out this exact risk for years. When agents have to manually send customers to surveys, they tend to reserve it for happy callers. It’s not intentional dishonesty. It’s human nature.

What the Research Shows

Across the industry, post-call survey participation rates are dropping into the single digits.
Studies talk about “response bias” (happy customers are more likely to respond) and “survey bias” (how questions are worded). What’s rarely discussed is agent-level bias, where the agent controls who even gets asked.

That’s where the real distortion starts.

How to Fix It

If your agents are deciding who gets to take the survey, your CSAT is almost certainly inflated. Here are a few ways to fix it:

  1. Automate the survey transfer. Set up the system so that every call (or a random sample) automatically goes to a survey, no agent decision required.
  2. Audit how often surveys are offered. Track survey-offer rates by agent, team, and call sentiment. Look for patterns that reveal bias (a minimal sketch follows this list).
  3. Measure all calls, not just a few. Combine post-call surveys with AI-based analytics that assess every single conversation. Platforms like MosaicVoice already do this, giving you a full view of customer sentiment instead of a tiny, cherry-picked sample.
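
If you want a concrete starting point for step 2, here's a minimal audit sketch in Python with pandas. It assumes a call-log export with hypothetical columns agent_id, sentiment (“positive” or “negative”), and survey_offered (1 if a survey was offered, else 0); the file name, column names, and the 15-point threshold are illustrative assumptions, not features of any particular platform.

# Minimal audit sketch: survey-offer rates by agent and by call sentiment.
# Assumes a hypothetical call-log CSV with columns:
#   agent_id, sentiment ("positive"/"negative"), survey_offered (1 or 0)
import pandas as pd

calls = pd.read_csv("call_log.csv")  # hypothetical export of call records

# Overall offer rate per agent
offer_rate_by_agent = calls.groupby("agent_id")["survey_offered"].mean()

# Offer rate broken out by agent and call sentiment
offer_rate_by_agent_sentiment = (
    calls.groupby(["agent_id", "sentiment"])["survey_offered"]
         .mean()
         .unstack("sentiment")
)

# Flag agents whose offer rate on negative calls lags their rate on
# positive calls by more than 15 points (an arbitrary threshold).
gap = (offer_rate_by_agent_sentiment["positive"]
       - offer_rate_by_agent_sentiment["negative"])
flagged = gap[gap > 0.15].sort_values(ascending=False)

print(offer_rate_by_agent.sort_values())
print(flagged)

Even a rough report like this makes it obvious which agents are quietly deciding who gets asked.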

The Takeaway

If your data only includes the customers your agents choose to ask, you’re not measuring satisfaction; you’re measuring your agents’ confidence that the call went well.

Real customer experience starts with full visibility. Automate your feedback, analyze every call, and you’ll get a CSAT score that finally tells the truth.
