By Greg Levin | Published September 15, 2013 | Last updated September 8, 2020
This post originally appeared on the Intradiem blog on April 4, 2013.
Today’s contact centers are all about customer satisfaction measurement. If only most of them knew how to do it right.
C-Sat measurement can be one of the best ways to gauge customer sentiment and operational effectiveness. Unfortunately, too many centers have a poor surveying process in place. The following are five common and costly mistakes that contact centers make in their C-Sat initiatives.
1. Untimely survey delivery. Timing is everything in customer satisfaction measurement. Conducting a post-contact survey a day or more after the customer’s interaction with your contact center is a great way to get inaccurate C-Sat data and feedback. It’s essential to survey customers while the specifics of the interaction are still fresh in their minds. According to Gartner, feedback collected immediately after an event is 40 percent more accurate than feedback collected 24 hours after the event.
In addition, by surveying customers immediately or very soon (no more than a few hours) after an interaction, the center is able to react quickly to recover those customers who indicate via the survey that they would like to punch the agent with whom they spoke. (This is assuming the center has a process in place to alert a “recovery team” about notably low survey ratings – a best practice, by the way.)
To help ensure the timeliness of their surveys (and, if necessary, recovery attempts), many contact centers use automated IVR-based post-call surveys and/or email-based surveys, both of which can easily be administered immediately following an agent-customer interaction.
2. Too many or too few survey questions. Delivering surveys in a timely manner doesn’t mean much if the survey itself is poorly designed. One of the most common C-Sat survey design mistakes is making the survey too long or too short to be of any real value to the contact center.
Too many questions, and customers will tire of the survey and opt not to complete it, leaving the contact center with incomplete ratings and feedback. (Or worse, the survey’s excessive length may anger customers and negatively influence their ratings.) Too short a survey, and the center fails to gather critical customer insight.
So what’s the right number of questions? The consensus among survey design experts seems to be that no fewer than 4-5 questions and no more than 9-10 is ideal for post-contact C-Sat surveys – with the opportunity for customers to elaborate on their ratings with open-ended comments. I’ve heard several experts say a good C-Sat survey should take customers no more than a minute or so to complete. (These experts recommend informing customers of the survey length in the invitation message so that they are aware of how brief it is and will be more willing to complete it.)
3. Failure to capture feedback on contact resolution. Neglecting to include at least a question or two about whether the customer’s issue was fully resolved on the initial contact is a common – and potentially expensive – survey oversight. Studies have shown that no other performance metric has as big an impact on customer satisfaction as first-call resolution (FCR), and there are few better ways to measure FCR than via a real-time, post-contact survey. Doing so not only provides a clear picture of the contact center’s FCR rate from the customer’s perspective; it can also help the center – assuming the survey includes opportunities for customers to elaborate on issues – uncover some of the main causes of repeat contacts.
4. Not using customer ratings/feedback in agent coaching. Customer ratings and feedback from surveys can be a highly effective agent coaching and training tool when used appropriately. While an increasing number of contact centers are starting to embrace “Voice of the Customer” (VOC) initiatives that include incorporating direct customer feedback and ratings from post-contact surveys into agent monitoring scores and coaching, most centers have yet to utilize VOC as a powerful agent development resource.
Incorporating direct customer feedback into agent evaluations and coaching isn’t just what agents need, it’s what they want, says Mike Desmarais of contact center consulting firm Service Quality Measurement (SQM) Group. “Agents find customer feedback more meaningful and believable than the ratings they receive from peers, supervisors or quality assurance teams.”
Of course, incorporating customers’ survey comments and ratings into coaching does much more than just make agents happy. Research has shown that agent-level customer feedback increases productivity, first-call resolution, customer satisfaction, and the ROI on training and coaching efforts.
5. Failure to share key C-Sat survey insights with the rest of the enterprise. As important as it is to share customer feedback with agents, it isn’t enough to drive improvement across the entire enterprise, nor to drive lasting customer satisfaction and loyalty.
The data continually captured via a well-designed C-Sat survey is gold, and failing to share that gold with the appropriate departments is a huge missed opportunity for the organization as a whole.
In world-class contact centers across the globe, the invaluable quantitative and qualitative information from C-Sat surveys is shared with Marketing, Sales, R&D, HR and other key departments, as well as with the entire executive team.
Dr. Jodie Monger, President of Customer Relationship Metrics, puts it perfectly.
“The contact center touches and represents all parts of the organization. The actionable customer intelligence that the contact center collects – or could collect – can be leveraged by all parts of the organization.”