By ICMI Contributors
Date Published: February 08, 2023 - Last Updated July 12, 2023
The contact center industry is constantly evolving, and that’s why we rely on the support of peers to help us stay on top of things. Each month, ICMI asks contact center thought leaders a question of the month, and then shares their responses.
Here is this month’s question:
With ChatGPT and other text-generating AI apps now going mainstream, how should customer service teams adapt to the possibility they may be chatting with, and even negotiating with, a bot instead of a customer?
Here are the responses from three of the ICMI Top 25 Thought Leaders:
Afshan Kinder
Partner, SwitchGear
ChatGPT should be top of mind for leaders in our industry. We need to start external and internal conversations to forecast the impact of this AI. While doing so, let's remember and build on the lessons learned when email, chat, and social media first arrived on our doorstep.
There’s so much to talk about when it comes to this topic. What I’ll share today is one benefit, one predictable change, and a challenge. When clients use ChatGPT, a big benefit is that the communication of their issue will be clearly written, therefore better understood by the service bot and human service teams. This increased clarity will (hopefully) reduce cycle time to resolution.
This technology is not perfect, especially when it comes to accuracy of information and sentience. Humans will need to provide oversight by working exception queues. The key human skills for the future will be strong emotional intelligence and critical thinking.
As this technology evolves, there will be ongoing changes to security protocols and the methods used to verify and validate customer identity. Today, we use voice prints to validate customers. This is something that could be mimicked in the future, making it an area of deep concern. Privacy and AI laws will need to reinforce the rigor of validating and verifying that behind the bot is the actual client.
Customers who use ChatGPT will most likely encounter a bot when seeking service. Essentially, without trying to be funny, it will be a battle of the bots. Which bot will be better skilled?
Let's go back to the present day. The reality is that customer adoption of ChatGPT will accelerate if it is simple to use, the experience is frictionless, and access to the required bandwidth is affordable. The jury is still out; as of today, OpenAI has limited ChatGPT usage to one million customers.
Stacy Sherman
Customer Experience and Marketing Executive
Stacy has written about this subject in the past and asked to share an excerpt of what she wrote.
Here are four suggestions for CX best practices when interacting with AI robots:
Don’t make assumptions
Take time to understand the capabilities of the AI robot. Many chatbots are built on top of existing rule-based systems, which means they are limited in the types of conversations they can handle. So, be aware of what it can and can’t do to manage expectations and reduce frustration. As emerging technology rapidly advances, get educated, so you are always prepared and ready.
Be impeccable with your words
Use concise and simple language. Avoid negative terms, sarcasm, jargon, and technical phrases that the AI robot may not comprehend, leading to longer talk time and aggravation. Likewise, be mindful of legal compliance. The same regulations apply whether conversing with a robot or a human. Everything is trackable so never let your guard down.
Don’t take anything personally
AI robots are not human with emotions. Interacting with bots can be stressful and a test of patience. So, remember that the perceived lack of empathy or understanding is not personally against you. Stay calm and composed. Take deep breaths.
Always do your best
Focus on delivering meaningful customer experiences, whether human or robot. Robot CX is a strange concept, but it’s a reality, so adopt the changes, or your competitors will outpace you.
Mike Aoki
Trainer and Speaker on CX and Sales, Reflective Keynotes, Inc.
Be prepared! Your customers may use ChatGPT to write to your email and chat teams. For example, a recent CNN article cited a real estate agent who used ChatGPT to write an email on behalf of his client, outlining liability issues regarding faulty house construction. The builder immediately showed up at the client’s house following this well-written, authoritative, computer-generated email.
There are pros and cons if customers rely on ChatGPT to write their customer service inquiries for them. One “pro” is that ChatGPT may write better-worded inquiries. For example, have you ever received an email where the client’s wording does not clearly state their question or problem? Those customers may have been better off asking ChatGPT to write the email for them. So, AI could help customers write an inquiry that agents can solve.
On the other hand, ChatGPT may provide customers with the wrong information or create false hope. There is an old saying in computing: "Garbage in, garbage out." ChatGPT’s biggest limitation is that it depends on analyzing Internet examples to build its responses, and the AI cannot always determine what is, and is not, correct.
Here’s an example: A customer asks ChatGPT to write an inquiry on their behalf, asking for a refund. The AI may quote a consumer law that does not apply in the client's state or province. However, the client believes the AI's written response and then feels angrier about not receiving a refund, even after your agent explains that the quoted law is not applicable in their location.
That problem is magnified by ChatGPT's very confident writing style. It does not preface its answers by writing, "This writing may - or may not - be accurate, since it is just an algorithm’s summary of Internet data. And some of what you find on the internet is wrong." Instead, ChatGPT writes in a very confident tone. So, the customer feels the AI must be right and the agent must be wrong.
In summary, ChatGPT may help customers write easily understood inquiries to your company. It may also help them argue in a logical fashion, with an emphasis on liability, consumer law, and social impact. However, it may also delude customers into thinking they are right, even if they are wrong. And it may trick agents into thinking a customer can write - and therefore understand - customer service issues at an advanced level. The agent may write a response that flies right over the customer's head.
What advice can you add?