By Rob Dwyer | Published March 19, 2024 | Last Updated September 26, 2024
It's quite clear that 2023 was the year of generative artificial intelligence and that 2024 will be the year of applied generative AI, especially in customer-facing situations. Klarna says its AI-powered chatbot is doing the work of 700 agents, about the same number it laid off in mid-2022. But we've also seen Air Canada ordered to pony up to cover its chatbot's error, a chatbot it promptly shut down.
Those are two recent examples with wildly different outcomes. Many companies right now are observing and doing some risk calculations. Will customer-facing AI solutions save money? Will the savings be enough to outweigh any reputational or actual damage incurred along the way?
I've seen a lot of customer-facing AI solutions in action. The best of breed can be impressive, and they can solve specific challenges. Klarna's chatbot, for example, supports 35 languages with 24/7 coverage. Language support has long been a challenge for companies with a global customer base. It has often meant translation services, multi-lingual agents, and/or multiple contact centers in various countries, and each of those options required tradeoffs between cost and experience.
Despite the technical capabilities these solutions offer and the challenges they solve, they introduce new challenges that must be considered.
New Challenges Introduced by AI Customer Care
Hallucinations / Inaccurate Information
The Air Canada lesson resulted in a roughly $500 USD refund to the customer, and the litigation the airline opted for surely cost more than that. But the issue of Generative AI confidently providing inaccurate information, known as hallucination, is well-documented. What's not terribly well-documented is why it happens and how to stop it. While various mitigation methods can be employed, the problem of hallucination has not been solved.
Another challenge regarding accurate information is that once an AI model ingests data, it is nearly impossible to get it to "forget" that data. When it comes to supporting customers, we all know that processes and policies change over time. Keeping documentation updated is its own challenge, but if AI can't "unlearn" the old documentation, it may continually spout outdated information.
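To make that concrete, here's a minimal sketch of one mitigation approach alluded to above: keep the policy text outside the model in a store you can update, and have the bot answer only from whatever is current, so there's nothing to "unlearn." Everything here (the PolicyStore class, the answer_from_policy helper, the sample policy) is a hypothetical illustration, not any vendor's actual implementation.

```python
# Illustrative sketch: keep policy text in an external, versioned store so the
# bot answers from *current* documentation instead of stale model memory.
# All names here (PolicyStore, answer_from_policy) are hypothetical.

from datetime import date
from typing import Optional, Tuple


class PolicyStore:
    """Stand-in for a real knowledge base. Updating a policy here is all it
    takes to change future answers -- no retraining or "unlearning"."""

    def __init__(self) -> None:
        self._policies = {}  # topic -> (policy text, effective date)

    def update(self, topic: str, text: str, effective: date) -> None:
        self._policies[topic] = (text, effective)

    def lookup(self, topic: str) -> Optional[Tuple[str, date]]:
        return self._policies.get(topic)


def answer_from_policy(store: PolicyStore, topic: str, question: str) -> str:
    """Builds a grounded prompt. A real system would hand this to an LLM with
    instructions to answer ONLY from the supplied text, or escalate."""
    hit = store.lookup(topic)
    if hit is None:
        return "I can't find a current policy on that. Let me connect you with an agent."
    text, effective = hit
    return (
        f"Answer the customer using ONLY this policy (effective {effective}):\n"
        f"{text}\n\nCustomer question: {question}"
    )


store = PolicyStore()
store.update(
    "bereavement_fares",
    "Bereavement fares must be requested before travel.",
    date(2024, 1, 1),
)
print(answer_from_policy(store, "bereavement_fares",
                         "Can I apply for a bereavement fare after my flight?"))
```

The design point is simply that a policy change becomes a data update rather than a model retraining problem.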
Unpredictable (and Mischievous) Humans
While AI can be unpredictable, as detailed above, humans are predictably unpredictable. One thing you can predict is that some will push the envelope just because they can. In late 2023, one such human goaded a Chevy dealership's chatbot into agreeing to sell a Tahoe for $1, while another got the bot to recommend rival Ford's F-150.
Data Security
Data breaches are already a huge and growing concern for corporations around the world. AI-based chatbots are simply another attack vector that poses a threat to data security, and anything customer-facing is also a target for threat actors.
Quality Assurance at Scale
Back to Klarna: its AI chatbot has handled over 2 million conversations in a single month. That's 2 million conversations in up to 35 languages. While I imagine there is some sort of Quality Auditing in place for those interactions, you might be wondering how effective it is. I know I am. The question of how we supervise and audit these interactions is often answered with more AI.
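For a sense of what "more AI" can look like, here's a hedged sketch of an LLM grading transcripts against a simple rubric via the OpenAI Python client. The rubric, model choice, and score format are illustrative assumptions on my part, not Klarna's (or anyone's) actual QA program.

```python
# Illustrative sketch of AI-assisted quality auditing: a second LLM scores each
# transcript against a rubric. The rubric and prompt are assumptions made for
# illustration only.

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = """Score the customer support transcript from 1-5 on each of:
- accuracy: did the assistant's answers match stated policy?
- resolution: was the customer's issue actually resolved?
- tone: was the assistant courteous and clear?
Return JSON like {"accuracy": 4, "resolution": 5, "tone": 5, "notes": "..."}."""


def audit_transcript(transcript: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable model; the choice is illustrative
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": transcript},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)


sample = "Customer: Where is my refund?\nBot: Refunds post within 5-7 business days..."
print(audit_transcript(sample))
```

Whether the second model is any better at catching the first model's mistakes is, of course, the open question.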
The Missing Ingredient in AI Customer Care
Some, possibly all, of these challenges may be overcome in due time. But even if they are, there's still something missing from AI Customer Care that won't easily be added to its recipe, even though it's right there in the name: Care. Generative AI based on large language models (LLMs) is a fancy word-prediction engine. It's really, really good at those predictions. But predicting the best way to say you care isn't the same as caring.
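If you want to see the "word-prediction engine" claim in action, here's a small sketch using the open-source Hugging Face transformers library with GPT-2 (chosen only because it's small and free): at every step, the model just assigns probabilities to candidate next tokens.

```python
# Illustrative sketch: an LLM literally ranks candidate next tokens by
# probability. GPT-2 is used here only because it is small and freely available.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I'm so sorry to hear that. I completely"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)]):>15s}  {prob.item():.2%}")
# The model is predicting plausible next words, not feeling anything
# about the customer.
```

Bigger models make far better predictions, but the mechanism is the same: probabilities over words, not feelings about customers.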
AI doesn't care about your brand or your customers. It doesn't share your values or care about your mission. That may also be true of some of your employees, but great companies are great because they hire people who do share their values and care deeply about the brand and the customers. The question isn’t whether AI is ready for customer-facing applications. The question is how much care figures into your brand’s values.