
Delivering on the Promise of Conversational AI  

Justin Guerra, Enterprise Solution Consultant

Imagine a group of incredible minds converging in downtown New Orleans. They gather on the third floor of the Four Seasons for an NLP (natural language processing) conference. The topics are hot, controversial even, and between presentations you can overhear attendees debating the difference between “conversational” and “conversation” AI – a subtlety I don’t even fully appreciate, but one the industry must embrace if we’re to deliver AI solutions that impress our customers.

This was the scene at the 2022 Conversational Cloud Conference, hosted by Opus Research in November. The event explored transformational technologies that enable multichannel customer service and how contact centers can succeed with Conversational AI, bots, NLP, and self-service. I had the pleasure of speaking at the conference, as well as the opportunity to enjoy an agenda of insightful presentations, and I’m excited to share what I learned.

“Conversational AI” generally refers to a free-flowing exchange of messages between customers and technology, whereas “conversation AI” is more about the protocols the systems use to process a transaction. The distinction comes from the idea that a highly “scripted” conversation is necessary for a transaction to take place, but a conversation that “flows” should still be able to accomplish the task – in a less regimented way. This distinction played nicely into my session’s topic: The Broken Promise of Conversational AI. I began by asking the audience a somewhat provocative question: Did big tech break their promise of delivering conversational AI? Has their need to be first to market hindered their ability to deliver deeply intelligent technologies? Is conversational AI in the contact center even possible?  

Conversational AI has a people problem   

Hollywood shaped the world’s expectation of conversational AI. Remember KITT, the talking Pontiac from Knight Rider, or HAL 9000 from 2001: A Space Odyssey? If KITT was told “Come get me,” the response was never “Sorry, I’m having trouble understanding you right now. Please try again later.” Instead, KITT was a great example of AI that generated spontaneous and intelligent responses, or at least ones that seemed that way. The first time Michael Knight interacts with KITT, he says to himself, “With all these weird gadgets, you’d think they’d give you a radio,” and KITT responds, “What would you like to hear?” Michael slams on the brakes and says, “What the heck was that!” – and the first automotive conversational AI was brought to life in our own living rooms... 40 years ago!

So, why are customer self-service experiences still not as clever as KITT? Is it the technology? Or perhaps the people implementing it? Buckle up because I blame us – this is a people problem. To deliver more conversational experiences we must first ask: What makes a conversation feel conversational? Two things: surprise and context.   

First, surprise. When we talk to people, we can’t always predict their responses. When Michael talked to KITT, he had no idea what it would say next. For us, surprise will come in the form of response variety. So, in the absence of generative AI that could craft a witty retort on its own, we need to build a pool of responses large enough to continually surprise the customer. Even when your system doesn’t know the answer, you need variety. When a system replies with an identical message every time a failure occurs, or with a fixed response for every answer, the interaction feels transactional, not conversational.
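To make that concrete, here is a minimal sketch (in Python, with made-up intent names and phrasings, not any particular IVA platform’s API) of one way an IVA could draw from a pool of agent-written responses instead of repeating itself, including when it fails to understand:

```python
import random

# Hypothetical response pools: several phrasings per intent, plus fallback lines.
RESPONSES = {
    "confirm_return_date": [
        "Got it. And when would you like to bring the car back?",
        "Perfect. What day works for dropping it off?",
        "Thanks! When should we expect the vehicle back?",
    ],
    "fallback": [
        "Hmm, I didn't quite catch that. Could you say it another way?",
        "Sorry, that one got past me. Mind rephrasing?",
        "I'm not sure I followed. Can you give me a little more detail?",
    ],
}

_last_used = {}  # remember the last reply per intent so we never repeat back-to-back

def pick_response(intent: str) -> str:
    """Return a reply for the intent, avoiding the one used last time."""
    pool = RESPONSES.get(intent, RESPONSES["fallback"])
    choices = [reply for reply in pool if reply != _last_used.get(intent)] or pool
    reply = random.choice(choices)
    _last_used[intent] = reply
    return reply
```

Even three or four phrasings per prompt, rotated like this, is enough to keep the exchange from sounding like a recording.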

Next, and probably the most important aspect of any conversation, is context. The ability to track the active context of a conversation, and any sidebar remarks, is something still considered uniquely human.   

For example, if we’re talking about a rental car reservation and, at the step where I ask when you’d like to return the vehicle, you say, “Wait, do I need snow tires to drive up the mountain?”, I (as a human) would have no problem telling you not to worry about it without losing the thread of our reservation. Most chatbots and intelligent virtual agents (IVAs) on the market don’t handle this well: they either can’t answer a question taken out of context or (worse) answer it but break the transaction already in progress, forcing the customer to start over. Luckily, this too can be addressed with current technology. We just need to be clever about our implementation practices.
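One way to pull this off, sketched below in Python with invented intent names and prompts (again, not any specific vendor’s API), is to keep the in-progress transaction as an ordered list of pending steps, answer the sidebar question, and then repeat whatever prompt was pending:

```python
# Minimal dialog-manager sketch: a sidebar question is answered without
# discarding the reservation already in progress.
FAQ = {
    "snow_tires": "No need for snow tires; all-season tires will get you up the mountain just fine.",
}

class Dialog:
    def __init__(self):
        # Ordered steps still pending for the rental reservation.
        self.pending = ["pickup_date", "return_date", "confirm"]
        self.slots = {}

    def current_prompt(self) -> str:
        prompts = {
            "pickup_date": "When would you like to pick up the car?",
            "return_date": "When would you like to return the vehicle?",
            "confirm": "Great, shall I book it?",
        }
        return prompts[self.pending[0]]

    def handle(self, intent: str, utterance: str) -> str:
        if intent in FAQ:
            # Sidebar: answer the question, then pick the thread right back up.
            return FAQ[intent] + " Now, back to your reservation: " + self.current_prompt()
        # Otherwise treat the utterance as the answer to the current step.
        step = self.pending.pop(0)
        self.slots[step] = utterance
        return self.current_prompt() if self.pending else "You're all set!"
```

A commercial IVA platform would express this with its own context or digression features rather than hand-rolled code, but the principle is the same: answer the aside, then return to the pending step.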

So, how do we take these core principles and use them to bridge the gap between what was promised and what is delivered? We must rip up our existing IVR playbook. We must roll up our creative sleeves. And we must watch what our customers do.   

IVRs are so 1989  

IVR was all about limiting access to the contact center, but IVAs are about creating engaging experiences for the customer (which just so happens to reduce the burden on the contact center). Start by revisiting the day-to-day operations of your contact center and ask yourself how your customers want (and try) to interact with your business. The business use cases, processes, and rules have likely changed since your IVR was first deployed, so evaluate everything from the perspective of the customer. Then do a deep dive into any company-specific jargon and industry speak that relates to the use cases the IVA is trying to solve for; you’ll need these when you get to step two.
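The jargon inventory, in particular, can be captured as a simple synonym map that the IVA normalizes against before trying to match an intent. The terms below are invented for illustration:

```python
# Hypothetical company jargon mapped to the canonical terms the IVA's NLU expects.
JARGON = {
    "loaner": "rental car",
    "turn-in": "return",
    "drop the car": "return the car",
}

def normalize(utterance: str) -> str:
    """Swap company and industry jargon for canonical wording before intent matching."""
    text = utterance.lower()
    for term, canonical in JARGON.items():
        text = text.replace(term, canonical)
    return text
```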

Punch up the IVA script   

Here’s where the creative component of the story comes to light. We want variety; we want conversational; we need a team. Assemble a writers’ room. (No, you don’t need professional writers.) Once you establish the types of questions (use cases) your IVA will tackle, go around the office and ask agents how they would respond to each inquiry. Boom! Variety. Do this for failure points too; make sure the system responds accurately, yet – from the perspective of the customer – differently each time they interact, just like a person would.
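The output of that writers’ room can be as humble as a spreadsheet, or a dictionary of agent-written variants per use case and failure point, plus a quick check that nothing is left with a single, IVR-style phrasing. The use cases and lines below are made up:

```python
# Agent-sourced variants collected per use case, including failure points.
SCRIPT = {
    "hours_of_operation": [
        "We're open 8 to 6, Monday through Saturday.",
    ],
    "reservation_failed": [
        "Something went sideways on my end; let's try that booking again.",
        "I couldn't finish the reservation just now. Want me to retry?",
        "That didn't go through. Give me one more shot?",
    ],
}

def thin_pools(script: dict, minimum: int = 3) -> list[str]:
    """List use cases that still read like a one-line IVR prompt."""
    return [use_case for use_case, variants in script.items() if len(variants) < minimum]

print(thin_pools(SCRIPT))  # ['hours_of_operation'], so that one goes back to the writers' room
```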

Learn as you go  

Now we’re ready to build, and test, and test, and test. Watch how your customers interact with the system. Review the things they say in response to your prompts. Where do they “break the thread”? How can you build a logical flow to support their sidebar questions? What changes can you make to improve the conversation? Use your contact center agents to help you with this part. They are the ones who field the conversations the IVA couldn’t handle, so they have insights into how (or why) the customer asked for a human in the first place.
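In practice, this review can start from nothing fancier than conversation logs: tally which prompt customers were on when they went off-script or asked for a person, and you know where the flow needs a sidebar path. The log format here is an assumption for illustration, not a Five9 export:

```python
from collections import Counter

# Hypothetical transcript records: (prompt the IVA had just asked, what the customer did next).
logs = [
    ("return_date", "on_script"),
    ("return_date", "sidebar_question"),
    ("pickup_date", "asked_for_agent"),
    ("return_date", "sidebar_question"),
]

# Count thread breaks per prompt to see where customers fall out of the flow.
breaks = Counter(prompt for prompt, outcome in logs if outcome != "on_script")
for prompt, count in breaks.most_common():
    print(f"{prompt}: {count} break(s) in the thread")
```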

Collaborative intelligence helps deliver on the promise of conversational AI

After my presentation, I was delighted to hear stories from several Fortune 500 companies about how they had successfully accomplished the goals outlined above. Conversational AI has been possible for some time. In fact, these companies have entire teams dedicated to making it possible. Smaller companies, however, don’t have the capital or resources to build teams like that. With Five9, you don’t have to.

The key to delivering a quality IVA is to make sure that you have a well-defined process for each phase and that you keep your eye on the customer. Your Five9 team will help you realize the promise of conversational AI through collaborative intelligence.   

Check out this Collaborative Intelligence e-book to learn more about Five9’s approach to Conversational AI and IVA.   

