How to Test New AI Tech in Your Contact Centre – Part 2

Henry Jinman from EBI.AI looks at the challenges of running proof-of-concept AI projects in contact centres, and how to extract the maximum business benefit and learning from them.

In this second part, we look at how to build, run and assess pilot projects. You can read part one in the series by clicking here.

Building and running your bot

It’s often thought that the performance of an AI chatbot or voice bot is only ever as good as the data on which it was trained. The more data you have, so the thinking goes, the better. This misconception probably comes from stories in the popular press about how many cats a Google AI has to look at before it learns to recognise them for itself.

Certainly, volume of data helps – but only if it is relevant data. To build a chatbot, you ideally want access to the transcriptions of lots and lots of chat sessions between customers and live chat agents. For a voice bot, you want transcriptions of call recordings.

In chatbot development we talk about intents (what customers will ask the chatbot to do) and utterances (the words customers will use to express those intents). If you have access to recordings and transcriptions of live agent chat sessions or calls, then not only can you use these to train your bot, but you can also run a frequency analysis to determine the business impact of automating any given intent.
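As a minimal sketch of that frequency analysis, assuming labelled transcripts have been exported to CSV – the file name, the 'intent' column and the handling-time figure below are all hypothetical:

import csv
from collections import Counter

AVG_HANDLE_MINS = 4  # illustrative average agent handling time per session

# Count how often each intent appears in the labelled transcripts
intent_counts = Counter()
with open("chat_transcripts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        intent_counts[row["intent"]] += 1

# Rank intents by volume and estimate the agent time each one consumes
total = sum(intent_counts.values())
print(f"{'Intent':<28}{'Sessions':>10}{'Share':>8}{'Agent mins':>12}")
for intent, count in intent_counts.most_common(10):
    print(f"{intent:<28}{count:>10}{count / total:>8.1%}{count * AVG_HANDLE_MINS:>12}")

Ranking intents by session volume and estimated agent minutes is a quick way to shortlist which intents are worth automating first.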

But what do you do if you don’t have access to recordings, or if all your data is in a format that you can’t easily use? The answer is to take a reasonable guess. A small group of agents will give you a very good idea of the common utterances customers might use to invoke particular intents. Use this information to start training a Minimum Viable Bot (MVB): a bot that performs well enough for users to see its usefulness and potential. Then continue to train your bot on live interactions post-launch, so that you capture training data from the bot’s real conversations as soon as possible. You can even have live agents ‘pose’ as the chatbot until you have gathered enough examples of real customer utterances to train your bot.
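To illustrate how small that starting point can be, here is a sketch that seeds an intent classifier from agent-suggested utterances, using scikit-learn purely as a stand-in for whatever NLU engine your bot platform provides; the intents and example phrases are invented:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Seed utterances gathered from a small group of agents (invented examples)
seed_data = [
    ("where is my order", "check_order_status"),
    ("has my parcel been dispatched", "check_order_status"),
    ("track my delivery", "check_order_status"),
    ("I want to cancel my order", "cancel_order"),
    ("cancel the order I placed yesterday", "cancel_order"),
    ("can I talk to a human", "speak_to_agent"),
    ("put me through to someone", "speak_to_agent"),
]

texts, labels = zip(*seed_data)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["is my parcel on its way"]))  # likely -> ['check_order_status']

Utterances the live bot fails to match can then be reviewed and added to this seed set at each iteration.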

During a two- to three-month pilot project, we tend to train the bot continuously on the conversations it has, releasing a new iteration every two weeks so that the improvement in performance can be tracked.
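For example, one headline metric might be tracked across those fortnightly releases like this – the figures below are invented, purely to show the shape of the report:

# Sessions handled end-to-end by the bot per release (invented figures)
releases = {
    "v1 (launch)": {"handled": 120, "total": 400},
    "v2 (week 2)": {"handled": 190, "total": 420},
    "v3 (week 4)": {"handled": 260, "total": 410},
}

for name, r in releases.items():
    print(f"{name}: {r['handled'] / r['total']:.0%} of enquiries resolved without an agent")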

What success looks like

To keep everyone informed and the project on course, we recommend fortnightly reports and an end-of-pilot report. These should focus on answering a few key questions that help you prove your initial hypothesis (a sketch of how such figures might be pulled from session logs follows the list). For example:

– What % of users engage with the bot? How many hang up, click away, or ask to speak to a live contact centre agent?

– What % of conversations reach a successful conclusion for each use case or topic?

– What are customers asking for that the bot can’t yet handle? What new intents and utterances should we train it on?

– Can you identify wider business benefits, such as a reduction in call volumes?
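Here is a sketch of how these figures might be computed, assuming session records exported from your bot platform; the field names and values are invented, so adapt them to whatever your platform actually logs:

from collections import Counter

# Three invented session records, purely to show the shape of the data
sessions = [
    {"engaged": True, "resolved": True, "topic": "order status", "escalated": False},
    {"engaged": True, "resolved": False, "topic": "refunds", "escalated": True},
    {"engaged": False, "resolved": False, "topic": None, "escalated": False},
]

engaged = [s for s in sessions if s["engaged"]]
print(f"Engagement rate: {len(engaged) / len(sessions):.0%}")
print(f"Escalation rate: {sum(s['escalated'] for s in engaged) / len(engaged):.0%}")

# Successful-conclusion rate per use case / topic
by_topic = Counter(s["topic"] for s in engaged)
resolved = Counter(s["topic"] for s in engaged if s["resolved"])
for topic, count in by_topic.items():
    print(f"{topic}: {resolved[topic] / count:.0%} successful conclusions")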

Remember, the primary goals of proof-of-concept projects are to learn something and prove something. You can now go back to your starting hypothesis and modify it based on what you have proved. The beauty of this method is that the modified hypothesis becomes the basis of your next pilot project: one more incremental step towards your ultimate objective.


Additional Information

To find out what can happen if you don’t follow this method, read the whitepaper ‘Why AI Fails’, which contains examples from major financial institutions, airlines and contact centres, by clicking here.

Henry Jinman is Commercial Director at EBI.AI

For additional information on EBI.AI visit their Website or view their Company Profile
