Many know the situation: a new colleague joins to strengthen the team. Very good, because there is a lot to do. But before the new colleague is one hundred per cent ready for action, a few weeks of familiarisation will pass. It is very similar with the development of a phonebot. Phonebots are voice-controlled digital assistants. Sven Vanoeteren, Voice User Interface Lead Designer at ERGO, explains how such a phonebot is created.
Finally, the time has come: the new colleague has his first day. The team considered in advance which tasks he should take on and chose him for his abilities. His main task will be to handle customer concerns over the phone in the Service Centre.
Now it's time to get started. Getting to know his superiors and co-workers, attending training courses, internalising procedures, learning the trade from colleagues and then finally making his first phone calls himself. Of course, he will still need more help. But in time he will become an experienced and valued colleague.
Only he doesn't have a name yet. Yes, you read that right: our phonebots all get a call name. This helps us communicate within the team. "Henry" passes our lips much more easily than "phonebot for concern recognition".
On the phone itself, however, we use exactly one bot persona, which we developed together with customers. The aim of the bot persona is to ensure a consistent user experience and tone of communication.
The basis of every successful phonebot is adequate research and analysis of the target group, the existing processes and the technical possibilities.
In general, we observe good acceptance among our customers of clarifying their concerns with one of our phonebots. The truth, however, is that we do not yet reach all customers with this offer, especially since our voice solutions are a voluntary offer for our customers.
Our phonebots complement our telephone customer service. The advantage for our customers is that they can deal with their concerns directly without having to wait. In a survey conducted in 2020 in our customer workshop, this was the top reason (58 percent) for preferring to speak to a phonebot instead of an employee.
But for which customer concerns is the use of a phonebot suitable?
There are a number of tasks that phonebots can do very well, such as answering frequently asked questions, providing information or forwarding calls. In general, they are particularly well suited for tasks that are standardised and therefore have a predictable structure.
Their new colleague thus relieves the customer service staff, giving them more time for other, more complex customer enquiries in which a personal conversation with a human being plays an essential role.
For the use of a phonebot to be worthwhile from a business perspective, the customer concern should occur in sufficiently high volume.
When all the requirements for the use of a phonebot are met, we start development. We work in cross-functional Scrum teams and in close contact with the units in which the new phonebot colleague will be used.
Together, we define the dialogue steps that should, as far as possible, reflect a conversation with a real person. The dialogue should feel as natural as possible without pretending to be human. It is very important to us that our customers are aware that they are talking to a machine.
We then create an initial prototype that maps the simplest and most straightforward path through the process. Based on this, we continue to develop the dialogue. To do this, we use the dialogue management system "Parloa".
Parloa enables us to map a dialogue with all its steps, logic and conditions. Through integrable services, we can access interfaces, for example to retrieve information on the current status of a service process. This information is then incorporated into a text spoken by our phonebot.
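To make this concrete, here is a minimal sketch of how retrieved data could be woven into a phonebot statement. The function names and the status payload are illustrative assumptions for this article, not Parloa's actual API:

```python
# Hypothetical sketch: weaving data from an integrated service into a
# phonebot statement. fetch_claim_status() stands in for a real call to
# an interface behind the dialogue management system.

def fetch_claim_status(claim_id: str) -> dict:
    """Stand-in for a real service call; returns illustrative data."""
    return {"claim_id": claim_id, "status": "in review", "eta_days": 3}

def build_statement(claim_id: str) -> str:
    """Turn the retrieved status into a sentence for the TTS engine."""
    status = fetch_claim_status(claim_id)
    return (
        f"Your claim {status['claim_id']} is currently {status['status']}. "
        f"You can expect a decision in about {status['eta_days']} days."
    )

print(build_statement("12345678"))
```

The resulting sentence is then spoken by the synthetic voice rather than played back from a pre-recorded file.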
But how does such a dialogue actually work?
Each dialogue step is a node in our dialogue management system. It consists of the phonebot's statement and the caller responses we expect. With his answer, the caller in turn intends a certain reaction from the phonebot. That is why we call these expected answers "intents".
Phonebot: "Please tell me your insurance number."
Caller: "My number is 12345678."
However, each caller answers a little differently, even if they have the same intent. That is why we create a list of many different response examples for each intent and use it to train the phonebot. It is not necessary, and hardly possible, to record every conceivable answer. Instead, the phonebot calculates the probability that the caller's statement matches an intent. If there is a match, the dialogue continues at the node connected to that intent.
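The matching idea can be sketched in a few lines. This is a deliberately simplified stand-in: a string-similarity score replaces the statistical language model a real NLU engine would use, and the intent names and examples are invented for illustration:

```python
# Minimal sketch of example-based intent matching. A similarity ratio
# stands in for the probability a trained language model would compute.
from difflib import SequenceMatcher

# Illustrative intents, each with a small list of training examples.
INTENTS = {
    "give_number": ["my number is 12345678", "it is 12345678"],
    "number_not_at_hand": ["i don't have it handy", "i can't find it right now"],
}

def match_intent(utterance: str, threshold: float = 0.5):
    """Return the best-matching intent, or None if nothing is close enough."""
    best_intent, best_score = None, 0.0
    for intent, examples in INTENTS.items():
        for example in examples:
            score = SequenceMatcher(None, utterance.lower(), example).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    # Below the threshold we treat the utterance as unrecognised.
    return best_intent if best_score >= threshold else None

print(match_intent("My number is 12345678"))
```

If no intent clears the threshold, the dialogue falls into the fallback handling described below rather than guessing.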
We also validate details such as the insurance number or date of birth for completeness and plausibility, so that the phonebot can immediately ask again if what it has understood does not match the expected or possible formats. A date of birth of "30 February 1879", for example, fails twice: the day does not exist, and 142 years is not a valid age for a human being.
"Expect the unexpected"
But what if the caller does not answer as expected?
Phonebot: "Please tell me your insurance number."
Caller: "Oh, I don't have it handy right now."
A very good phonebot needs to handle such cases too. Nothing is more annoying than being asked the same question over and over again after such a statement. Our aim, as mentioned earlier, is to create natural dialogues, and a customer service employee would also have a suitable answer up his sleeve for such cases.
Ideally, the phonebot should be able to respond in context to every possible statement a caller makes. In our example, the phonebot could ask the caller whether he would like to arrange a callback to clarify his request later.
To avoid any misunderstanding: a phonebot does not have to and cannot have the right answer to everything. But a conversation must never end in a dead end. As a way out, the customer can, for example, be passed on to a human colleague if the phonebot can no longer help.
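This "never a dead end" rule can be sketched as a small escalation ladder. The state names and the two-step escalation are illustrative assumptions, not the exact logic of our dialogues:

```python
# Sketch of a fallback strategy: after an unrecognised answer the phonebot
# offers an alternative instead of repeating itself, and finally hands the
# caller over to a human colleague. States are illustrative.

def next_action(unrecognised_turns: int) -> str:
    """Choose the next dialogue step based on how often matching failed."""
    if unrecognised_turns == 0:
        return "ask_insurance_number"    # the normal question
    if unrecognised_turns == 1:
        return "offer_callback"          # e.g. "Shall I arrange a callback?"
    return "handover_to_human"           # never leave the caller in a dead end

print([next_action(n) for n in range(4)])
```

The key design point is that repetition is bounded: each failed turn moves the dialogue toward an alternative, never back to the same question indefinitely.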
The statements of our phonebots are not static audio files as they are known from many classic telephone hotlines. They are generated by a text-to-speech engine in real time and spoken by a synthetic voice.
The basis for this are the texts that we store in the dialogue management system.
This has many advantages: texts can be adjusted at any time without re-recording audio files, and dynamic information such as names, dates or processing statuses can be woven directly into a sentence.
There are many guidelines and best practices for writing bot texts. One of the most important for me is: "Listen, don't read." Written texts often seem easy to understand when you read them. But only when you listen do you realise whether this is really the case. Does it feel natural? Does it fit the context of the dialogue? All this becomes tangible only when listening.
Besides the linguistic aspects, there is also a technical aspect: the synthesis voice.
The latest synthesis voices already come very close to the sound of human voices. However, some terms or phrases still sound very unnatural, which is why listening to the statements is such an important step in the creation of a phonebot.
After several sprints in which we laid the technical and dialogue foundations, the new phonebot colleague is finally ready for use.
Along the way, it has undergone several training sessions to optimise the language model, i.e. the probability of correctly recognising the caller's intention. This, too, requires teamwork: many colleagues provide the training data we need through test calls.
We are always very excited when a new phonebot receives its first calls. After all, it is only under real-life conditions that we find out whether the performance is already at a good level.
As with any new colleague, not every call will be routine yet: The phonebot will misunderstand callers or callers will confront it with concerns we haven't taught it (yet).
Therefore, our work does not stop with the go-live. The coming weeks will be marked by intensive analysis of data, short iterations to further improve the language model and many conversations with the responsible unit to further optimise the Phonebot.
All of this will be data-driven. This means, for example, that we run A/B tests to check different variants or ask the callers whether a piece of information was helpful or what the callers missed. In this way, we learn a lot from our customers and can further develop our phonebots so that they offer the best possible user experience.
It is important to plan these steps from the start and, if necessary, not to route one hundred percent of incoming calls to the phonebot immediately. It is better to gradually hand over more call volume as it learns. After all, the phonebot is not an end in itself; it should provide real added value for customers, colleagues and ERGO.
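A gradual rollout of this kind can be sketched as a simple traffic split. The hashing trick, the caller identifier and the routing labels are illustrative assumptions; the point is only that the bot's share of calls is configurable and the decision is stable per caller:

```python
# Illustrative sketch of a gradual rollout: only a configurable share of
# incoming calls is routed to the phonebot; the rest go straight to staff.
import hashlib

def route_call(caller_id: str, bot_share: float) -> str:
    """Route a call to the phonebot for roughly bot_share of callers."""
    digest = hashlib.sha256(caller_id.encode()).digest()
    bucket = digest[0] / 256          # stable pseudo-random value in [0, 1)
    return "phonebot" if bucket < bot_share else "human_agent"

# With bot_share=1.0 every call goes to the bot; with 0.0 none do.
print(route_call("caller-42", 0.3))
```

Because the bucket is derived from the caller identifier, a given caller consistently lands on the same side of the split while the share is being ramped up.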
Since mid-2020, we have placed around ten phonebots in jobs in this way. Due to the very good results, we now also support international colleagues in addition to the German ERGO companies. It's a whole new challenge that we're taking on together with great pleasure.
Text: Sven Vanoeteren