Thomas Hirschmann is an expert in behavioural data-driven transformation. In this interview, he talks about the role of behavioural psychology in AI-driven innovations in the insurance sector - and why humans will continue to play a role despite advancing technology. Not least because of the immense importance of empathy in a digital world.
At a recent talk hosted by the ERGO Innovation Lab in Berlin, one of the topics discussed was the role that people will still play in customer contact in the future. Opinions on the panel were somewhat divided. Where do you stand on this topic from the point of view of behavioural psychology?
Thomas Hirschmann: Probably somewhere in the middle. I am reminded of the shift that took place not so long ago from video stores to streaming services. I was still living in Munich at the time and asked an employee of one such video store whether he had heard of Netflix and what he thought of it. Tellingly, he replied: "Streaming will never catch on, because people come to us for a very personal recommendation." We all know how that story turned out. In general, I would say that the human factor takes a back seat in direct customer contact in the long run. However, I would express one reservation in this context ...
Which is?
The same as we see in other highly sensitive industries - in medicine, for example. We will still need accountability in the future. However, from a purely legal point of view, machines cannot assume responsibility. Moreover, there will always be so-called "edge cases" - cases so unusual that the machine cannot classify them adequately and therefore cannot make a suitable decision, simply because they lie outside the range of learned parameters. For a sufficiently "good" decision, a human being is actually needed. We must also be able to reliably identify such special cases within AI-augmented processes, and in the end only people with sufficient intelligence, life experience and empathy can do this meaningfully. That is why there will be no completely AI-controlled final decisions in the near future, especially in decision-sensitive areas.
In other words, in your view, we humans are programmed to trust people at certain moments?
I can imagine that artificial intelligence works well for 90 per cent, maybe even 95 per cent, of the standard concerns of an insured person. We have recently made massive progress in AI and will continue to do so.
Where efficiency is concerned, the machine always has an advantage. The other side of the story is personal preference. Every person has individual preferences, and these can be learned from the data over time. But that does not mean the machine can actually understand and act on them, because we humans naturally also have a strong emotional component, and emotional preferences are difficult to model. That is why, for example, there will always be people who are willing to pay more for a product like insurance if it guarantees them a human contact: someone to whom they can explain their individual situation, who understands them and is committed to making sure payment is made when a concrete insurance case occurs - rather than having the case handled by an algorithm that refuses payment because, from the AI's point of view, there was an anomaly at some point, even though that anomaly would be perfectly comprehensible and explainable to a human insurance broker.
Which data and findings from behavioural psychology are particularly relevant for the insurance industry when it comes to AI applications, and why?
First and foremost, the insight that humans are not necessarily "rational" beings. We are naturally endowed with reason, or rather fundamentally capable of acting "intelligently", i.e. in a considered and goal-oriented manner. But that does not mean we always do so. And even where we do, the time horizon of our behaviour is usually very short-term, for evolutionary reasons. In the future, AI, as an ambient reference system, has the potential to optimise the quality of our decisions in such a way that we can also achieve a long-term improvement in our quality of life. To achieve this, however, we must systematically familiarise AI with findings from the field of "Behavioural Economics" (keywords "Nudging", "Biases" ...), from group and social psychology (keywords "Peer Pressure", "Fear of Missing Out" ...) and above all from neuropsychology (keywords "Free Will", "Role of Emotions", "Life History Theory" ...). It is very important to realise that we do not necessarily have to understand ourselves completely in all of this. In that sense, the whole debate about a "transparent AI" is fundamentally misguided, insofar as humans, as David Ogilvy once perfectly put it, "do not think what they feel, do not say what they think and do not do what they say".
How, then, must an AI behave in practice so that we actually engage with it?
Customers want to feel that their individuality is recognised. This is precisely where the machine has so far often reached its limits. I say "so far" with reference to a series of exciting technical innovations such as the SoundStorm system from Google. In the future, it will be possible to artificially generate completely authentic-sounding human speech in real time, including intonation, accent and even breathing sounds. For customer service, this means that in the initial contact, the insurance customer speaks with a machine that is indistinguishable from a real human being. This creates an interaction between human and machine that comes very, very close to a human-to-human dialogue. The only thing still missing in AI is the empathy factor, i.e. emotional empathy. And here, too, there has been impressive progress. In a blind medical study published in May 2023, the AI outperformed the human doctors involved in the eyes of patients, particularly with regard to perceived empathy. In other words, "Doctor AI" was perceived as more empathetic than a human doctor, without patients knowing that it was a machine. That is quite striking. Especially with regard to the basic quality of many "services of a higher kind", the machine will not only have a lot of potential, but may also be able to do a lot better than humans. Ultimately, this may be because an AI doctor is never stressed, has no private life and needs no breaks.
Of course, the machine cannot yet really deliver this "being recognised as an individual" that I mentioned earlier. That is precisely why there must always be a possibility of escalation to a human being. So there has to be a process whereby the machine recognises its own interaction and effectiveness limits according to certain parameters, discloses them and then makes it transparent that it is now time to hand over to a human employee. And it is precisely at this boundary that very exciting human-machine projections will occur: customers may already have become so accustomed to a machine profile that the human employee must then carry the AI personality forward and use it as a reference in his or her customer interaction.
Conversely, this also means that the machine's new human colleague must be extremely well trained and qualified in order to correctly classify and evaluate all the decisions the AI has already made. Trust in AI is, after all, always based on how well humans can comprehend and understand the categorisations and decision proposals it makes.
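To make the handover process Hirschmann describes more concrete, here is a minimal sketch: a hypothetical assistant scores each dialogue turn against its own limits and, when a threshold is crossed, discloses this and escalates to a human. All names, parameters and thresholds are illustrative assumptions, not any real insurer's system.

```python
# Illustrative only: a toy version of the escalation process described above.
# The machine tracks parameters of its own interaction quality, discloses its
# limits and hands over to a human employee. All thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class TurnAssessment:
    """The assistant's self-assessment after each dialogue turn (all 0..1)."""
    intent_confidence: float      # how sure the model is about the customer's request
    in_distribution: float        # how close the case is to the learned parameters
    customer_frustration: float   # estimated from the customer's tone and wording


def should_escalate(a: TurnAssessment) -> bool:
    """True once the machine has reached its interaction or effectiveness limits."""
    return (
        a.intent_confidence < 0.6        # the model no longer understands the request
        or a.in_distribution < 0.5       # an "edge case" outside the learned parameters
        or a.customer_frustration > 0.7  # the emotional situation calls for a person
    )


def next_step(a: TurnAssessment) -> str:
    """Disclose the limit transparently before handing over, as described above."""
    if should_escalate(a):
        return ("This case needs a closer look than I can give it, so I am "
                "connecting you with a colleague who can handle it personally.")
    return "continue the automated dialogue"


# Example: an unusual claim outside the learned parameters triggers the handover.
print(next_step(TurnAssessment(intent_confidence=0.9,
                               in_distribution=0.3,
                               customer_frustration=0.2)))
```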
This sounds close to "Westworld" and will hopefully take a while. Another exciting question about insurance and AI is whether or how AI will help insurance customers to make better decisions about their individual insurance needs in the future. Do you already have an idea from your discipline?
In fact, I already have a whole range of ideas. Especially in the area of more sustainable behaviour, I see enormous potential to optimise the decisions of insured persons in such a way that not only their individual, very personal needs are met, but all of our needs and life situations improve in the long term. AI could help us here, for example by providing us with precise forecasts of the aggregated, long-term consequences of our behaviour, communicating them to us in a clear way and incentivising sustainable behaviour. In this way, it could contribute to the urgently needed change in our way of thinking and behaving, in order to keep our planet viable and liveable for future generations.
Interview: Alexa Brandt