Artificial intelligence (AI) is becoming increasingly important in society as a whole and especially in the economic context. It intervenes in processes, modifies them and takes over tasks that were previously carried out by humans. For ERGO, it is a growth driver and is of central importance for the future direction of the company. Rolf Mertens, Head of Advanced Analytics & Robotics at ERGO Group AG, talks in this interview about the need for clearly defined ethical principles.
Mr Mertens, many people are sceptical about the development and use of AI. They fear loss of control and risks that cannot be assessed. How does ERGO counter such concerns?
The reservations are understandable, because AI lacks the emotional and empathetic dimension that is an important basis for our coexistence. In purely rational processes, however, these concerns carry a different weight. It is therefore our task to answer legitimate questions unequivocally and to clearly delineate the respective roles of humans and technology within the various processes.
What does that mean in concrete terms?
Automation based on AI has many advantages in certain clearly defined areas. For example, we can significantly reduce error rates and accelerate processes. At the same time, people are involved in or affected by all of these processes in all phases. Therefore, there must be no doubt that we operate without exception within a value system based on responsibility towards the individual and society, on security - especially data security - and on transparency. This is also the view of the European Commission, which some time ago formulated ethical guidelines and associated requirements for the trustworthy use of artificial intelligence.
What central statements does the ERGO Code of Values make, and what considerations formed its basis?
As a major insurer, we perform tasks that directly affect people and their wishes, their life plans and their future prospects. That is why we wanted to lay down binding guidelines, which we call "guard rails for dealing with artificial intelligence". A strong impulse for this came from our works councils. We gladly took up their request to be involved in decisions on every single AI application, and as early as the beginning of 2019 we jointly developed our guard rails, which we published throughout the company. Adhering to the guard rails is the task of everyone involved in developing the solutions. This includes, for example, the respective specialist departments, IT and, of course, our colleagues from the Advanced Analytics team. We therefore operate on a broad and qualified basis.
What message does ERGO convey in the guard rails?
In the preamble, we make it clear that we are aware of the risks as well as the opportunities in implementing AI-based solutions, that people come before technology when weighing up opportunities and risks, and we refer to the Munich Re Group Code of Conduct. Following the preamble, six commitments, our guard rails, document our convictions in detail. We state there that all decisions must be people-oriented: acting responsibly towards our customers and our sales partners is the benchmark for ERGO as a company and for each and every one of our employees. Our innovative AI solutions always influence work processes and procedures, and the many people involved in them also represent our commitment to the outside world. We act for the benefit of all, because artificial intelligence, as we use it, has effects on society as a whole. Ultimately, offering sustainable products, solutions and services and continuously improving the customer experience is a strong motive for ERGO to attract and retain customers. To counteract and rule out any discrimination against individuals or groups that could result from artificial intelligence, we develop and use algorithms according to the highest technical and scientific standards - that is another promise. The comprehensive trove of data we use is the foundation of the high-quality solutions we strive for and realise. Since all the algorithms we develop are based on this concrete data, some of which is personal, personal rights and data protection naturally have the highest priority.
So much for the theory. Could you give us an example from practice where the guard rails become visible?
The development of a new AI algorithm is always a process at the beginning of which we must clearly define what we actually want to predict. For example: what is the probability that a customer will cancel his insurance contract at the end of the agreed term? Once we have answered this question, we have to decide which data we may use for this purpose. At this point, the guard rails already come into play in a recognisable way, because we have to prevent biases from arising through the use of data that carries the risk of violating our ethical principles - for example, the unconditional avoidance of discrimination. We speak of Explainable AI: we must be able to clearly understand and explain in what way and on what basis we arrive at the desired result. Take the reimbursement of medical bills, which we have implemented, as another example. Anyone who submits a medical bill to ERGO expects to be reimbursed promptly. What does an AI algorithm have to be able to do? It must independently decide which invoices are reimbursed immediately and when further checks or processing are necessary. To a certain extent, it must be able to emulate human cognitive abilities such as reading and understanding. We used training sets based on millions of data points, with the result that the algorithm makes decisions of high quality and in large numbers. When we apply this algorithm, the guard rails come into play again: in order not to create any injustices and not to make ourselves vulnerable, we never refuse a reimbursement based on an AI decision alone. That is still decided by the technical expert. So it is a good combination of AI and humans.
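The decision logic described here can be sketched in a few lines of code. This is purely an illustrative example, not ERGO's actual system: the threshold, the class names and the idea of a single confidence score are invented for the sketch. What it shows is the guard-rail principle that an AI score may approve a reimbursement automatically, but may never refuse one on its own.

```python
# Hypothetical sketch of AI-assisted invoice triage with a
# human-in-the-loop safeguard, as described in the interview.
from dataclasses import dataclass

# Invented cut-off: only very confident cases are approved automatically.
APPROVE_THRESHOLD = 0.95

@dataclass
class Invoice:
    invoice_id: str
    approval_score: float  # model's confidence that the bill is reimbursable

def triage(invoice: Invoice) -> str:
    """Return 'auto_approve' or 'human_review' - never an automatic refusal."""
    if invoice.approval_score >= APPROVE_THRESHOLD:
        return "auto_approve"
    # Anything the model is unsure about, including likely refusals,
    # is routed to a human claims expert, as the guard rails require.
    return "human_review"

print(triage(Invoice("A-1", 0.99)))  # auto_approve
print(triage(Invoice("A-2", 0.40)))  # human_review
```

The key design choice is that the function has no "refuse" branch at all: a rejection can only ever be issued by the human expert, so the algorithm cannot create the injustices the guard rails are meant to prevent.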
What difficulties do you encounter?
They are not so much difficulties as challenges: meeting our own standards at all times and in full. Our guard rails are not just fine words but binding instructions for action, against which we must and want to be measured. Our own process description already requires this. It means that we will only use an AI algorithm productively if everyone involved is able to keep all the promises formulated in our guard rails in every phase.
Thank you very much for this interview, Mr Mertens!
Interview: Martin Sulkowsky