AI & Robotics

AI hype – myths are dangerous and long-lasting!

Artificial intelligence has become the buzzword of buzzwords in the digital world, finds ERGO CDO Mark Klein. Much is labelled as AI that is actually miles away from it. These exaggerations, these myths, are dangerous in two respects:
On the one hand, the exaggerations blur the view of where AI can and cannot really help. On the other hand, they fuel the scepticism of those who think AI will eventually rule the world.


My way to work is one of the most beautiful ones. From home to my office I ride a few kilometres along the Rhine, practically always by bike, sometimes with a podcast on. I love podcasts! Whether it's a feature on new forms of teaching or Richard David Precht calling for digital ethics, it doesn't matter. The algorithm of my streaming app keeps serving me new suggestions to listen to. The artificial intelligence (AI) behind it does a good job: by now it matches my personal taste very precisely.

Myths are dangerous

AI is omnipresent: we carry it with us all the time and have got used to its services, from music selection and navigation to voice assistants on our smartphones. It is also on the road in intelligent traffic guidance systems, in medicine with algorithms that can read MRI images better than radiologists, in crime fighting, even in art - AI is everywhere. So widespread, in fact, that the German TÜV association is now introducing a technical inspection for everything that carries AI. In that case my cell phone will need a sticker like my car.

I explicitly welcome this omnipresence; it makes life (mostly) easier. But in this article I would like to focus on something that does more harm than good to the success of AI. Artificial intelligence has become the buzzword of buzzwords in the digital world. Many things are labelled as AI that are in fact far away from it. These exaggerations, these myths, are dangerous in two ways. On the one hand, the exaggerations blur the view of where AI can really help and where it cannot. On the other hand, they fuel the scepticism of those who believe that AI will come to dominate the world.

Many things are labelled as AI that are in fact far away from it

As Chief Digital Officer of an insurance company, I have made matter-of-factness my priority. Insurance companies like ERGO are investing massively in AI. Although I still see the financial sector in the midfield compared with other industries, the trend is pointing upward. That is precisely why we want to leverage the strengths of AI for our processes, beyond the hype. At the same time, we as insurers in particular want to deal responsibly with the ethical aspects. And that includes saying that the scepticism towards an AI that will one day lord it over people is largely unfounded.

This latent fear may well have been brought to us by the best chess player in the world. When Garry Kasparov lost to the computer called Deep Blue in 1997, the myth of the all-powerful AI received a terrific boost. But Deep Blue probably won because the programmers had humanized it: Deep Blue hesitated, seemed insecure - all a bluff - and taken together, it had an effect. Kasparov lost.

In reality, a four-year-old child is still far superior to even the most sophisticated artificial intelligence. The British mathematician and author Hannah Fry considers us to be far away even from an artificial "intelligence on the level of a hedgehog". One indication of this is RoboCup, a robot tournament that a research group in the USA initiated a long time ago. The goal set for 2050: robots that can play soccer better than humans. Meanwhile, co-initiator Manuela Veloso says it will be years before a two-legged robot can walk like a human at all.

Robocup 2019 SPL Final - HTWK vs. B-Human


A four-year-old child is still superior to even the most sophisticated AI

If the AI landscape is roughly divided into the categories weak, strong and superstrong AI, the tally reads: superstrong = zero, strong = almost none, weak = 98 percent of all AI applications. Almost everything we call AI today falls within the weak area, also known as artificial narrow intelligence. In other words: very concrete, defined application areas based on machine-learning algorithms.

So my suggestion is to demystify and to get a clear view of the things we can already solve with AI today. And there are an impressive number of them (although not as exciting as a soccer-playing robot).

In the "weak" area there is a lot to do, to solve and to learn. We at ERGO are engaging more and more with weak AI. Within two years we have built up a 27-strong team of experts, which is now making customer and service processes more effective and efficient in large-scale production. Our algorithms are small, precise helpers for the colleagues in the service departments - and in that role they deliver remarkably high efficiency gains.

So my suggestion: demystify and get a clear view of the things we can already solve with AI today

In per-diem sickness insurance, an AI using gradient boosting (rather than deep learning) suggests to the clerks, based on attributes such as occupation, age and diagnosis, where they should perform spot checks. This saves many customers unnecessary checks and saves the insured community money.
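To make the idea concrete, here is a minimal sketch of such a spot-check scorer. Everything in it is an assumption for illustration: the feature names, the synthetic data, the label definition and the model settings are invented, not ERGO's actual system - only the technique (a gradient-boosted tree ensemble ranking claims by check-worthiness) matches the article.

```python
# Illustrative sketch only: features, labels and settings are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import OrdinalEncoder

rng = np.random.default_rng(0)
n = 400

# Synthetic claims: occupation and diagnosis are categorical, age is numeric.
occupation = rng.choice(["roofer", "office_worker", "nurse"], size=n)
diagnosis = rng.choice(["back_pain", "flu", "fracture"], size=n)
age = rng.integers(20, 65, size=n)

# Synthetic label: 1 = a past spot check on this claim found something.
label = ((age > 50) & (diagnosis == "back_pain")).astype(int)

# Encode the categorical columns as integers so the trees can split on them.
enc = OrdinalEncoder()
X = np.column_stack(
    [enc.fit_transform(np.column_stack([occupation, diagnosis])), age]
)

model = GradientBoostingClassifier(random_state=0).fit(X, label)

# Score claims; clerks spot-check only the highest-scoring ones.
scores = model.predict_proba(X)[:, 1]
to_check = np.argsort(scores)[::-1][:20]  # top 20 claims by risk score
```

The point of ranking instead of hard classification is exactly the trade-off the article describes: clerks review a small, high-yield slice, and everyone else is spared an unnecessary check.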

Another algorithm uses a deep-learning approach (word embeddings) to take care of incoming customer e-mails (300,000 per year) that are not assigned to a recipient - and pushes them to the right mailboxes.
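The routing idea can be sketched in a few lines: embed the words of a message as vectors, average them, and send the mail to the mailbox whose profile vector is most similar. The tiny hand-made "embeddings", the vocabulary and the mailbox names below are all invented for illustration; a production system would use learned, high-dimensional embeddings.

```python
# Toy sketch of embedding-based mail routing; vectors are hand-made.
import numpy as np

EMB = {  # invented 2-D "word embeddings"
    "invoice":  np.array([1.0, 0.0]),
    "payment":  np.array([0.9, 0.1]),
    "claim":    np.array([0.0, 1.0]),
    "accident": np.array([0.1, 0.9]),
}

def embed(text: str) -> np.ndarray:
    """Mean-pool the embeddings of the known words in a message."""
    vecs = [EMB[w] for w in text.lower().split() if w in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

# Each mailbox is represented by the centroid of a few typical keywords.
MAILBOXES = {
    "billing": embed("invoice payment"),
    "claims":  embed("claim accident"),
}

def route(text: str) -> str:
    """Return the mailbox whose profile is most cosine-similar to the mail."""
    v = embed(text)

    def cos(a, b):
        d = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / d) if d else -1.0

    return max(MAILBOXES, key=lambda m: cos(v, MAILBOXES[m]))
```

For example, `route("please find my invoice")` lands in `billing`, because "invoice" points toward the billing centroid; unknown words like "please" are simply ignored by the pooling step.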

A solution that has just been finalized helps to keep more customer transactions "dark". Dark means that a customer request is processed fully automatically, without intervention by a clerk (if a clerk has to intervene, we speak of "light"). With 5.7 million medical bills per year, every per mille that can remain dark thanks to AI helps.

With 5.7 million medical bills per year, every per mille that can remain dark thanks to AI helps
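A quick back-of-the-envelope calculation shows why a single per mille matters. The 5.7 million bills are from the article; the average clerk effort per "light" bill is my assumption.

```python
# Back-of-the-envelope: value of one additional per mille of dark processing.
BILLS_PER_YEAR = 5_700_000        # figure from the article
MINUTES_PER_LIGHT_BILL = 3        # assumed average clerk effort per bill

newly_dark = BILLS_PER_YEAR // 1000               # one per mille of volume
clerk_hours_saved = newly_dark * MINUTES_PER_LIGHT_BILL / 60

print(newly_dark)         # 5700 bills per year untouched by a clerk
print(clerk_hours_saved)  # 285.0 clerk-hours per year (under the assumption)
```

So each additional per mille of dark processing removes 5,700 bills a year from clerks' desks - small in relative terms, tangible in absolute ones.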

I believe that we as insurers will catch up rapidly in AI - on the one hand because we produce and manage vast amounts of data, and on the other because, for professional reasons, we handle data processing extremely carefully. Even a weak AI with the intelligence of a worm can do harm.

Then again, an algorithm is not bad or hostile by definition. It is as good or as bad as it was trained to be. Guidelines therefore play an important role for us. Whenever I am on my way to our AI lead, I pass the big board with our AI code. That is a good thing, every time!

By the way, a few days after his historic defeat, Kasparov said that Deep Blue had "played like a god for a moment". My navigation app also shows me the way - not godlike, but based on the data of other cyclists who reached the destination before me. That doesn't change the world, but it quickly gets me to my next appointment in the city. And that's all I want right now!

Mark Klein, 29.09.2020

Deep Blue vs Kasparov: How a computer beat best chess player in the world - BBC News