AI & Robotics

Dark processing with AI: The small difference between “paperless” and “digital”

German insurance-speak uses shades to refer to degrees of process automation: “light” processing means documents are handled manually, “dark” processing means an automated process is implemented, and “grey” processing involves a bit of both. At the far end of this spectrum, “dark” processing, also known as end-to-end (E2E) automated processing, is one of the most important automation steps the insurance industry has seen. At ERGO alone, customers submit more than one million documents, such as medical bills, every week. Automated processing means inbound correspondence is governed by a fixed set of rules, and no intervention from staff is required. “This in itself is nothing new, but bringing artificial intelligence (AI) into play could boost it to no end,” writes ERGO CDO Mark Klein in his longpost here on //next.


Customers are extremely satisfied with the invoice-submission app offered by ERGO’s health insurer DKV. All they have to do is register, take a photo of their medical bill, and send it off. A few seconds later ERGO is already processing their invoice. As soon as the process is complete, the app sends customers a confirmation, with an overview of the claim attached. In this way, we offer our health insurance customers a straightforward, convenient and fully digital channel, and more and more people are taking advantage of it.

From a processing perspective, however, the most exciting part is what happens between submitting the invoice to the app and sending the push notification. How many of the tens of thousands of medical bills that ERGO receives every week do staff still have to call up on their computer and check line by line using a “light” process? And for how many does the three-step process – classification, extraction and prudent decision-making – run automatically “in the dark” by means of intelligent data processing?

Beyond helping to make the ERGO offices paper-free, the option to submit photos of invoices via a customer app also provides an input channel for the fully automated settlement of insurance claims. Now, AI, which can detect case information on a medical bill that conventional approaches haven’t been able to recognise, is set to give automatic processing a real boost.


Customer documents and invoices were already being processed automatically 15 years ago. This automatic verification worked (and often still works) on the basis of what are known as if-then rules. For the calculation of health insurance claims, for example, these rules work like this: if the insured person is covered by plan X, indemnification for dental prosthesis may not exceed Y%. Such rules can already be applied to a surprisingly high number of automated claims decisions.
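Such an if-then rule can be pictured in a few lines. A minimal sketch, with invented plan names and percentages standing in for real tariff parameters:

```python
# Minimal if-then rule for a dental prosthesis claim: if the insured
# person is covered by plan X, reimbursement may not exceed Y% of the
# invoice. Plan names and percentages are invented, not real DKV tariffs.
DENTAL_COVERAGE = {  # plan -> maximum reimbursable share
    "PlanA": 0.80,
    "PlanB": 0.60,
}

def reimbursable(plan: str, invoice_amount: float) -> float:
    """Apply the rule: payout is capped at the plan's percentage."""
    rate = DENTAL_COVERAGE.get(plan, 0.0)  # unknown plan: nothing covered
    return round(invoice_amount * rate, 2)
```

A rule engine of this shape is fully deterministic, which is exactly why it scales well for clear-cut cases and poorly for ambiguous ones.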

However, given the approximately 20,000 possible codes for medical diagnoses, the thousands of different invoice formats and the more than 1,000 DKV insurance plans available to customers, the system of if-then rules inevitably, and quite quickly, reaches its limits. The problem is one of ambiguity: is the system capable of making clear distinctions when extracting information (step 2) and of making the right decision (step 3)?

Some clarity can already be lost in the data classification stage (step 1). If, say, a customer registers a new residential address, this could have several reasons. It might mean: send all correspondence to this address from now on. Or it could mean that the insurance risk in the new neighbourhood or home has changed. If that’s the case, the customer’s home insurance would need to be updated accordingly. The information “change of address” can thus have various implications for the insurer.

If-then rule systems aren’t usually capable of solving such complex, multidimensional cases. A well-trained AI model, on the other hand, has no problem handling such multidimensionality!
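The multidimensionality of the address example can be illustrated with a toy multi-label sketch, in which one notice can imply several downstream actions at once. The keywords and action names are invented for illustration; a real system would use a trained model rather than keyword matching:

```python
# Toy multi-label view of a "change of address" notice: one input can
# imply several downstream actions. Keywords and action names are
# invented for illustration only.
def implications(text: str) -> set:
    actions = {"update_correspondence_address"}  # always implied
    # a move may also change the insured risk for home insurance
    if "moved house" in text.lower() or "home insurance" in text.lower():
        actions.add("reassess_home_insurance_risk")
    return actions
```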

AI can handle multidimensional classification, extraction and decision-making

AI follows the same pattern as the three steps described above. The process begins with classification, which is similar to sorting customer correspondence into the relevant departments’ inboxes, just as was done manually before. Correspondence relating to a change of address goes to the department that handles address changes, hospital bills go to the inpatient department, and prescriptions go to the prescriptions department.
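Step 1 can be pictured as simple routing. A toy sketch, with keyword matching standing in for a trained classifier and invented department labels:

```python
# Step 1, classification, pictured as routing to a department inbox.
# Keyword matching stands in for a trained classifier; the department
# labels are invented for illustration.
ROUTES = {
    "address": "address_changes",
    "hospital": "inpatient",
    "prescription": "prescriptions",
}

def classify(text: str) -> str:
    t = text.lower()
    for keyword, department in ROUTES.items():
        if keyword in t:
            return department
    return "manual_review"  # anything unrecognised goes to a human
```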

The next step is to extract the thematic data. In the prescriptions department, the relevant data is read from the invoice: nasal drops for €10.80, plus eye drops for €5.60. That adds up to €16.40. But the amount to be paid out to the insured person is still not clear. This is verified in the third step, when the system checks which services the plan covers (performed by our “tariff engine”). However, we also check whether a given procedure is medically justifiable.
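Steps 2 and 3 can be sketched together: the “extracted” line items are summed, then a mocked-up tariff check decides the payout. The coverage rates here are invented for illustration, not real plan rules:

```python
# Steps 2 and 3 sketched together: extracted line items are summed,
# then a mocked tariff check decides the payout. The coverage rates
# are invented, not real plan rules.
ITEMS = [("nasal drops", 10.80), ("eye drops", 5.60)]  # step 2 output

COVERED_RATE = {"nasal drops": 1.0, "eye drops": 0.5}  # step 3 mock tariff

def invoice_total(items):
    """Sum of all extracted line items."""
    return round(sum(amount for _, amount in items), 2)

def payout(items):
    """Amount actually reimbursed after the tariff check."""
    return round(sum(amount * COVERED_RATE.get(name, 0.0)
                     for name, amount in items), 2)
```

The gap between `invoice_total` and `payout` is exactly what the third step exists to determine: the invoice amount is not yet the reimbursable amount.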

This is not as simple as it sounds. For example, it’s down to the insurer to ascertain whether a nose correction job has been performed for medical reasons or on cosmetic grounds. The insurance company pays out for medically justified treatment, but generally not for cosmetic treatment. But the reasons for medical justification can be incredibly diverse, including accidents that cause damage to the face and nose, of course, but also mental illness. The old systems of rules would fail completely here, but AI can handle it.


AI recognises patterns by means of neural networks, which learn relationships between highly diverse elements, and it requires good training. And it’s getting better all the time, as we can see from how far image analysis has come. AI overtook humans in its ability to classify images several years ago and is now more reliable at this than human intelligence. For example, AI can identify the dog pictured in 96 out of 100 photos. We humans, on the other hand, manage about 93%, and it takes us longer.

In text analysis, too, AI is now on the cusp of overtaking humans when it comes to how reliably it can classify content. But let’s stick with the medical invoice with hundreds of features to be analysed as our example. A system called optical character recognition (OCR) reads the printed text from an image. Then, intelligent character recognition (ICR), an advanced form of OCR, analyses the order of the letters to identify the most likely words. The match rate for extraction in texts is often still below 50%, which is lower than with rule-based systems, but AI is catching up fast.
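The ICR idea of turning a noisy character sequence into the most likely word can be illustrated with simple fuzzy string matching. The vocabulary here is invented, and real ICR systems are far more sophisticated:

```python
from difflib import get_close_matches
from typing import Optional

# Toy illustration of the ICR step: given a noisy character sequence
# from OCR, pick the most likely word from a known vocabulary.
# VOCAB is an invented stand-in for a real lexicon.
VOCAB = ["nasal", "drops", "eye", "prescription", "invoice"]

def most_likely_word(noisy: str) -> Optional[str]:
    """Return the closest vocabulary word, or None if nothing is close."""
    matches = get_close_matches(noisy.lower(), VOCAB, n=1, cutoff=0.6)
    return matches[0] if matches else None
```

So a misread like “dr0ps” can still be resolved to “drops”, while a sequence that matches nothing is handed back as unresolved rather than guessed at.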

Next challenge: Making unstructured data available in the text memory

So the technical infrastructure is there to start using AI in automated processing. That said, it will be a few years yet before we’ll be able to fully exploit the advantages that AI offers. Why? It’s to do with data. In addition to AI competence, data, or rather how data is structured to allow it to be interpreted correctly, is the real foundation on which everything else is built. And until we have a way to clearly identify entities in medical bills and to make valid decisions, using AI would do more harm than good.

That’s why the initial assessment, preparation and analysis of the data is so important for modelling and training AI algorithms. Only when that has been perfected can AI do its job in all three steps of the process. To this end, we are building a complete analytics content hub, or a text memory, here at ERGO, which we’ll use to make unstructured data available.

Later, this will serve as the central set of training data for all AI models to use, allowing them to continuously enhance their efficiency. As things stand now, we’re getting a promising match rate when we use AI to read outpatient medical bills.

Once the text memory is up and running in a fully structured way, we’ll be able to scale the AI algorithms as we wish. It’s here that we gain the advantages over the old rule logic. AI will pave the way for us to roll out full-scale automated processing across hundreds of cases. That wasn’t possible with the old rule-based systems, as these can be used for only a limited number of customer concerns. However, the two approaches are dependent on one another: AI would not be able to function without the existing “dark” automated processes as its basis.


You might be wondering what this means for staff. The answer is, we still need them! It will take 20 to 30 years before the AI approach can work on its own. In the meantime, we’ll keep working in the grey area – with processes neither fully in the dark nor fully in the light, as it were. Humans and machines will help each other out.

For daily hospital allowance claims, AI will indicate to staff which cases should be checked. And humans will carry out systematic spot checks to ensure the quality of the AI. Especially when faced with “new” circumstances, our AI is not yet clever enough and will still rely on the support of our specialists for quite some time to come.

But human-computer interaction is already sparing many customers unnecessary checks, and saving the insured community money as well. Another benefit for customers who use the app is that they receive their notification, a push message from DKV, even earlier. Because that’s precisely what AI does: it shifts everything up a gear.

Text: Mark Klein