Why an AI needs rules


Ethics of algorithms

Digitalisation & Technology, 09.06.2021

Algorithms permeate our everyday digital lives more than we often realize: from risk assessments for loans, to product recommendations on e-commerce sites, to image optimization in Adobe Photoshop, automated decision-making tools analyze us, make suggestions, and thus subtly steer us. But do they really do so without ulterior motives, and who decides how they work?

Automated decision support and algorithm bias

Systems for automated decision-making also include "chatbots, data-based targeted advertising, or navigation systems," as Jessica Wulf of AlgorithmWatch explains. The nonprofit organization researches AI-enabled services and their societal impact. Such research matters because in everyday life we often don't even notice that an AI has made, or helped to make, a decision - for example, when job postings are shown more often to female users than to male users, as AlgorithmWatch demonstrated in an experiment.

Rare extreme cases of prejudice illustrate all the more how complex the issue actually is: a study two years ago showed that image recognition systems, such as those used in autonomous vehicles, were less able to recognize people of color. What was the reason for this so-called algorithmic bias? First, the image recognition software was trained on too few images of people with dark skin. Second, the model gave too little weight even to the few photos of people of color it did see. In another recent case at Harvard's university hospital in the US, an algorithm prioritized physicians who worked from home for Covid-19 vaccination over nurses on site at the clinic. What had gone wrong? Hospital staff were neither involved in the development of the prioritization algorithm nor able to give feedback once it was deployed.
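The first of these failure modes has a well-known technical counterpart: if one group is heavily under-represented in the training data, a model trained on the plain average loss will largely ignore it. One common mitigation is to weight samples inversely to their group's frequency. The following is a minimal sketch of that idea on entirely made-up data - the group labels, dataset, and model choice are illustrative assumptions, not details from the cases above.

```python
# Minimal sketch: countering under-representation by re-weighting samples.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical dataset: 950 samples from group A, only 50 from group B.
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 2, size=1000)           # binary label, e.g. "person detected"
group = np.array(["A"] * 950 + ["B"] * 50)  # heavily imbalanced group attribute

# Weight each sample inversely to its group's frequency, so the minority
# group contributes as much to the training loss as the majority group.
freq = {g: (group == g).mean() for g in np.unique(group)}
sample_weight = np.array([1.0 / freq[g] for g in group])

clf = LogisticRegression()
clf.fit(X, y, sample_weight=sample_weight)
```

Re-weighting addresses only the imbalance part of the problem; it cannot fix bias that is already baked into the labels or the problem framing itself.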

“The responsibility for the ethical design of algorithmic systems is similar to that of an orchestra: every musician is jointly responsible for how the music sounds, and if something sounds wrong, you can't blame the instruments.”

Julia Gundlach, Project Manager at Algorithmenethik

To err is human - that's why AI doesn't know any better

Who is responsible for the ethics of algorithms? Is it the developer who programs the algorithm and embeds a prejudice in the code? Is it the algorithms themselves that make these decisions? In the field of natural language processing, for instance, we are amazed at how GPT-3 arrives at results of its own and produces plausible-sounding articles.

Julia Gundlach, project manager of the Bertelsmann Stiftung's "Ethics of Algorithms" project, comments: "The responsibility for the ethical design of algorithmic systems is similar to that of an orchestra: every musician is jointly responsible for how the music sounds, and if something sounds wrong, you can't blame the instruments. In the world of algorithms, too, it is always people who decide - that also holds for machine learning." Timo Daum, author of "The Artificial Intelligence of Capital," takes an even more nuanced view: "Are the programmers really the ones who have an overview of the mechanisms and goals of the algorithms they are working on, and who at the same time have the power and influence to recognize and change discriminatory code or biased data sets? Probably not. But the idea that algorithmic bias is unconsciously written into the code by male programming nerds also overestimates the scope of those whose job it is to translate the business logic of their clients into executable code according to strict rules."

Handbooks and guides for ethical practice

Only all of us together can put a stop to this - and the responsibility rests on many shoulders. Julia Gundlach of the Bertelsmann Stiftung's "Ethics of Algorithms" project calls for responsibility for the ethical design of algorithms to become a central part of corporate culture and of collaboration between organizations. "How this responsibility is specifically distributed in an authority or company must be discussed and clearly defined at an early stage," says Gundlach. To this end, the Bertelsmann Stiftung, together with the iRights.Lab and around 500 participants, developed the so-called Algo.Rules for the ethical design of algorithmic systems. In public administration alone, eleven different role profiles were identified that are involved in the design of algorithmic systems and thus also assume responsibility. "This makes the need for a clear assignment of responsibility particularly clear," she explains.

Based on the Algo.Rules, a handout for digital administration and an implementation guide for executives and developers (both PDFs in German) were also created to put the ethical principles into practice. "In the best case, companies use such guiding questions as a basis for developing their own suitable principles and concrete operationalization steps, since corporate needs and prerequisites differ in each case. This requires the commitment of everyone involved, so that these changes are not only defined but also implemented," says Julia Gundlach.

“It is very difficult to notice that I am disadvantaged by an automated system. For one thing, because of the lack of information that an automated system is being used. For another, I have no comparisons to other people.”

Jessica Wulf, AlgorithmWatch

Recognizing discrimination risks in the first place

Until such guidelines are put into practice, it is up to the users: everyone should take a very close look at automated decision-making aids and question them. But this is exactly where the problem lies. Help comes from Unding, which is also supported by the Bertelsmann Stiftung - and from AutoCheck, a new project by AlgorithmWatch. Until July 2022, project manager Jessica Wulf and her team are working on guidelines that enable anti-discrimination counseling centers to recognize discrimination risks in the first place. She reports: "It is very difficult to notice that I am being discriminated against by an automated system. For one thing, because of the lack of information that an automated system is being used here. For another, I don't have comparisons to other people and their results to determine that I've been treated differently - and possibly in a discriminatory way."

At the moment, the team is tracking down case studies and also relies on tips from users. In one concrete example reported to AutoCheck, some images of Black people appeared on an iPhone 8 under the label "animal" - they had been automatically assigned to that category. "The results of the project, instructions for action and workshop concepts, are primarily aimed at employees of anti-discrimination counseling centers. They are meant to strengthen their competencies so that they are better able to assess and recognize risks. This should provide better support for those affected in concrete cases of discrimination," says Jessica Wulf.

Making people aware of possible discrimination by automated decision-making aids is therefore at least as important and fundamental as a set of rules for the development of those aids. And if we are going to start questioning, let's be more fundamental about it: Jessica Wulf calls for a broader social discussion about whether and where we can use automated systems sensibly - and where it is better not to.
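Wulf's point about missing comparisons is also exactly what a technical audit supplies: only by aggregating outcomes across many people does it become visible that one group is treated differently. The sketch below shows the simplest form of such a check - an error-rate comparison per group - on simulated data. Everything here is hypothetical; real audits like AutoCheck's rely on collected case studies, not simulations.

```python
# Minimal sketch of a group-level audit: compare error rates per group.
# All data is simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical audit log: system decisions and true outcomes for two groups.
group = np.array(["A"] * 500 + ["B"] * 500)
y_true = rng.integers(0, 2, size=1000)   # correct outcome
y_pred = y_true.copy()

# Simulate a system that errs far more often for group B.
flip = (group == "B") & (rng.random(1000) < 0.20)
y_pred[flip] = 1 - y_pred[flip]

# The disparity is invisible to any individual, but obvious in aggregate.
for g in ("A", "B"):
    mask = group == g
    error_rate = np.mean(y_pred[mask] != y_true[mask])
    print(f"group {g}: error rate {error_rate:.1%}")
```

A single affected person sees only their own result; the per-group comparison above is the view that counseling centers and researchers need in order to recognize a discrimination risk at all.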

Text: Verena Dauerer

Your opinion
If you would like to share your opinion on this topic with us, please send us a message to next@ergo.de.

Related articles

Digitalisation & Technology 08.11.2021

"AI and ethics - great things can come out of it"

What about the ethical aspects surrounding the use of artificial intelligence? This is the subject of a podcast by Ludwig Maximilian University in Munich. In the //next interview: Dorothea Winter, head of the "PhiPod" editorial team.

Digitalisation & Technology 25.05.2022

AI initiatives fail more often than people think

The post is concerned with a much more mundane question: why do many companies find it so difficult to create algorithms that really achieve something? The advancement of AI is nowhere near as great as many people think, says ERGO CDO Mark Klein in his current blog post on //next.

Digitalisation & Technology 23.11.2020

What AI is already doing at ERGO

What opportunities does AI open up especially in the insurance industry? Rolf Mertens and three other ERGO colleagues recently discussed this on YouTube with other AI experts from the start-up scene as well as from KPMG, Henkel and Vodafone.