New Mobility

Autonomous driving in court: This is what a trial could look like

A lot still needs to happen before autonomous vehicles become commonplace on German roads. The legal framework, too, has yet to be fully created. A fictitious court hearing played out a worst-case scenario: Is the manufacturer of an autonomous vehicle liable for an accident? The hearing dealt not only with legal issues - fundamental ethical considerations were also discussed.

Image: An autonomous, self-driving vehicle on the motorway - smart car with head-up display and a graphical sensor and radar signal system (IoT concept).

The sensors register an oncoming truck in the vehicle's own lane. Even full braking can no longer prevent the accident. Swerve to the left? Out of the question: a group of cyclists is riding there. Swerve to the right? Also ruled out: a pedestrian is walking there. The AI guiding the vehicle faces a dilemma - and swerves to the right. The car hits the pedestrian, breaking her arm, causing abrasions and destroying her smartphone.

This accident did not happen, but it is an excellent example for discussing the legal, ethical and technical issues surrounding autonomous driving. In a fictitious court case, the Berlin-based Futurium took on these questions in cooperation with the Learning Systems Platform and Leibniz Universität Hannover. The event was recorded and can be watched on YouTube.

Is an AI allowed to make decisions about human lives?

Judge Daniela Sprengel presided over the case, with Simon Gerndt acting as the plaintiff's lawyer and Esra Karakoc representing the fictitious car manufacturer (all three from Leibniz Universität Hannover). Experts were called in for technical and ethical questions, and the livestream audience was able to submit questions via chat, which were passed on to the court, and to take part in a live poll.

This civil court hearing saw two opposing views: the plaintiff's side questioned whether an AI may choose to harm a person at all - especially if the system was advertised and approved as safe. After all, can something be considered safe if it can injure people? Counsel for the defendant company countered that no system could guarantee one hundred per cent safety. Moreover, the controlling AI had received the necessary approval. The driving system was safer than human drivers - to demand that it never make mistakes was disproportionate.

An AI does not decide - it calculates

It is difficult to punish an AI with a fine or imprisonment - it has no assets and, after all, it cannot be locked up either. The owner of the fictitious vehicle was not responsible for the accident: it was an autonomy level five vehicle, which requires no human intervention in the driving process. This is also how the legal framework for autonomous driving, which was only updated in 2021, sees the matter.

So the pedestrian had no choice but to direct her claim at the manufacturer. After all, it was the manufacturer who developed the system that caused her the damage. According to the plaintiff, the programming at the car manufacturer is to blame for the accident.

In his statement, technology expert Dr Tobias Hesse from the German Aerospace Center (DLR) made it clear that this view falls somewhat short of the mark. An AI is not a programme in which the programmers prescribe how the system should behave in given situations. Rather, the AI is trained by feeding it with countless traffic situations and data. Based on this training, the AI calculates how the vehicle should behave in dangerous situations. The AI's action in the accident scenario therefore cannot be directly attributed to a human decision. But can one speak of a decision here at all?
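To make this distinction tangible: in classic software, a developer writes the rule; in a trained system, the behaviour emerges from data. The following minimal Python sketch illustrates the difference with invented names and toy numbers - it is not the architecture of any real driving system.

    # Purely illustrative sketch - all names and data are invented,
    # not taken from any real driving system.

    # Classic programming: the developer prescribes the behaviour line by line.
    def rule_based_manoeuvre(left_blocked, right_blocked):
        if not left_blocked:
            return "swerve_left"
        if not right_blocked:
            return "swerve_right"
        return "full_brake"

    # Machine learning: the behaviour emerges from training examples.
    # A toy 1-nearest-neighbour "policy" trained on labelled situations.
    training_data = [
        # (distance to obstacle in m, own speed in km/h) -> manoeuvre seen in training
        ((80.0, 50.0), "full_brake"),
        ((30.0, 100.0), "swerve_right"),
        ((25.0, 90.0), "swerve_left"),
    ]

    def learned_manoeuvre(situation):
        # Return the manoeuvre of the most similar training situation.
        # No line of code states *when* to swerve - that emerges from the data.
        def dist(example):
            return sum((a - b) ** 2 for a, b in zip(example, situation))
        return min(training_data, key=lambda pair: dist(pair[0]))[1]

    print(rule_based_manoeuvre(left_blocked=True, right_blocked=False))  # swerve_right
    print(learned_manoeuvre((28.0, 95.0)))  # depends on the data, not on explicit rules

In the second function, the programmers never wrote down the condition under which the car swerves; they only chose the training data - which is why the accident cannot be traced back to a single prescribed line of code.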

AI needs to act comprehensibly

For ethics expert Prof. Armin Grunwald from the Karlsruhe Institute of Technology (KIT), the answer is no. Even with humans, one cannot speak of a decision when they act within a matter of seconds - here one would rather speak of acting on affect. Such an accident could then be considered a tragedy, something unavoidable that has its origin in the unpredictability of humans. No dilemma exists here - people react in the heat of the moment, without a chance to weigh things up.

In the case of an accident caused by an AI, things are different. Here, the event is based on a calculation. How this calculation is to be evaluated ethically depends on two questions: Are people valued differently in the calculation? And: According to which ethical principles should the evaluation be made?

On the first question: in the fictitious case, those present in court do not know how the AI's action came about. It is unknown whether the AI distinguished between the possible parties to the accident. Did it prioritise protecting the occupant over protecting the pedestrian? Does the AI perhaps even accept harm to humans in order to prevent economic damage to the owner?

No consensus on the evaluation of AI actions

On the question of ethical principles: according to Grunwald, it is in any case unethical if the AI puts the occupant or even the economic interests of the owner above the welfare of other people. If, on the other hand, it calculates how to cause as little harm as possible to as few people as possible, it acts ethically correctly by utilitarian standards. It cannot, however, satisfy Immanuel Kant's ethics: deliberately harming one person is always rejected there, even if doing so averts harm to others.
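Purely for illustration, such a utilitarian calculation can be sketched as minimising expected total harm across the available manoeuvres. All numbers below are invented; a real system would have to derive them from sensor data:

    # Hypothetical utilitarian decision rule for the dilemma described above.
    # Harm estimates are invented for illustration only.

    options = {
        # manoeuvre: (estimated severity of harm per person, people affected)
        "full_brake":   (0.9, 1),  # frontal collision - vehicle occupant at risk
        "swerve_left":  (0.8, 4),  # group of cyclists in the adjacent lane
        "swerve_right": (0.5, 1),  # single pedestrian at the roadside
    }

    def expected_harm(severity, people):
        # Utilitarian metric: total expected harm = severity x people affected.
        return severity * people

    # Choose the manoeuvre that minimises total expected harm.
    choice = min(options, key=lambda m: expected_harm(*options[m]))
    print(choice)  # -> "swerve_right" under these invented numbers

    # A Kantian rule, by contrast, cannot be expressed as such a minimisation:
    # it forbids deliberately harming anyone, even to avert greater harm to others.

The sketch also shows why the first question matters: whoever sets the severity values - and whether, say, the occupant is weighted differently from the pedestrian - determines the outcome of the calculation.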

These open questions were not the only reason the fictitious court faced a decision problem of its own. To hold the manufacturer liable under the Product Liability Act, a defect in the system must be proven - which could not be done here. On the other hand, it could not be ruled out either that the AI had miscalculated. So it is complicated. On the judge's advice, the plaintiff and the defendant agreed on a settlement.

It was a somewhat unsatisfying end to the event. But the result is close to everyday legal practice - and reflects the fact that there is still no societal consensus on how to evaluate AI actions.

The audience speaks out in favour of autonomous driving - and clear rules

Society was represented at the event by the livestream audience. The questions posed by the audience showed a lively interest in legal quandaries, but also in clear regulation of artificial intelligence. In the live poll on Mentimeter, 72 per cent of the audience were in favour of introducing autonomous vehicles, but mostly on the condition that the AI's evaluation standards are bound to clear ethical principles. It should also be ensured that injured parties receive compensation - regardless of whether anyone can be blamed or not.

The fictitious court case thus made clear that it is not only the technical prerequisites that are still lacking before cars can roll along our roads without human intervention. With careful debate about clear requirements for AI, however, the right framework conditions can be created in time - for now, automation level five remains a dream of the future.

On 23 February 2022, the federal cabinet passed a first ordinance on autonomous driving. Before the ordinance comes into force, however, the Bundesrat must still give its approval. Further information is available on the website of the Federal Ministry of Digital Affairs and Transport (in German only):

https://www.bmvi.de/SharedDocs/DE/Pressemitteilungen/2022/008-wissing-verordnung-zum-autonomen-fahren.html

Text: Nils Bühler
