As soon as the talk turns to driverless cars, questions about ethical dilemmas arise. One of them, known as the trolley problem, has been discussed for years by psychologists, philosophers, and behavioral scientists in different variations.
The Trolley Problem
The original trolley problem asks how we would react if a streetcar rolls uncontrollably down the tracks and threatens to hit five workers. From the observer's position the workers cannot be warned in time. But the observer can pull the lever of a track switch and redirect the streetcar onto another track, away from the workers. Alas, there is a catch: on the other track there is also one worker. The question is this: would you switch the track and kill one person in order to save five from certain death, or would you rather do nothing and let the five workers be killed?
Matthias Müller, former CEO of Porsche and now CEO of Volkswagen, was quoted some time ago in the German magazine "auto motor und sport": "I always ask myself how a programmer can decide with his work whether an autonomous car crashes into a truck on the right, or into a small car on the left."
A similar take appears in the subtitle of an article published in the German Spiegel in January 2016 under the catchy title "Lottery of Dying":
"One day it will happen, one way or another: a self-driving car races through the countryside, a computer in control. The passenger sits comfortably, reading the newspaper. Three children jump onto the street; to the left and right of the road are trees. In this moment the computer has to make a decision. Will it do the right thing? Three lives depend on it."
Exactly! Who will, who must die? Let's take a look at the dilemma and at the two things it says about the person asking the question: either that person doesn't know much about driverless cars and accident statistics, or that person does, but has a hidden agenda.
Counter Questions
That’s why we first ask some counter questions:
- Do you drive a car?
- If yes, how long have you been driving?
- Have you ever been in a situation where you had to decide whether to kill a person with your car? How often have you encountered such a situation as a driver? Do you know anyone who has ever faced such a dilemma, having to either kill children or crash into a tree?
- Whom do you trust more to make the right decision (if that is even possible)? A driver, who has to make an ethical decision within a fraction of a second, or a programmer, who had hours, days, weeks, or months to analyze the question and create proper decision algorithms?
- Did you know that it is actually not the programmer who makes the decision, but the car, because it has gathered a great deal of experience and knowledge about such situations through machine learning?
In fact, this dilemma is so rare that for almost all of us it is a purely hypothetical case. But it is often used by critics and by people fearful of such 'nascent' technologies. This hypothetical problem is known as the trolley problem; it also plays into utilitarianism and has been used by researchers for decades in different variations to highlight ethical conflicts and expose the irrationality (rather than the rationality) of our behavioral patterns.

It certainly is a question that is intellectually stimulating and, from an ethical perspective, interesting, but one that almost never occurs. What does happen are accidents with tramways and trains, where humans fail: they cross the tracks even when all the warning lights are on and the crossing is closed, conductors get distracted, and operators make mistakes.
Utilitarianism
Utilitarianism evaluates an action by estimating the value it delivers for society. Is it of greater use to society if the car decides to kill the grandmother or the baby? From this perspective one could argue that the grandmother has already had a full life, and the few years she may still live will add little benefit to society, while the baby, with the many years ahead of it, will have a much larger impact. But what if the baby is stricken with cancer and will die within a few years, only accruing a lot of medical costs, while the grandmother is just finishing her first novel and is on track to bring a big bestseller to market?
Variations of the trolley problem experiment with the availability of different objects or people that allow direct or indirect influence on the situation: the fat man whom one could push in front of the trolley to stop it, or the stick one could use to trip the fat man so that he falls in front of the trolley. As it turns out, people would not push the fat man directly, but they would use the stick. With the stick you are separated by one more degree from your actions and their outcomes: "Not me, but the stick pushed the fat man in front of the trolley." From the perspective of the volunteer in this experiment, this is a big difference. One time you would be directly responsible for the fat man's death, the other time only indirectly. One time you kill, the other time you let somebody die.
And that brings us closer to the moral dilemma. The trolley problem and its application to self-driving cars leave out the context. You are not allowed to come up with a third alternative or discover a new option. And even if we theoretically think our (moral) decision through, there is no guarantee that we will really follow the rational decision once such a situation occurs. But what if we realize that, through no fault or action of our own, our life is being sacrificed to protect somebody else's? Shall an autonomous car kill its passengers to save pedestrians?
Chris Urmson, former head of Google's self-driving car project at Google X, said that Google trains the cars to be very defensive. They first try to avoid and protect the most vulnerable participants in traffic, such as pedestrians and bicyclists, then other and larger moving objects, such as cars and trucks, and finally objects that are not moving.
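To make that priority ordering concrete, here is a minimal sketch of how such a defensive ranking could be expressed in code. The object categories, weights, and names are my own illustrative assumptions, not Google's actual planner.

```python
# Illustrative sketch only: the categories and weights below are assumptions,
# not Google's actual code. It shows one way a defensive priority ordering
# ("protect the most vulnerable first") could be expressed.
from dataclasses import dataclass

# Lower number = higher protection priority when evaluating a maneuver.
PROTECTION_PRIORITY = {
    "pedestrian": 0,
    "cyclist": 0,
    "car": 1,
    "truck": 1,
    "static_object": 2,
}

@dataclass
class DetectedObject:
    kind: str          # e.g. "pedestrian", "car", "static_object"
    distance_m: float  # distance from the ego vehicle

def rank_by_protection(objects):
    """Order detected objects so the most vulnerable (and closest) come first."""
    return sorted(objects, key=lambda o: (PROTECTION_PRIORITY[o.kind], o.distance_m))

scene = [
    DetectedObject("static_object", 4.0),
    DetectedObject("car", 8.0),
    DetectedObject("pedestrian", 12.0),
]
for obj in rank_by_protection(scene):
    print(obj.kind, obj.distance_m)
# -> pedestrian 12.0, car 8.0, static_object 4.0
```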
Machine Learning
Additionally, programming such a decision into a car is not as simple as it seems, mostly because the car has to experience as many driving situations as it can. And this is one of the misunderstandings about the development of autonomous cars that we fall prey to. It is not the engineers who make the decision and program the system accordingly; the system learns through machine learning. The engineers give the car a set of rules and constraints that help the system with each situation it encounters, but over millions of miles driven, the AI system builds up its own behaviors. Anyone who thinks that an engineer makes a clear decision upfront has not understood AI, machine learning, and autonomous cars.
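To illustrate the difference between the two approaches, here is a deliberately toy sketch (an assumption for illustration only, not how any real autonomous-driving stack works): the hand-coded rule fixes the behavior up front, while the "learned" version derives its behavior from logged driving examples.

```python
# Deliberately simplified toy example, not a real driving stack. The point:
# the engineer writes the procedure and the constraints, while the concrete
# behavior emerges from recorded examples.

# Hand-coded rule: the engineer decides the behavior directly.
def rule_based_brake(distance_m, speed_mps):
    return distance_m / max(speed_mps, 0.1) < 2.0  # brake if time-to-contact < 2 s

# "Learned" behavior: derived from logged situations (toy nearest-neighbor lookup).
TRAINING_LOG = [
    # (distance_m, speed_mps) -> did the human driver brake?
    ((5.0, 10.0), True),
    ((50.0, 10.0), False),
    ((15.0, 25.0), True),
    ((80.0, 20.0), False),
]

def learned_brake(distance_m, speed_mps):
    (_, braked) = min(
        TRAINING_LOG,
        key=lambda ex: (ex[0][0] - distance_m) ** 2 + (ex[0][1] - speed_mps) ** 2,
    )
    return braked

print(rule_based_brake(12.0, 20.0))  # True, fixed by the engineer's threshold
print(learned_brake(12.0, 20.0))     # True, inferred from the closest logged example
```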
How Often Do Accidents Happen?
If we look at accident statistics, the main cause of accidents is human error. Ninety-four percent of all traffic accidents are caused by human error, and they cost 500 billion dollars globally per year. Of the more than 3,000 traffic fatalities in Germany every year and 40,000 in the US, only 200 in Germany and 2,400 in the US, respectively, are caused by technical or other failures. That is still too many, but it means at least 2,800 fatalities in Germany and 37,600 in the US are caused by human error. And those fatalities don't even include the many non-fatal accidents that happen: police estimate that between 55 and 80 percent of all accidents are never reported.
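Spelled out with the rounded figures quoted above, the arithmetic looks like this:

```python
# The rounded figures quoted above, spelled out.
germany_total, germany_technical = 3_000, 200
us_total, us_technical = 40_000, 2_400

germany_human = germany_total - germany_technical   # 2,800 fatalities
us_human = us_total - us_technical                   # 37,600 fatalities

print(germany_human, round(germany_human / germany_total, 2))  # 2800 0.93
print(us_human, round(us_human / us_total, 2))                 # 37600 0.94
```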
India, for instance, has the worst record in traffic accidents globally. Four hundred people die every day in India in traffic accidents, which amounts to over 140,000 fatalities every year. And more than 11,000 die because of extremely badly designed speed breakers, which – oh, the irony – were actually introduced to reduce accident numbers.
In Europe the number of traffic victims is unevenly distributed: Russia alone accounts for two thirds of all fatalities. Accidents are also a result of corruption. Traffic cops and those responsible for traffic safety in some countries often take bribes and look the other way instead of enforcing traffic laws and safety standards. That led to a drastic measure in Mexico City: in 2007 the last male traffic cop was replaced by a female cop. As a result the number of accidents decreased, while the number of traffic tickets increased by 300 percent.
And those facts are real, not just hypothetical. Many people die on our streets in traffic accidents today, and in most cases because of human error.
The Ethical Problem of the Ethical Dilemma
And how many of those scenarios are really based on the trolley problem, where a driver has to decide between killing children and dying by crashing into a tree? Those cases are so rare that one has to ask why this question seems so popular. The technology expert and former advisor to Google's X self-driving car project, Brad Templeton, has written extensively about this topic: why this question gets so much attention, and what questions should be asked instead if someone intends an honest discussion about the possibilities of this technology.
This is probably an expression of our anxiety about machines making decisions about our lives and deaths. This is a decision we want to keep to ourselves. As much as I can understand that, this desire is already contradicted by today's reality. We have already given machines the right to make decisions for us. Driverless subways carry us along, and the autopilot in airplanes and air traffic control systems allow much tighter flight patterns than humans could manage. Even the seemingly innocuous airbag makes this decision for us: it decides whether and when to inflate, and thereby either injures or kills us, or keeps us safe. As the automotive expert Prof. Dudenhöffer wrote in his book Wer kriegt die Kurve:
"The question of whether a computer may decide about our lives has – if we are honest with ourselves – already been answered a million times with a Yes."
The real ethical problem lies somewhere else: by being so obsessed with a question about a case that so rarely happens, we prevent ourselves from seeing the opportunities that driverless cars promise. If self-driving technology realizes only a fraction of what it promises, we can save up to 3,200 people every year in Germany alone (37,600 in the US) and have 300,000 fewer injured people in Germany (over one million fewer in the US). By inflating the relevance of the trolley problem under the pretext of saving lives, while in reality saving maybe only ten lives in Germany (100 in the US), we (almost) prevent a technology from saving 3,200 lives.
And that is the real ethical problem. This discussion is led with a dishonest undertone, in which the big opportunities are countered by scenarios with a vanishingly small probability of happening, because the person raising the trolley problem feels intellectually gifted. Out of sheer vanity, such a person inflates the problem and thus becomes complicit in the death of thousands of people.
This article was also published in German.