It is a classic problem and a test of where our morality lies: a runaway trolley is hurtling down the tracks towards five people, and you can pull a lever to divert it onto another track where it will kill only one. What would you do?
Extend this to cars: would we prefer an out-of-control vehicle to mow down a pensioner or a child?
Researchers at the Massachusetts Institute of Technology (MIT) Media Lab have used a spin-off of the classic trolley problem in an experiment called the Moral Machine, designed to test how we view these moral dilemmas in light of the emergence of self-driving cars.
The Moral Machine crowdsourced over 40 million moral decisions made by millions of individuals in 233 countries. These decisions were collected through the gamification of self-driving car accident scenarios, such as:
- Should a self-driving vehicle ‘choose’ to hit a human or a pet?
- More lives, or fewer?
- Women or men?
- The young or the old?
- Law-abiding citizens, or criminals?
In addition, the game asked whether the self-driving car should change course at all in the face of an imminent collision, or whether it should stay on its current path.
In reality, human drivers may ponder how they would act in such scenarios, but when the moment comes they only have time for split-second reactions. Self-driving cars, however, could in theory be programmed in advance with a kind of moral spectrum governing how these decisions are made.
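To make that idea concrete, the sketch below shows one very simplified way such a "moral spectrum" could be expressed in code, as a weighted harm score over the possible outcomes of an unavoidable crash. The weights, attributes, and function names are assumptions chosen purely for illustration; they are not taken from the MIT study or from any real vehicle's software.

```python
# Purely illustrative sketch: score each possible outcome of an unavoidable
# crash with hand-tuned weights and pick the least harmful one. All weights
# and names here are hypothetical assumptions, not the MIT study's method.
from dataclasses import dataclass


@dataclass
class Outcome:
    humans: int   # number of people who would be hit on this course
    animals: int  # number of animals who would be hit on this course


# Hypothetical weights: harming a human counts far more than harming an
# animal, and more victims always makes an outcome worse.
HUMAN_WEIGHT = 100.0
ANIMAL_WEIGHT = 1.0


def harm_score(outcome: Outcome) -> float:
    """Lower is better; the weights are the 'moral spectrum' knobs."""
    return outcome.humans * HUMAN_WEIGHT + outcome.animals * ANIMAL_WEIGHT


def choose_course(options: list[Outcome]) -> Outcome:
    """Pick the course of action with the least weighted harm."""
    return min(options, key=harm_score)


# Example: staying on course hits one pedestrian, swerving hits two dogs.
stay = Outcome(humans=1, animals=0)
swerve = Outcome(humans=0, animals=2)
print(choose_course([stay, swerve]))  # -> Outcome(humans=0, animals=2)
```

Even this toy example shows where the difficulty lies: someone has to choose the weights, and the Moral Machine results suggest that different populations would choose them differently.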
The Moral Machine did not restrict itself to simple one-against-one choices. Instead, the experiment emulated more realistic scenarios, such as a group of bystanders or a parent and child crossing the road.
In general, respondents across the world agreed that sparing humans over animals should take priority, that many people should be saved rather than few, and that the young should be spared over the elderly.
However, there were also regional clusters that shaped the overall picture. For example, individuals in ‘southern’ countries, including those on the African continent, showed a stronger preference for sparing the young and women, especially in comparison with respondents in ‘eastern’ countries, such as many Asian nations.
“The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to,” says Edmond Awad, a postdoc at the MIT Media Lab and lead author of the paper. “We don’t know yet how they should do that.”
The crowdsourced project raises some interesting questions about writing moral decisions into software, and if self-driving cars are to become a common feature of our roads, the issue will have to be tackled in some way.
Obstacles will appear and accidents will happen, and the moral preferences programmed into self-driving vehicles may well need to be debated in the public sphere.
However, morality is flexible and human decision-making will always differ from what a vehicle can achieve, so perhaps such decisions will end up being made on simpler grounds: whether a human or an animal is in the way, and how many individuals the vehicle could potentially hit.
The research has been published in the academic journal Nature.