The “Trolley Problem” Doesn’t Work for Self-Driving Cars

If you were the driver of a trolley barreling toward two human beings, one on each branch of a fork in the track, could you choose which life to spare?
This problem, one of the most famous thought experiments in all of philosophy, was proposed by British philosopher Philippa Foot in 1967 as a way to consider tough ethical choices in many fields. It was taken up early in the debate over how to design autonomous vehicles (AVs). But it may not be applicable to that question, argue Veljko Dubljević and his colleagues in the journal AI & Society.
Unlike human drivers, who make a split-second decision on how to react when they see an accident unfold or an obstacle emerge on the road, an AV must follow a preset moral formula to make its choice. Should it swerve to avoid a child crossing the road, even if it means hitting a larger group of adults on the sidewalk? And what if that choice harms the person inside the AV?
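To make that contrast concrete, here is a minimal, hypothetical Python sketch of what a preset policy amounts to (the Trajectory class, probabilities, and harm scores are all invented for illustration, not taken from the paper): every candidate maneuver carries a spread of possible outcomes, and the rule for weighing them must be fixed long before any accident unfolds.

```python
# Hypothetical sketch: an AV cannot deliberate at crash time; it applies a
# policy fixed in advance. Unlike the trolley dilemma's two certain options,
# each candidate maneuver here has *uncertain* outcomes.
from dataclasses import dataclass

@dataclass
class Trajectory:
    name: str
    outcomes: list[tuple[float, float]]  # (probability, harm score) pairs

def expected_harm(t: Trajectory) -> float:
    return sum(p * harm for p, harm in t.outcomes)

def choose(candidates: list[Trajectory]) -> Trajectory:
    # The "moral formula" is baked in before the vehicle ever ships:
    # here, naively minimize expected harm.
    return min(candidates, key=expected_harm)

options = [
    Trajectory("brake_straight", outcomes=[(0.7, 0.0), (0.3, 8.0)]),
    Trajectory("swerve_left", outcomes=[(0.5, 2.0), (0.5, 5.0)]),
]
print(choose(options).name)  # -> brake_straight (expected harm 2.4 vs 3.5)
```

Whether minimizing expected harm is the right formula, and what counts as harm in the first place, is exactly the ethical question that has to be settled in advance.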
“The trolley paradigm was useful to increase awareness of the importance of ethics for AV decision-making, but it is a misleading framework to address the problem,” says Dubljević, a professor of philosophy and science, technology and society at North Carolina State University. “The outcomes of each vehicle trajectory are far from being certain like the two options in the trolley dilemma [and] unlike the trolley dilemma, which describes an immediate choice, decision-making in AVs has to be programmed in advance.”

One place this shortcoming shows up, says Dubljević, is in the collection of human responses used as training data for AVs. In particular, Dubljević and colleagues write that the Moral Machine experiment, which has collected millions of responses about unavoidable traffic accidents, relies on binary scenarios that are often unrealistic and sacrificial: to save one person, others must be killed.
These choices also often reflect human biases that ethicists don’t necessarily want AVs to adopt.
“The goal is to create a decision-making system that avoids human biases and limitations due to reaction time, social background, and cognitive laziness, while at the same time aligning with human common sense and moral intuition,” says Dubljević. “For this purpose, it’s crucial to study human moral intuition by creating optimal conditions for people to judge.”
Dubljević and colleagues created a more realistic environment using a combination of virtual reality and mundane traffic scenarios without binary solutions. The researchers also introduced a system to judge the “character” of drivers based on three factors: the agent, the deed, and the consequences.
For example, say that a car accidentally runs a stop sign due to a mechanical failure and causes a non-lethal accident. Is the driver morally in the wrong if the traffic violation was out of their control? Would this judgment change if the car had been stolen but did stop at the stop sign?
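As a rough illustration of how such a three-factor judgment might be scored (an invented toy, not the model described in the paper), the agent’s intent, the deed, and the consequence can each be rated separately and then combined:

```python
# Illustrative only: a toy three-factor moral judgment. The factor scales,
# weights, and scores below are assumptions, not the researchers' model.

def judge(agent_intent: float, deed: float, consequence: float,
          weights: tuple[float, float, float] = (1.0, 1.0, 1.0)) -> float:
    """Each factor is on [-1, 1]: negative = blameworthy, positive = praiseworthy."""
    wa, wd, wc = weights
    return wa * agent_intent + wd * deed + wc * consequence

# Mechanical failure: blameless agent, bad deed (ran the sign), some harm.
print(judge(agent_intent=+1.0, deed=-1.0, consequence=-0.5))  # -0.5

# Stolen car that did stop: blameworthy agent, lawful deed, no harm.
print(judge(agent_intent=-1.0, deed=+1.0, consequence=0.0))   # 0.0
```

The point is that the same outcome can earn different verdicts depending on who acted and what they did, a distinction a purely outcome-based trolley framing cannot capture.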
Nicholas Evans, a professor of philosophy at UMass Lowell, has also studied the ethical decision-making of AVs in low-stakes scenarios. He does not think the trolley problem is obsolete, though he agrees that more work on non-binary moral decision-making is important. He is less convinced by character-based assessment, however, particularly in future scenarios where AVs might be making decisions about another AV’s driving.
“These are machines; it’s not Herbie the Love Bug,” Evans says. “Maybe one of the reasons we aren’t as interested in character in AV ethics is that cars don’t have characters, or dispositions, of the kind that humans and animals do. Certainly not yet; according to some, maybe never.”
Time will tell how AVs can interpret this character data. The new framework is still in its early stages, with human participants so far making judgments only as observers rather than as agents. However, Dubljević says his team hopes to redesign this type of experiment for first-person decision-making using virtual reality and a driving simulator.
“This may be described as the ‘moral obstacle course,’ which after trials with humans can be used to train artificial neural networks,” Dubljević says.
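As a sketch of what that last step could look like (an assumption-laden toy, not the team’s actual pipeline), human verdicts collected in simulation could serve as labels for a small network. The feature encoding, data, and model choice below are all invented:

```python
# Hypothetical: train a small classifier on human moral judgments. Features
# and labels are made up; real data would come from a VR "moral obstacle
# course" like the one described above.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row encodes a scenario: [agent_intent, deed_legality, harm_severity]
X = np.array([
    [+1.0, -1.0, 0.5],  # mechanical failure, ran the sign, minor harm
    [-1.0, +1.0, 0.0],  # stolen car, stopped legally, no harm
    [+1.0, +1.0, 0.0],  # lawful driving, no harm
    [-1.0, -1.0, 1.0],  # reckless driving, severe harm
])
y = np.array([1, 0, 1, 0])  # 1 = judged acceptable by participants, 0 = not

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[+1.0, -1.0, 0.1]]))  # query an unseen scenario
```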
