Photo Credit: How should a self-driving car react in these situations? Bonnefon et al.
An interesting study concerning the ethics and morality of self-driving cars has been published on arXiv by a team led by Jean-François Bonnefon of the Toulouse School of Economics:
The wide adoption of self-driving, Autonomous Vehicles (AVs) promises to dramatically reduce the number of traffic accidents. Some accidents, though, will be inevitable, because some situations will require AVs to choose the lesser of two evils. For example, running over a pedestrian on the road or a passer-by on the side; or choosing whether to run over a group of pedestrians or to sacrifice the passenger by driving into a wall. It is a formidable challenge to define the algorithms that will guide AVs confronted with such moral dilemmas. In particular, these moral algorithms will need to accomplish three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers. We argue to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm. To illustrate our claim, we report three surveys showing that laypersons are relatively comfortable with utilitarian AVs, programmed to minimize the death toll in case of unavoidable harm. We give special attention to whether an AV should save lives by sacrificing its owner, and provide insights into (i) the perceived morality of this self-sacrifice, (ii) the willingness to see this self-sacrifice being legally enforced, (iii) the expectations that AVs will be programmed to self-sacrifice, and (iv) the willingness to buy self-sacrificing AVs.
Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?
As engineers develop the algorithms that control self-driving cars, they are confronting moral, ethical, and economic questions that have never occurred to most of us. Many scenarios bring the issue into play. For example: you are riding in a self-driving car that turns a corner to find a crowd of people blocking the road, too close for the car to come to a complete stop before hitting them. There are solid walls on both sides of the road. Should the car swerve into a wall, seriously injuring or killing you? Or should it brake as hard as it can, knowing full well it will plow into the group while keeping you safe? What if children are present? Does that change the calculation?
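To make the "lesser of two evils" calculation concrete, here is a minimal sketch of a utilitarian decision rule, assuming the car can predict the casualties of each available maneuver. The Outcome class, the casualty numbers, and the equal weighting of passengers and pedestrians are all invented for illustration; they are not taken from the paper.

```python
# Hypothetical sketch of a utilitarian decision rule for the dilemma above.
# Nothing here comes from Bonnefon et al.; the types and numbers are invented.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and its predicted casualties."""
    maneuver: str
    pedestrian_deaths: int
    passenger_deaths: int

def utilitarian_choice(outcomes: list[Outcome]) -> Outcome:
    """Pick the maneuver that minimizes the total predicted death toll,
    weighing passengers and pedestrians equally."""
    return min(outcomes, key=lambda o: o.pedestrian_deaths + o.passenger_deaths)

if __name__ == "__main__":
    options = [
        Outcome("brake in lane", pedestrian_deaths=10, passenger_deaths=0),
        Outcome("swerve into wall", pedestrian_deaths=0, passenger_deaths=1),
    ]
    print(utilitarian_choice(options).maneuver)  # -> "swerve into wall"
```

The hard part, of course, is not the arithmetic but deciding whether a simple death-toll minimization is the rule society actually wants.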
"It is a formidable challenge to define the algorithms that will guide AVs [Autonomous Vehicles] confronted with such moral dilemmas," the researchers wrote. "We argue to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm."
In their paper, the researchers describe surveys of several hundred people recruited through Amazon's Mechanical Turk, an online crowdsourcing platform. Participants were presented with a number of scenarios, including the one described above, with variations in the number of people in the car, the number of people in the group, whether children were among them, and so on.
The results are perhaps not too surprising: on the whole, people were willing to sacrifice the car's passenger to save a larger group, but they were markedly less comfortable with that trade when they imagined themselves as the passenger. And while 75% of respondents thought it would be moral to swerve, only 65% believed that cars would actually be programmed to swerve.
The legal issues surrounding this remain something of a grey area, too. Will new laws require the car to make an emotionless "greater good" response? Or will owners be allowed to choose among different levels of morality?
"If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?" ask the researchers.
MIT Technology Review notes, however, that self-driving cars themselves are still inherently safer than human drivers. "If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents," the MIT article says. "The result is a Catch-22 situation."
Technology often creates moral and ethical dilemmas. In this case, these issues must be sorted out before self-driving cars can become an everyday reality, and they have the potential to become the single most significant cause of delay in bringing these cars to market.
Source: Should A Self-Driving Car Kill Its Passengers In A “Greater Good” Scenario?