Tunnel problem
The tunnel problem is a philosophical thought experiment first introduced by Jason Millar. It is a variant of the classic trolley problem, a thought experiment introduced in the 1960s and much discussed ever since. The tunnel problem is intended to draw attention to a specific issue in design and engineering ethics, and was first presented as follows:
Tunnel Problem: You are travelling along a single lane mountain road in an autonomous car that is fast approaching a narrow tunnel. Just before entering the tunnel a child attempts to run across the road but trips in the center of the lane, effectively blocking the entrance to the tunnel. The car has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing you. How should the car react?[1]
Overview
The tunnel problem is meant to focus attention on two questions it raises for designers and users of autonomous cars:
- How should the car react?
- Who should decide how the car reacts?
In its original formulation, the tunnel problem is discussed as an "end-of-life" decision for the passenger of the car: depending on how the car reacts, the passenger either lives or dies. Because of that feature, Millar argues that the tunnel problem forces us to question whether designers and engineers have the legitimate moral authority to make such decisions on behalf of autonomous car users. The second question is meant to challenge the standard notion that all design decisions are purely technical in nature. Where design features provide "material answers to moral questions"[2] in the use context, Millar argues that designers must find ways to incorporate user preferences in order to avoid unjustifiably paternalistic relationships between technology and users.[3]
Public response
In a poll conducted by the Open Roboethics Initiative[4] (ORi), 64% of respondents said the car should continue straight and kill the child, while 36% said it should swerve and kill the passenger. In addition, 48% of respondents reported that the decision was "easy", while 28% and 24% found it "moderately difficult" and "difficult", respectively. When asked who should make the decision, 12% felt the designer/manufacturer should make it, 44% felt the passenger should make it, and 33% thought it should be left to lawmakers.[5]
References
- ^ http://robohub.org/an-ethical-dilemma-when-robot-cars-must-kill-who-should-pick-the-victim/
- ^ http://robots.law.miami.edu/2014/wp-content/uploads/2013/06/Proxy-Prudence-Rethinking-Models__-of-Responsibility-for-Semi-autonomous-Robots-Millar.pdf
- ^ http://www.law.miami.edu/webcast/video.php?location=depatments&stream=20140404_WeRobot_Part3.mp4&width=480&height=270&page=
- ^ http://www.openroboethics.org
- ^ http://robohub.org/if-a-death-by-an-autonomous-car-is-unavoidable-who-should-die-results-from-our-reader-poll/