Chapter 2 Case Study: Autonomous Vehicles and the Trolley Problem

The trolley problem is one of the most famous problems in contemporary moral philosophy. First put forward by Philippa Foot in 1967, the problem asks you to imagine yourself standing at a control switch with an out-of-control train hurtling down the track. If you do nothing, the train will continue on its current trajectory and kill five unaware workers on the tracks. However, if you flip the switch next to you, it will divert the train onto a different track on which there is only one worker. Is it permissible to kill one person to save five?

Although many people would say the latter option is indeed permissible, the trolley problem is meant to make us pause and consider our intuitions about agency, innocence, and culpability. Is killing someone the same as letting someone die? It is usually taken as a maxim that one ought not to kill innocents; does making an exception in this particular case jeopardize that general rule? Whichever course of action you decide to take, should you feel guilty about the bad parts of the outcome that could not be avoided?

There have been many variations of the trolley problem since Foot's original presentation. Some of the more common variants replace the switch with a person whom you can push into the train's path to derail it. Others make the person on the second track a person dear to you, such as a child. Others introduce a really cool loop-de-loop for the trolley to go on before crashing into the five victims. (On this latter view, letting five people die in an exciting way is better than killing one person in a boring way.) The original and all of these variations ask us to critically interrogate the idea that ethics is a simple math problem. There is a seductive simplicity to the idea that all we need to do is compare the harms (or utility) of the two possible options, and that doing so will clearly tell us what the "right" thing to do is.

For many years these questions were primarily of interest to academics, but in recent years there has been an increasing urgency to do more than ask questions about these kinds of scenarios. This urgency has been brought on by the development of autonomous (self-driving) cars. Autonomous cars are, as the name suggests, cars equipped with a variety of technological implements that replicate the functions of a driver and allow the car to drive itself. The car is able to get from place to place (the comparatively easy part of the technological problem) and, more importantly, to react to the environment around it. Enabling the cars to react to the circumstances presented to them has required overcoming a number of technological challenges (sensors, image recognition, and the like), but there is also a moral question to be asked: in situations like the trolley problem, what should the car be programmed to do? If there is insufficient time to stop and no safe alternative to redirect toward, should the car kill one pedestrian to save five others? Should it take into account other factors, such as whether the pedestrians were jaywalking or appropriately using the sidewalk? Should the ages of the pedestrians matter? What if the only way to save the five pedestrians is to steer the occupant(s) of the car into danger? Would you buy a car that does not protect you above all other individuals on the road?
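To see why the "ethics as a simple math problem" framing is seductive, consider a minimal, purely illustrative sketch in Python. Every name and number here is a hypothetical assumption for the sake of the example, not any manufacturer's actual software: it encodes a policy that does nothing but minimize expected fatalities.

```python
# A deliberately naive "utilitarian" collision policy, for illustration only.
# All names and figures are hypothetical; no real vehicle works this way.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_fatalities: int

def choose_action(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome with the fewest expected fatalities.

    This is the 'simple math problem' view of ethics: compare the harms
    of the available options and select the minimum. Note everything it
    ignores: who created the risk, occupants versus pedestrians, ages,
    and the uncertainty in the estimates themselves.
    """
    return min(outcomes, key=lambda o: o.expected_fatalities)

# The trolley problem, encoded this way, has a trivially "obvious" answer:
stay = Outcome("continue straight: five workers struck", expected_fatalities=5)
swerve = Outcome("divert to side track: one worker struck", expected_fatalities=1)

print(choose_action([stay, swerve]).description)
# -> divert to side track: one worker struck
```

Encoded this way, the dilemma resolves instantly, but only because everything the case study goes on to ask about (jaywalking, the ages of the pedestrians, loyalty to the car's occupants, who bears moral blame) has been silently excluded from the single number the function compares.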
These questions may seem far-fetched, but one pedestrian has already been killed by a self-driving car.[24] On 18 March 2018, Elaine Herzberg was struck and killed during the prototype testing of such a vehicle. Assigning moral blame in this case is difficult. We could blame the human safety-backup driver for failing to notice the pedestrian. We could blame the regulators who allowed an immature technology on public roads. We could blame those who programmed the computer. We could blame Herzberg herself, who was crossing the avenue outside of a designated crosswalk. We could also blame the city planners who, wittingly or unwittingly, created infrastructure that is hostile to pedestrians. Autonomous cars are not yet widely available on the market, but the future is coming faster than we think, and many trolley-style questions remain unanswered.

For Discussion

1. Consider the following claim: an autonomous vehicle that prioritizes the safety of the vehicle's occupants over other road users (like pedestrians or people on scooters) is an unjust way to externalize the cost of responsibility. Do you agree or disagree? Explain.

2. Would you buy (or even ride in) an autonomous vehicle if you knew that it did not prioritize your safety over the safety of other road users?

3. Should companies be allowed to test autonomous vehicles on public roads? If so, under what conditions? Explain.

4. What would you do in the trolley problem? Would you pull the lever? How would the different moral theories answer the problem?