What Morals Should Drive Driverless Cars?
Cars of the past required us to do everything manually, from shifting gears to locking doors and rolling down windows. Now we have cars that “can adapt their speed to the surrounding traffic automatically, maintain a safe distance from the vehicle ahead, keep within their own lane, and even park themselves” [1]. Tech companies like Google, Apple, and Uber are aiming for the ultimate autonomous driving experience—cars that can drive themselves—and driverless cars are already being tested on the roads. Even though we may be years away from their release to the public, concerns about driverless cars are already surfacing.
Driverless cars are poised to make life much easier and more convenient. Elderly people who have difficulty driving could regain freedom and independence. Busy parents would no longer need to drop their children off at school or shuttle them to after-school activities. People with long commutes could use that time to rest or work instead. Even more significantly, driverless cars could be much safer than human drivers: according to the National Highway Traffic Safety Administration, 94% of traffic accidents are attributed in part to human error [2]. Driverless cars are designed to follow all traffic laws, including obeying speed limits and stopping completely at stop signs. If human drivers are taken out of the equation, our roads could become much safer. Indeed, if driverless technology becomes reliable enough, we might even decide that human drivers should be outlawed and removed from the roads for the sake of overall safety.
Beyond the question of whether the aim of such technology should be to remove human drivers entirely (along with their dangerous potential for error), there is also the question of what counts as safety. What happens when a driverless car has no good options? Imagine that a van carrying a family of five suddenly brakes ahead of you. Your driverless car can brake and still risk hitting the van, swerve right into a school bus full of children, or swerve left into the median rail, in each case potentially endangering your life instead of, or in addition to, the lives of others. In such situations, human drivers react in unpredictable and generally uninformed ways. Driverless cars, by contrast, give us the capacity to decide in advance, and deliberately, how a vehicle should react to unexpected accidents or emergencies, but there is much disagreement about how best to use this new power.
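To make the disagreement concrete, here is a minimal sketch, in Python, of what "deciding in advance" could look like in software. Everything in it is hypothetical: the maneuvers, the made-up harm estimates, and the two candidate policies (a utilitarian one that minimizes total expected harm, and an occupant-protective one) are illustrative assumptions, not how any real vehicle is programmed.

```python
# Hypothetical sketch of an emergency-decision policy. Purely
# illustrative; the numbers and policies are invented for this example.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    harm_to_others: float    # estimated harm to people outside the car
    harm_to_occupant: float  # estimated harm to the car's own occupant

# The scenario from the text: brake toward the van, swerve into the
# school bus, or swerve into the median rail.
options = [
    Maneuver("brake toward van", harm_to_others=5.0, harm_to_occupant=0.5),
    Maneuver("swerve right into school bus", harm_to_others=20.0, harm_to_occupant=0.2),
    Maneuver("swerve left into median rail", harm_to_others=0.0, harm_to_occupant=1.0),
]

def utilitarian_policy(options):
    """Minimize total expected harm, counting everyone equally."""
    return min(options, key=lambda m: m.harm_to_others + m.harm_to_occupant)

def occupant_first_policy(options):
    """Protect the occupant first; break ties by harm to others."""
    return min(options, key=lambda m: (m.harm_to_occupant, m.harm_to_others))

print(utilitarian_policy(options).name)     # -> swerve left into median rail
print(occupant_first_policy(options).name)  # -> swerve right into school bus
```

The point is not the numbers, which are invented, but that some such policy must be chosen before any emergency occurs, and each function encodes a different answer to the questions below.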
DISCUSSION QUESTIONS
Given the choice between endangering five lives and endangering your own, should a car in which you are the sole occupant be programmed to endanger you?
What moral principles should guide us as we decide how to use the opportunity that driverless cars offer to be more intentional than ever before about reacting to unexpected dangers on the road?
Would people be morally permitted to drive at all if driverless cars were, on the whole, safer and more reliable?
References
[1] The Economist, “The long, winding road for driverless cars”
[2] National Highway Traffic Safety Administration, “Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey”