Ethical Dilemmas of Driverless Cars
If you took Philosophy 101 as an undergrad, you may already be familiar with the trolley dilemma.
In this hypothetical situation, a runaway trolley is barreling down the track toward a fork in the rails. There are five people tied to the tracks on the trolley's current route. On a side track, there's one person tied to the rails. You could do nothing and let those five people die. Or you could divert the trolley and save more lives but effectively murder one person. What's the most ethical option?
This is usually day one in any introductory ethics or philosophy class. And the trolley problem has limitless variants. What if the one person was a family member or friend? What if the five people on the other side were children? Obviously, it's a drastic hypothetical. But we, as human beings, make split-second ethical choices all day, every day.
How long would you wait to hold the door for someone? How long would you wait to hold the door for an elderly person? Why does this particular panhandler get change from you while that one doesn't? You didn't donate a dollar to Johns Hopkins when you finished paying for your groceries. But you gave $25 to your cousin's Race for the Cure 5K.
We all operate on a baseline moral code. It's instilled in us as children by our parents and teachers or on a larger societal level through the books and media we consume. Even the youngest of children have a concept of what is the "right" or "wrong" thing to do.
But what about when we're in cars? My friend Dan is one of the nicest guys I know. But whenever he's behind the wheel of his truck, he's a monster. Morals are often learned from the society we grow up in. And Dan's from Boston, a city where drivers feel more comfortable using their horns than their turn signals.
Several weeks ago, I wrote about the development of traffic behavior for driverless cars. Far beyond traffic patterns, what will happen when driverless cars are forced to make the ethical decisions that we make on the road? What will happen when lives are at stake? How do we make a car that runs on artificial intelligence (AI) understand that it's better to crash into a light pole than into the lobby of a hotel?
Those are the exact questions that Dr. Edmond Awad and his team at the Massachusetts Institute of Technology (MIT) Media Lab are looking to answer.
As I mentioned, the morals of driving differ from country to country and even city to city within a country. So, who better to determine the moral system within a driverless car than the people on the roads? That's the question MIT Media Lab proceeded to ask on a global scale.
Its survey gathered almost 40 million decisions from individuals in 233 different countries and territories, with 100 or more responses coming from 130 countries. In total, there were 491,921 respondents to the survey.
The questions were based on ethical dilemmas that drivers could encounter. These included whether autonomous vehicles should try to spare law-abiding pedestrians over jaywalkers. Most respondents agreed that jaywalkers should be the AI's secondary consideration in life-threatening accidents.
The respondents were divided into three geographical clusters: Western, Eastern, and Southern. The study found some notable differences among these clusters, such as Southern respondents showing a stronger tendency than Eastern ones to spare young people over the elderly.
Dr. Awad published his findings with coauthor Iyad Rahwan in a paper titled "The Moral Machine Experiment," named after their survey program.
In their words: "On the one hand, we wanted to provide a simple way for the public to engage in an important societal discussion. On the other hand, we wanted to collect data to identify which factors people think are important for autonomous cars to use in resolving ethical tradeoffs."
For now, their findings are housed in their online survey database as they await a tap from programmers at Tesla, Google, GE, or Ford.
If tragedy should unfold when these autonomous vehicles hit the road, they'll have a world's worth of ethical input to help guide them.
Contributing Editor, Park Avenue Digest