There is a famous philosophical question called the “Trolley Problem”. There are lots of variations, but they all call into question how you would handle life-or-death situations. If you already know the Trolley Problem, then you understand what I’m talking about. If not, get familiar with it by watching this video.
For most people, the Trolley Problem is nothing more than an ethical dilemma worth a few minutes’ thought. For a select few, however, it is beginning to play an important role in an emerging industry. And that’s why it matters so much right now.
Driving without drivers
It seems as if self-driving vehicles are the way of the future. And why shouldn’t they be? Google’s self-driving cars have already driven over 1.7 million miles without any serious problems. The technology has shown that it is nearly ready.
Autonomous automobiles can improve traffic flow, avoid accidents caused by driver error, and make split-second decisions that are far better informed than any a human could make. But what sort of decisions should they make?
Implementing the problem of the trolley
We already know how fast computers can process information. In real time, they can collect, analyze, and react to information that would take humans minutes (or longer) to work through. That means that in dangerous road situations (the common, real-life version of the Problem), our cars could make life-or-death decisions on their own. That would be a very good thing if we could reach a clear conclusion on how these situations should be handled.
But we can’t. And that’s why this is a super complex problem for designers of self-driving cars.
These product developers are responsible for programming the cars, and that programming will determine how the cars behave in real time. Unlike humans, who in a dangerous, split-second situation would probably lack the quickness of mind or decision-making power to consciously choose how to act, our vehicles will soon be able to make the cold, calculating decisions that the Trolley Problem makes so difficult.
But that’s not even the tricky part
Even though individuals would probably be unwilling to, say, kill one man to save five others, they may think, deep down, that it’s the right thing to do. Those people, if they were designing autonomous vehicles, would probably elect to program them that way.
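To make “program them that way” a little more concrete, here is a deliberately oversimplified sketch in Python. Everything in it is hypothetical: the Outcome class, the choose_action function, and the made-up numbers are mine, and real self-driving software does not work from a tidy menu of outcomes like this. The point is only to show how a purely utilitarian rule, “minimize expected casualties, even if the casualty is the occupant,” could in principle be written down as code.

```python
# A hypothetical, deliberately simplified sketch of a utilitarian decision rule.
# Real autonomous-vehicle software is nothing like this; the example only shows
# how "kill one to save many" could be encoded as an explicit policy.

from dataclasses import dataclass


@dataclass
class Outcome:
    action: str               # e.g. "stay in lane", "swerve into barrier"
    expected_casualties: int  # estimated number of people harmed
    harms_occupant: bool      # whether the car's own passenger is among them


def choose_action(outcomes: list) -> Outcome:
    """Pick the outcome with the fewest expected casualties,
    without caring whether the occupant is one of them."""
    return min(outcomes, key=lambda o: o.expected_casualties)


if __name__ == "__main__":
    options = [
        Outcome("stay in lane", expected_casualties=5, harms_occupant=False),
        Outcome("swerve into barrier", expected_casualties=1, harms_occupant=True),
    ]
    # Picks the swerve: one casualty instead of five, and that one is the occupant.
    print(choose_action(options))
```

Written out like that, the rule looks simple. The discomfort starts when you notice what the harms_occupant flag is quietly ignoring.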
However, that kill-one-to-save-many choice is easy when it’s hypothetical. It gets harder when real people are involved. It gets even harder when that one person is someone you know, and it becomes really, really uncomfortable when that single person is you.
That’s the dilemma that was recently explored by Matt Windsor of the University of Alabama at Birmingham: Will your self-driving car be programmed to kill you if it means saving more strangers?
It makes you wonder. Will it? And more importantly…should it?
[Photo by Calvin Dellinger (Charlotte, from Flickr) [CC BY 2.0], via Wikimedia Commons]