
Self-driving cars will deliberately kill people, and that’s ok

Updated Jun 25th, 2016 7:10AM EDT
Image: electrek.co


Elon Musk keeps warning us that artificial intelligence is a real threat, but he’s also contributing to the evolution of machines. Sure, he might not be doing the actual coding, but the smart Tesla cars that will be able to drive themselves around town in the coming years will have to be programmed to handle a particular type of emergency: choosing between hitting a pedestrian and saving the lives of the passengers inside the vehicle.


Tesla and other carmakers are already working on self-driving cars, but autopilot features have yet to be finalized – or made completely safe. Even if driverless cars were readily available to buyers, the legal framework that would allow such machines to operate on public roads isn’t there yet. Until they’re regulated, self-driving vehicles won’t be street-legal.

Overall, self-driving cars are supposed to increase the safety of everyone on the road and minimize the risk of accidents. But even so, accidents may happen, and a car will have to be programmed to react in a worst-case scenario where it must choose whom to save: the people inside the car or a pedestrian. Sure, the car may be smart enough to find a way to save everyone – or at least not kill anyone, even if some people sustain injuries. But no matter how unlikely such a scenario sounds, carmakers and lawmakers will have to take it into account.

A new study reveals that prospective owners of self-driving cars, responding to online surveys, said driverless cars should make decisions for the greater good. But by presenting respondents with “a series of quizzes that present unpalatable options that amount to saving or sacrificing yourself,” as The New York Times puts it, the researchers found that people would still rather ride in a car that keeps them alive.

The study, published in the journal Science, details the findings of a group of computer scientists and psychologists who conducted six online surveys of United States residents between June and November of last year.

Teaching ethics to a car – and, therefore, to a powerful computer that may have AI features – might be one of the hardest things that programmers working for Tesla, Google, and Apple will have to do. By forcing machines to abide by a certain set of yet-to-be-determined rules, tech companies including Musk’s Tesla will teach them the types of situations in which it’s acceptable to jeopardize a human life.
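
To make the dilemma concrete, here’s a minimal sketch of what a strictly utilitarian “greater good” decision rule – the kind the surveys asked about – could look like. This is purely illustrative: the Outcome type, the choose_maneuver function, and the casualty estimates are all hypothetical stand-ins, not anything a carmaker has actually published.

# Hypothetical sketch of a utilitarian ("greater good") crash-decision rule.
# Illustrative only; this is not how Tesla, Google, or Apple program cars.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and the casualties it is expected to cause."""
    maneuver: str            # e.g. "brake_straight", "swerve_into_barrier"
    expected_deaths: float   # estimated total fatalities, inside and outside
    passenger_deaths: float  # the share of those fatalities inside the car

def choose_maneuver(outcomes: list[Outcome]) -> Outcome:
    # A strictly utilitarian rule: minimize total expected deaths,
    # regardless of whether the victims are passengers or pedestrians.
    return min(outcomes, key=lambda o: o.expected_deaths)

# Example: braking straight likely kills one pedestrian; swerving into a
# barrier likely kills the lone passenger. The utilitarian rule is
# indifferent here (1 vs. 1) and simply picks the first option listed;
# the survey respondents, notably, were not indifferent.
options = [
    Outcome("brake_straight", expected_deaths=1.0, passenger_deaths=0.0),
    Outcome("swerve_into_barrier", expected_deaths=1.0, passenger_deaths=1.0),
]
print(choose_maneuver(options).maneuver)

As the study suggests, the hard part isn’t writing such a rule; it’s agreeing on its objective. Reweight the comparison to penalize passenger deaths more heavily, and you get the self-protective car people said they’d actually want to ride in.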

The Times’ full article on the matter, complete with a video explaining this particular moral dilemma, is available at the source link below.

Chris Smith, Senior Writer

Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2008. When he’s not writing about the most recent tech news for BGR, he brings his entertainment expertise to Marvel’s Cinematic Universe and other blockbuster franchises.

Outside of work, you’ll catch him streaming almost every new movie and TV show release as soon as it's available.