
Is it already too late to stop the robot apocalypse?

Published Aug 5th, 2015 1:47PM EDT
BGR


Look: Worries about the implications of artificial intelligence aren’t just the crazy ramblings of survivalist loons. Some of the world’s smartest people — including Stephen Hawking, Elon Musk and Steve Wozniak — have warned to various degrees that we have to be extremely careful with how far we take A.I. lest we end up making The Terminator films into a reality. Over at The Week, James Poulos argues that it’s basically too late to stop the Robopocalypse unless we stop doing research into robotics altogether, which is something that he acknowledges just isn’t happening.


“With drones and other device-powered weapons, quasi-robots are already part of the battle space,” he writes. “Meanwhile, even ostensibly ‘civilian use’ robots can pose a dramatic threat, especially, of course, when hacked. Put two and two together, and the only surefire way to prevent robots from going to war is to shut down robotics.”

One thing Poulos doesn’t touch on is that the letter signed by Hawking, Musk and Wozniak urging a global ban on A.I.-enhanced weapons systems specifically warned about A.I.’s effects on human decision making. Namely, it said that because A.I.-enhanced weapons systems would reduce risks to military personnel, they would make governments more likely to go to war in the future.

For this and other reasons, Poulos’ argument that we have to scrap all robotics research to save ourselves from the Robopocalypse isn’t all that convincing. Take, for example, the dearly departed hitchBOT, which met its grisly demise this past weekend when it was decapitated by human vandals. The robot was easy prey for its assailants because it was completely harmless and entirely dependent on the kindness of strangers. In short, it posed a threat to absolutely no one because it wasn’t programmed to be one.

Like a lot of things, robots are just tools. When people argue that we need to be careful with robots and A.I., they’re often less worried about the robots themselves rebelling than about what we humans will design them to do.

Brad Reed, Staff Writer

Brad Reed has written about technology for over eight years at BGR.com and Network World. Prior to that, he wrote freelance stories for political publications such as AlterNet and The American Prospect. He has a master’s degree in Business and Economics Journalism from Boston University.