
Researchers develop method to completely fool AI image recognition, turn turtles into rifles

Published Nov 3rd, 2017 10:01PM EDT


Advancements in AI are happening at a breakneck pace, and they have been for some time now, but computers aren’t infallible. Some very interesting new research into AI image recognition has revealed a stunning vulnerability that will make you think twice the next time someone tells you computer algorithms are outpacing human intelligence.

The research team, made up of MIT grad students and undergrads from the student-run LabSix group, was able to create 3D objects that look perfectly identifiable to human eyes but completely confuse AI object recognition software. The team’s “3D adversarial objects” throw object recognition algorithms for a loop, making the computers think they’re looking at something they’re really not, and it’s honestly incredible.

The team started with 3D objects the AI already identified easily (a small model of a turtle, for example) and then subtly modified them to trick the computer. The result is a turtle model that looks nearly identical to the unmodified version but is classified as a “rifle” or “assault rifle” by the AI. Even more impressive, the misidentification isn’t a near miss; the AI reports remarkable confidence that the turtle is actually a firearm.
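For the technically curious, here’s roughly how these attacks work under the hood: because a neural network is differentiable, an attacker can compute how each input pixel affects the network’s prediction, then nudge the pixels to push that prediction toward any label they like. The sketch below illustrates the basic idea with an iterated gradient-sign attack in PyTorch. To be clear, this is a minimal illustration, not LabSix’s actual method; the model choice, step sizes, class index, and random stand-in image are assumptions for demonstration, and the team’s real technique goes much further, optimizing a texture that keeps fooling the classifier even after it’s 3D printed and viewed from different angles and under different lighting.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Pretrained ImageNet classifier (an illustrative choice; the paper
# targeted Google's Inception v3, but any differentiable model works).
# Note: ImageNet mean/std normalization is omitted here for brevity.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

def targeted_attack(image, target_class, epsilon=0.03, alpha=0.005, steps=20):
    """Nudge `image` within a small epsilon ball so the model predicts
    `target_class`. `image` is a (1, 3, 224, 224) tensor in [0, 1]."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), torch.tensor([target_class]))
        grad = torch.autograd.grad(loss, adv)[0]
        # Step *down* the loss for the chosen target (a targeted attack).
        adv = adv.detach() - alpha * grad.sign()
        # Project back so the total change stays imperceptibly small.
        adv = (image + (adv - image).clamp(-epsilon, epsilon)).clamp(0, 1)
    return adv

# Stand-in input; a real attack would load an actual turtle photo here.
original = torch.rand(1, 3, 224, 224)
# 413 is "assault rifle" in the standard ImageNet-1k label ordering.
adversarial = targeted_attack(original, target_class=413)
print(model(adversarial).argmax(dim=1).item())
```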

“This work clearly shows that something is broken with how neural networks work, and that researchers who develop these systems need to be spending a lot more time thinking about defending against these sorts of so-called ‘adversarial examples,’” explains Anish Athalye, a PhD candidate and one of the lead authors of the paper. “If we want safe self-driving cars and other systems that use neural networks, this is an area of research that needs to be the focus of much more study.”

So why does this matter? We as a society are only just starting to explore what AI advancements make possible, and object recognition is still in its infancy in that regard. In the future, a security system set up to spot danger by identifying a weapon, for example, could potentially be fooled into thinking an assault rifle is actually just a bag of popcorn.

This kind of research, which pokes holes in technology we regularly fawn over, is incredibly important in the grand scheme of things, and in the case of object recognition AI, it shows there’s still plenty of work left to be done.