
How do computers know what they’re looking at?

Published Apr 18th, 2016 7:00PM EDT


Computers have become so advanced that they can recognize the contents of the photos you take with your iPhone or Android smartphone, which means that in the future you'll be able to find a specific picture simply by describing it to your device. It's all possible thanks to deep learning techniques that train computers to recognize all sorts of shapes and objects in ordinary images.


Companies including Google and Microsoft have built advanced AI systems that are taught to distinguish between the various items shown in a picture – this link shows how Google's computers do it, for instance. MIT Technology Review recently took a closer look at what deep learning and artificial neural networks actually mean, explaining that this kind of training is why Facebook's AI recognizes your friends in the pictures you upload to the social network.

Artificial neural networks are what make deep learning possible. They're arranged into a hierarchy of layers that interpret data in sequence as it passes through the network. Each layer specializes in identifying certain features in a photo, and that's how the computer ultimately knows what it's looking at. These artificial neural networks are fed millions of pictures during training before they can actually recognize anything in the pictures users upload online.
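To make that training idea concrete, here is a minimal sketch in Python using PyTorch. The tiny model, the random stand-in "photos," and the 10 object classes are all illustrative assumptions, not the actual systems Google or Facebook use; the point is only to show the loop a network repeats over millions of labeled images: guess, measure the error, and nudge the weights.

```python
# Minimal sketch of one training step for an image-recognition network.
# The model and data are placeholders, not any company's real system.
import torch
import torch.nn as nn

# Hypothetical stand-in for a batch of labeled training photos:
# 32 RGB images of 64x64 pixels, each tagged with one of 10 object classes.
images = torch.randn(32, 3, 64, 64)
labels = torch.randint(0, 10, (32,))

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),              # one score per object class
)

loss_fn = nn.CrossEntropyLoss()     # penalizes wrong guesses
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step; real systems repeat this over millions of photos.
optimizer.zero_grad()
scores = model(images)              # forward pass: the network's guesses
loss = loss_fn(scores, labels)      # how wrong were those guesses?
loss.backward()                     # work out how to adjust each weight
optimizer.step()                    # nudge the weights to do better next time
print(f"loss after one step: {loss.item():.3f}")
```

Repeating that step across a huge labeled photo collection is what lets the finished network label new pictures it has never seen.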

The first layers detect very simple patterns, such as colors and shading in an image. The next layer uses what it learned from the previous one to look at bigger pieces of the image, finding additional patterns such as corners, stripes, and meshes. A third layer could detect parts of objects and differentiate between items that might look similar to a computer. The higher layers get more sophisticated still, telling apart whole objects, animals, and people.
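The sketch below, again in PyTorch, mirrors that layer-by-layer progression. The layer counts, sizes, and the 10 output categories are illustrative assumptions rather than any real product's architecture; the comments simply map each stage to the kind of feature it tends to pick up.

```python
# Rough sketch of the layer hierarchy described above (illustrative only).
import torch
import torch.nn as nn

layers = nn.Sequential(
    # Early layers: very simple patterns such as colors, edges, and shading.
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    # Middle layers: bigger pieces built from those patterns -- corners, stripes, meshes.
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    # Deeper layers: parts of objects that might otherwise look alike to a computer.
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    # Final layers: whole objects, animals, or people.
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),              # one score per object category
)

photo = torch.randn(1, 3, 224, 224)  # a placeholder "photo"
scores = layers(photo)
print(scores.shape)                  # torch.Size([1, 10])
```

Each stage feeds its output to the next, which is why the later layers can recognize things the earlier ones could never see on their own.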

As this type of imaging software evolves, it may find more advanced applications in fields such as robotics, autonomous cars, and medicine.

Chris Smith, Senior Writer

Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2008. When he’s not writing about the most recent tech news for BGR, he brings his entertainment expertise to Marvel’s Cinematic Universe and other blockbuster franchises.

Outside of work, you’ll catch him streaming almost every new movie and TV show release as soon as it's available.