Who said a computer can’t dream? Well, not exactly dream, but it can imagine and create things from what it sees. That’s what Google’s neural network computers have to do to recognize all those places and objects in the images you upload to Google Photos.
The search giant explained, with the help of plenty of pictures, how its advanced computer thinks when analyzing images and recognizing objects. To train the network, the company shows it millions of examples, gradually adjusting its parameters until the computer produces the image classifications Google wants.
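The training process described above — show millions of examples, nudge the parameters until the classifications come out right — is, at its core, gradient descent. Here is a minimal toy sketch of that idea with a single-layer classifier on made-up data; everything in it (the data, the learning rate, the model) is illustrative and far simpler than Google's actual deep convolutional network:

```python
import numpy as np

# Toy "network": one linear layer classifying 2-D points.
# Illustrative only -- not Google's actual system.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # example inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # the classifications we want

w = np.zeros(2)  # the parameters we gradually adjust
b = 0.0
lr = 0.1

for _ in range(500):                       # show the examples repeatedly
    logits = X @ w + b
    preds = 1.0 / (1.0 + np.exp(-logits))  # sigmoid: current guesses
    grad = preds - y                       # gradient of cross-entropy loss
    w -= lr * (X.T @ grad) / len(X)        # small parameter adjustments...
    b -= lr * grad.mean()                  # ...toward the desired outputs

accuracy = ((preds > 0.5) == (y > 0.5)).mean()
```

After enough passes, the toy classifier labels nearly all the training points correctly, which is the same "adjust until the outputs match" loop at a vastly smaller scale.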
“[We] just start with an existing image and give it to our neural net,” Google wrote. “We ask the network: ‘Whatever you see there, I want more of it!’ This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.”
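The feedback loop Google describes can be illustrated with a toy sketch: ask a "neuron" how strongly it sees its pattern in the input, then modify the input so it sees more of it. The real system runs this loop through a deep convolutional network; here, purely for illustration, the neuron is a squared dot product with a fixed random pattern, and all names and values are invented:

```python
import numpy as np

# Toy feedback loop: "Whatever you see there, I want more of it!"
# All values here are illustrative, not Google's actual network.
rng = np.random.default_rng(1)
pattern = rng.normal(size=64)
pattern /= np.linalg.norm(pattern)  # the "bird" the neuron detects

image = rng.normal(scale=0.1, size=64)  # the "cloud": faint, noisy input

def activation(img):
    score = pattern @ img
    return score * score  # fires on the pattern, regardless of sign

before = activation(image)
for _ in range(100):                  # each pass amplifies the faint match
    score = pattern @ image
    grad = 2.0 * score * pattern      # d(activation)/d(image)
    image += 0.05 * grad              # nudge the image toward the pattern
after = activation(image)
```

Because each pass strengthens whatever weak resemblance was already there, the activation grows geometrically — the cloud that looked "a little bit like a bird" ends up looking much more like one.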
The results are mesmerizing: Google’s neural network can make real photos look like a glimpse into someone’s dream.
With each pass over a picture, the system gets better at recognizing the image’s contents, a capability that’s becoming increasingly important in photography apps such as the newly released Google Photos. Hopefully, though, it won’t see animals everywhere in your pictures, as disturbingly happens in the test below.