Google Lens is a terrific way to search the web, especially in the AI era. You can use your phone’s camera to capture content around you and then search the web for related information. Google Lens also supports multisearch, so you can use text and images to search the web for more details about something you just saw in the real world or online.
However, a photo might not always be enough, so Google decided to take things to the next level. At I/O 2024, Google demoed a search-with-video feature that lets you upload videos to Google Lens so you can search the web based on their content. That’s something not even ChatGPT can do right now.
That said, you shouldn’t confuse the new Google Lens feature with Google’s Project Astra. The latter was also demoed at I/O 2024, and it represents a big upgrade for Gemini. Project Astra will let Gemini “see” through your phone’s camera so it can respond in real time based on your surroundings.
According to Android Authority, the new Google Lens video search feature is finally rolling out to users. You’ll soon be able to record videos of your surroundings and ask Google for information based on what they show.
As you’ll see in Mishaal Rahman’s demo on X, the feature is straightforward. You open the Google Lens app on your phone and then tap and hold the shutter button to record a brief video of the object of your curiosity.
In the clip, Rahman also uses voice to ask Google for information about a smartwatch he’s holding. This demonstrates the multimodal abilities of Google Lens, which build on its earlier multisearch functionality.
In the past, Google made it possible to use voice and text to perform Google searches with Google Lens. Pairing video with voice is the natural evolution of that, especially in a world where we have AI chatbots to talk to.
As Rahman notes, if AI Overviews are available in Google Search in your region, you’ll get AI responses to your Google Lens video searches. Otherwise, you’ll still get relevant results for your query.
The results might not be perfect, but they can still be helpful. In the example above, Google doesn’t identify the smartwatch exactly, but it nails the manufacturer and operating system: it thinks the wearable in the video is the OnePlus Watch 2R, while Rahman is actually using a OnePlus Watch 2. Still, Google Lens isn’t too far off, and results should improve in the future.
The new Google Lens feature should complement your device’s search abilities. You can always use the Circle to Search feature on Android phones to find more details about whatever appears on your screen.
The feature will probably roll out to Android users first, though I wouldn’t be surprised to see Google bring it to iPhone users soon.
Apple has also developed a feature similar to Google Lens for the iPhone 16. It’s called Visual Intelligence, and it’s meant to give the AI eyes. You’ll have to tap the Camera Control button on the iPhone 16 so the AI can see what you see and answer prompts related to it. Visual Intelligence might be more of a competitor to Project Astra than Google Lens.