
Google is hinting that its AI-powered smart glasses are coming soon

Published Dec 11th, 2024 11:28AM EST
Image: Google


On Wednesday, Google gave its generative AI chatbot its biggest upgrade yet with Gemini 2.0. While that was the company’s biggest AI announcement of the day, it was far from the only one. Google also gave us a fresh look at Project Astra, the research prototype for its universal AI assistant, and hinted at a future pair of AI-powered smart glasses.

During the Gemini 2.0 briefing this week, Google DeepMind product manager Bibo Xu said that “a small group will be testing Project Astra on prototype glasses, which we believe is one of the most powerful and intuitive form factors to experience this kind of AI,” The Verge reports.

That group will be part of Google’s Trusted Tester program, which often gives members access to prototype devices that never launch publicly. In other words, there’s a chance the glasses you see in the video below will never be available to buy, but when asked about the glasses, Xu said that “for the glasses product itself, we’ll have more news coming shortly.”

Google has experimented with head-worn devices before, from the AR-focused Google Glass to the VR viewer Google Cardboard, and smart glasses seem like a natural home for Project Astra. Whether or not that leads to a retail product launching any time soon remains to be seen.

As for Gemini 2.0, here are some of the upgrades it brought to Project Astra:

  • Better dialogue: Project Astra now has the ability to converse in multiple languages and in mixed languages, with a better understanding of accents and uncommon words.
  • New tool use: With Gemini 2.0, Project Astra can use Google Search, Lens and Maps, making it more useful as an assistant in your everyday life.
  • Better memory: We’ve improved Project Astra’s ability to remember things while keeping you in control. It now has up to 10 minutes of in-session memory and can remember more conversations you had with it in the past, so it is better personalized to you.
  • Improved latency: With new streaming capabilities and native audio understanding, the agent can understand language at about the latency of human conversation.

“We’re working to bring these types of capabilities to Google products like the Gemini app, our AI assistant, and to other form factors like glasses,” Google says.

Jacob Siegal, Associate Editor

Jacob Siegal is Associate Editor at BGR, having joined the news team in 2013. He has over a decade of professional writing and editing experience, and helps to lead our technology and entertainment product launch and movie release coverage.