Google has had a great month of Gemini AI announcements, beefing up its chatbot across the board. The new experimental Gemini 2.0 Flash model powers improved Deep Research, Personalization, and impressive photo editing. Gemini also got Canvas, for better collaboration with the AI, and Audio Overview, a feature that turns document summaries into podcasts.
Google also confirmed at MWC 2025 that Gemini Live would get two impressive new video features in March, and they're now rolling out to users. Gemini Live can watch the live video from your camera in real time and chat with you about it. It can also see the contents of your screen if you want to talk to the AI about something on your phone.
All of this happened while Apple has had a terrible month for Apple Intelligence. The company was forced to delay the smarter Siri until next year, making us realize that the Siri AI vision demoed at WWDC 2024 was just vaporware. And while the Gemini Live assistant can talk to you about live video, Siri can't even tell what month it is.
Gemini Live is the AI assistant Google built under Project Astra, a research project demoed at I/O 2024 that showed what an AI assistant with multimodal support could do. That multimodality included access to live video from the phone's camera, and the functionality is rolling out to Gemini Live users who are also Gemini Advanced subscribers. That's the premium Gemini tier that gets you access to the latest Gemini features.
A Reddit user discovered a new option to share the phone’s screen with Gemini Live. Tap it, and you’ll give the AI assistant access to the contents of your display. You’ll then be able to ask the AI questions about what’s on your screen.
The Redditor posted a clip demoing the Gemini Live capability that rolled out to their Xiaomi phone, an indication the feature will not be restricted to Pixel phones at launch. Here's the short video:
Sharing the screen while talking to Gemini Live is even better than using Circle to Search to start a Google Search about the contents of your screen. You might get answers even faster this way, as Gemini Live looks at what's on your display and offers assistance where it can. As the clip above shows, however, Gemini Live can't perform other tasks, like opening apps for the user.
More interesting to me is Gemini Live's ability to see the world through the camera lens. That real-time video support should also be rolling out to Gemini Live users with Advanced subscriptions. It's unclear whether the Redditor above got the functionality, as they didn't share a similar demo. I'd expect users who can screen-share with Gemini Live to also be able to use live video with the AI.
Google has published Gemini Live demos in which a user interacts with the AI while sharing their surroundings via live video. In this example, the user asks the AI for paint suggestions for their home:
If you have a Gemini Advanced subscription, you'll want to check whether Gemini Live got the new live video features. You'll likely get them soon now that users have started spotting them in the wild.