Google’s I/O 2024 event is underway in California, and artificial intelligence (AI) is, unsurprisingly, the only topic Google wants to discuss. That won’t shock anyone who has been following Google and the AI landscape closely.
We knew I/O 2024 would showcase Google’s latest AI innovations long before the event kicked off. That’s why OpenAI crashed Google’s party with a mind-blowing ChatGPT upgrade on Monday.
But I/O wouldn’t be complete without Google talking about Android, its most important operating system. And Google is ready to reinvent Android by putting AI front and center. Seriously, Android 15 will bring AI search to the home screen. You’ll also get improved Circle to Search functionality, and the Gemini assistant is gaining new powers.
Google announced Circle to Search in January, in time for the Galaxy S24 launch. Since then, the feature has rolled out to more Pixel phones and other Android devices. At I/O 2024, Google made it clear that Circle to Search will play a big part in the Android AI experience of the near future, announcing new features for it.
Circle to Search can now help with your homework. All you need to do is circle the problem you’re stuck on with your Android phone or tablet, and Circle to Search will provide immediate help.
Moving on to the Gemini app, which is now the de facto assistant on Android, Google announced that Gemini will gain context-aware powers in Android 15. That is, you’ll be able to bring up Gemini on top of whatever app you’re using so the AI can provide additional assistance.
Google used the Pixel 8a to demo the new Gemini features coming to Android this year, like generating an image based on a conversation or asking the AI questions about a video. The Gemini app will also handle large files, such as PDFs, and answer questions about their contents.
But perhaps the most interesting AI feature coming to Android soon is support for more on-device processing. That means better privacy for the user, as the data is handled on the Android phone rather than being sent to the cloud.
Specifically, Gemini Nano with Multimodality is coming to Android later this year. Put differently, handsets that can run this upgraded version of Nano will be able to handle text, video, sounds, and spoken language by processing these inputs locally. This Gemini Nano upgrade makes me believe that the Pixel 9 specs we saw earlier this week are real. The phones are supposedly getting more RAM this year, which might be needed to process AI features locally.
One Android feature that will use on-device Gemini Nano processing is having the AI listen in on calls to detect potential fraud. The AI will notify you in real time if it thinks the person on the other end of the line is trying to steal information from you. The call data will be processed on-device, so it’ll stay private.
Google wasn’t clear about whether these new features will require Android 15 to run, but it seems likely they’ll arrive on phones that get the Android 15 update first. After all, Google is putting AI at the core of Android, and the only way to do that in earnest is via a big software update.
Speaking of the new operating system, Google also announced that Android 15 Beta 2, complete with more new Android 15 features, is coming on Wednesday.