The main I/O 2019 keynote was filled with announcements and demos, even if much of it was pretty dull. Google unveiled Android Q's main features, the mid-range Pixel 3a phone, and the Nest hardware rebrand, and it explained the various innovations it's packing into Google Assistant. The company also talked at length about machine learning and artificial intelligence, giving practical examples of what it all means. On top of all that, Google made clear to everyone in the audience that privacy is now a core focus at Google HQ. One other theme that came through clearly during the conference was Google's renewed focus on making technology accessible to everyone, and the related announcements might be the most exciting things we saw at I/O 2019.
Google announced a handful of new accessibility features that should make smartphones and Google services even easier to use. Here they are:
Google Lens text-to-speech
While demoing Google Lens for Google Go, the lightweight search app for entry-level devices, Google revealed that the app will be able to read any text aloud and even translate it into the user's native language. The code needed for the feature reportedly weighs in at just 100KB, which means it can run even on very cheap smartphones. Google gave a practical example of what that might mean for people who struggle to read.
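Google hasn't published that 100KB implementation, but the basic pipeline is easy to picture: recognize the text in a camera frame, then hand the result to a text-to-speech engine. Here's a minimal Kotlin sketch using ML Kit's on-device text recognizer and Android's built-in TextToSpeech API. The `ReadAloudHelper` class name is ours, and this off-the-shelf approach is far heavier than the compact model Google described; it only illustrates the concept.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.speech.tts.TextToSpeech
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Sketch: recognize the text in a camera frame, then read it aloud.
class ReadAloudHelper(context: Context) {

    private val recognizer =
        TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    private val tts = TextToSpeech(context) { /* engine ready when it == SUCCESS */ }

    fun readTextFrom(frame: Bitmap) {
        val image = InputImage.fromBitmap(frame, /* rotationDegrees = */ 0)
        recognizer.process(image)
            .addOnSuccessListener { result ->
                // Speak whatever text was found in the frame.
                tts.speak(result.text, TextToSpeech.QUEUE_FLUSH, null, "read-aloud")
            }
            .addOnFailureListener { /* log or surface the recognition error */ }
    }
}
```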
Live Transcribe
Live Transcribe is easily one of the most worthwhile features Google, or anyone else, could spend money developing. The feature will help people who are deaf "hear" what someone else is saying in real time: the app transcribes everything it hears, so anyone hard of hearing can follow the conversation and reply. Live Transcribe is rolling out soon in beta, first on Pixel 3 phones and then on other devices.
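To get a feel for how live transcription works at the platform level, here's a rough Kotlin sketch built on Android's standard SpeechRecognizer API, with partial results enabled so text appears while the person is still talking. The `startLiveTranscription` helper name is our own, the app needs the RECORD_AUDIO permission, and Live Transcribe itself is understood to rely on Google's own speech engine rather than necessarily this API.

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Sketch: stream transcription results to the caller as they arrive.
fun startLiveTranscription(context: Context, onText: (String) -> Unit) {
    val recognizer = SpeechRecognizer.createSpeechRecognizer(context)
    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onPartialResults(partialResults: Bundle) {
            // Partial hypotheses arrive continuously, which is what makes
            // the transcript feel "live" rather than turn-based.
            partialResults.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                ?.firstOrNull()?.let(onText)
        }
        override fun onResults(results: Bundle) {
            results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                ?.firstOrNull()?.let(onText)
        }
        // The remaining callbacks aren't needed for this sketch.
        override fun onReadyForSpeech(params: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray?) {}
        override fun onEndOfSpeech() {}
        override fun onError(error: Int) {}
        override fun onEvent(eventType: Int, params: Bundle?) {}
    })

    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                 RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true)
    }
    recognizer.startListening(intent)
}
```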
Sound Amplifier
Sound Amplifier is also meant to help people with hearing disabilities. It lets you adjust the phone's sound settings to make audio easier to hear, and it only works on devices running Android 9 Pie or higher.
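That Pie cutoff is likely no accident: Android 9 introduced the DynamicsProcessing audio effect, a configurable chain of equalizers, compressors, and a limiter that apps can attach to an audio session. Whether Sound Amplifier is built on exactly this effect is our assumption, but here's a tiny Kotlin sketch of the idea; the `amplify` function name and the +6 dB gain value are illustrative only.

```kotlin
import android.media.MediaPlayer
import android.media.audiofx.DynamicsProcessing
import android.os.Build

// Sketch: boost a player's output using the DynamicsProcessing effect.
fun amplify(player: MediaPlayer) {
    // DynamicsProcessing only exists on Android 9 Pie (API 28) and up,
    // which lines up with Sound Amplifier's minimum OS requirement.
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
        val dp = DynamicsProcessing(player.audioSessionId)
        dp.setInputGainAllChannelsTo(6f) // +6 dB; an illustrative value
        dp.setEnabled(true)
    }
}
```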
Live Caption and Live Relay
Live Caption is like Live Transcribe, but for videos (any video) and phone calls. With Live Caption enabled, anyone will be able to follow what's happening in a video, whether it's streaming from the web or stored on the phone, without turning on the sound, because captions appear in real time on top of whatever is playing. The feature is meant first and foremost to help people who are hard of hearing. Similarly, Live Relay will work during phone calls, turning the whole conversation into a chat-like experience. Live Caption doesn't need an internet connection to work, but it does require a device running Android Q.
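The Android Q requirement makes sense in light of a new Q-era API: audio playback capture, which lets software tap into sound that another app is already playing. Live Caption itself runs at the system level, so we're only guessing at the mechanics, but here's a hedged Kotlin sketch of how an ordinary app could capture playing audio to feed a captioning model. It assumes a MediaProjection token has already been obtained with the user's consent, and `buildPlaybackCapture` is our own helper name.

```kotlin
import android.media.AudioAttributes
import android.media.AudioFormat
import android.media.AudioPlaybackCaptureConfiguration
import android.media.AudioRecord
import android.media.projection.MediaProjection

// Sketch: capture audio that other apps are playing so it can be
// fed into a speech recognizer that renders on-screen captions.
fun buildPlaybackCapture(projection: MediaProjection): AudioRecord {
    val captureConfig = AudioPlaybackCaptureConfiguration.Builder(projection)
        .addMatchingUsage(AudioAttributes.USAGE_MEDIA) // media audio only
        .build()

    return AudioRecord.Builder()
        .setAudioFormat(
            AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(16_000) // a typical rate for speech models
                .setChannelMask(AudioFormat.CHANNEL_IN_MONO)
                .build()
        )
        .setAudioPlaybackCaptureConfig(captureConfig)
        .build()
}
```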
Project Euphonia
Maybe the boldest accessibility announcement from Google I/O, Project Euphonia aims to help people with ALS, stroke survivors, and others with speech impairments communicate. Google is using machine learning to turn hard-to-understand speech and facial expressions into text so that these users can hold conversations.