When Siri was originally released on the iPhone 4S back in 2011, the reviews were somewhat lukewarm. On the positive side, having an intelligent assistant baked into a massively popular smartphone was a huge leap forward. On the other hand, Siri didn’t always perform as advertised.
Even more problematic was that Siri’s feature set was purposefully stunted by Apple. In fact, the version of Siri Apple shipped in 2011 wasn’t even as capable as the standalone Siri app its original developers had previously released on the App Store. What’s more, it eventually became apparent that rival AI systems from the likes of Google were far superior to what Apple was bringing to the table.
As time marched on, Siri slowly but surely improved. Not only did the scope of its functionality expand, but its speech recognition and processing capabilities also got markedly better. A fascinating in-depth piece from Steven Levy, writing for Medium’s Backchannel, details how Apple’s ongoing research into machine learning made these improvements possible, and how those same techniques can now be found across many parts of the company’s software.
With access to top-level Apple executives like Eddy Cue and Craig Federighi, not to mention some time spent with two Siri scientists, Levy provides us with an unprecedented glimpse into some of the fascinating work Apple is currently doing with deep learning.
If you’re an iPhone user, you’ve come across Apple’s AI, and not just in Siri’s improved acumen in figuring out what you ask of her. You see it when the phone identifies a caller who isn’t in your contact list (but did email you recently). Or when you swipe on your screen to get a shortlist of the apps that you are most likely to open next. Or when you get a reminder of an appointment that you never got around to putting into your calendar. Or when a map location pops up for the hotel you’ve reserved, before you type it in. Or when the phone points you to where you parked your car, even though you never asked it to. These are all techniques either made possible or greatly enhanced by Apple’s adoption of deep learning and neural nets.
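Levy’s piece doesn’t spell out how any of these predictions are computed, but to get a feel for the idea, here is a minimal, purely illustrative sketch of one of them: suggesting the apps you’re most likely to open next. It simply counts past launches by hour of day; the launch log, app names, and hour-based feature are all made up for the example, and whatever Apple actually ships is far more sophisticated and runs on-device.

```python
# Illustrative sketch only: Apple has not said how iOS ranks the apps you are
# most likely to open next. One simple baseline: count past launches per hour
# of day and suggest the most frequent apps for the current hour.
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical launch log: (hour_of_day, app_name)
LAUNCH_LOG = [
    (8, "Mail"), (8, "Calendar"), (8, "Mail"), (12, "Maps"),
    (12, "Messages"), (18, "Music"), (18, "Podcasts"), (18, "Music"),
]

def build_model(log):
    """Build per-hour launch counts from the log."""
    model = defaultdict(Counter)
    for hour, app in log:
        model[hour][app] += 1
    return model

def suggest_apps(model, hour=None, k=3):
    """Return up to k apps most frequently launched around this hour."""
    hour = datetime.now().hour if hour is None else hour
    return [app for app, _ in model[hour].most_common(k)]

if __name__ == "__main__":
    model = build_model(LAUNCH_LOG)
    print(suggest_apps(model, hour=8))  # ['Mail', 'Calendar']
```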
Indeed, Apple emphasized to Levy that machine learning now permeates nearly every aspect of the iPhone user experience, often in surprising ways. For instance, Apple leverages machine learning to improve face detection in the Camera app and even to decide whether connectivity is stronger over Wi-Fi or cellular.
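Apple didn’t describe how that Wi-Fi-versus-cellular call is actually made, but conceptually it boils down to scoring each link from a handful of quality signals and picking the better one. Below is a minimal, hedged sketch of that idea; the features, weights, and logistic scoring are invented for illustration and shouldn’t be read as Apple’s implementation.

```python
# Illustrative sketch only: Apple has not detailed how iOS weighs Wi-Fi
# against cellular. This toy example scores each link with hand-set logistic
# weights over a few link-quality features and picks the higher score.
import math

# Hypothetical learned weights for (signal_strength, recent_loss_rate, latency_ms)
WEIGHTS = (0.06, -8.0, -0.02)
BIAS = 1.5

def link_quality(signal_dbm, loss_rate, latency_ms):
    """Logistic score in (0, 1): higher means the link looks healthier."""
    # Shift dBm so roughly -70 dBm maps to neutral (-90 weak, -40 strong).
    z = (BIAS
         + WEIGHTS[0] * (signal_dbm + 70)
         + WEIGHTS[1] * loss_rate
         + WEIGHTS[2] * latency_ms)
    return 1.0 / (1.0 + math.exp(-z))

def pick_link(wifi_features, cell_features):
    """Compare the two scored links and return the preferred one."""
    wifi_score = link_quality(*wifi_features)
    cell_score = link_quality(*cell_features)
    return "wifi" if wifi_score >= cell_score else "cellular"

if __name__ == "__main__":
    # Weak, lossy Wi-Fi vs. a solid cellular connection -> "cellular"
    print(pick_link(wifi_features=(-85, 0.12, 180), cell_features=(-95, 0.01, 60)))
```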
One particularly interesting tidbit from the piece details how Apple leveraged its machine learning expertise when developing the Apple Pencil that debuted with the iPad Pro. When the device first came out, reviewers lauded the Pencil and specifically noted that inadvertent palm touches thankfully went unregistered. As it turns out, machine learning made this possible.
Using a machine learning model for “palm rejection” enabled the screen sensor to detect the difference between a swipe, a touch, and a pencil input with a very high degree of accuracy. “If this doesn’t work rock solid, this is not a good piece of paper for me to write on anymore — and Pencil is not a good product,” says Federighi. If you love your Pencil, thank machine learning.
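Federighi doesn’t say what the palm-rejection model actually looks like under the hood, but the core idea, classifying each contact from features like its size and pressure, can be sketched in a few lines. The classes, feature values, and nearest-centroid rule below are all invented for illustration; Apple’s real model is trained on vastly more data and works against raw sensor input.

```python
# Illustrative sketch only: Apple has not published how Pencil palm rejection
# works internally. This toy classifier separates "palm", "finger", and
# "pencil" contacts using two made-up features, contact area (mm^2) and
# touch pressure, with a simple nearest-centroid rule.

# Hypothetical training samples: (contact_area_mm2, pressure)
TRAINING_DATA = {
    "palm":   [(420.0, 0.25), (510.0, 0.30), (470.0, 0.22)],
    "finger": [(55.0, 0.45), (62.0, 0.50), (48.0, 0.40)],
    "pencil": [(3.5, 0.80), (2.8, 0.90), (4.1, 0.85)],
}

def centroid(points):
    """Average of a list of (area, pressure) feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING_DATA.items()}

def classify_touch(area_mm2, pressure):
    """Assign a new contact to the class with the closest centroid
    (area is scaled down so it doesn't dominate the distance)."""
    def distance(c):
        return ((area_mm2 - c[0]) / 100.0) ** 2 + (pressure - c[1]) ** 2
    return min(CENTROIDS, key=lambda label: distance(CENTROIDS[label]))

if __name__ == "__main__":
    print(classify_touch(480.0, 0.28))  # -> "palm": ignore this contact
    print(classify_touch(3.0, 0.88))    # -> "pencil": register as ink
```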
Apple famously likes to keep everything under wraps, which makes Levy’s write-up all the more enthralling and worth checking out in its entirety. For the full scoop on how Apple has used machine learning to improve Siri, and why Apple believes that controlling both the hardware and the software helps improve overall AI, make sure to hit the source link below. Apple doesn’t make its executives and top-level scientists available very often, but when it does, the results are always worth exploring.