Apple unveiled the first iOS 15 features on Wednesday: new accessibility features that should make its products even easier to interact with than before. The announcement could easily have been a great segment in the upcoming WWDC 2021 keynote's iOS 15 and iPadOS 15 presentation, but Apple chose to mark Global Accessibility Awareness Day (May 20th) with it instead.
Among the various new accessibility features that Apple unveiled is a new AssistiveTouch functionality for Apple Watch. The wearable uses data from sensors like the gyroscope, accelerometer, and heart rate sensor, combined with on-device machine learning, to translate hand gestures into actions on the screen. The feature is meant to let people with limited mobility who can't touch the Watch display interact with the device. But this "mind-reading" feature might be a lot bigger than accessibility for Apple. It sounds a lot like the sci-fi tech Facebook recently demoed for interacting with smart devices like virtual reality (VR) headsets.
AssistiveTouch for Apple Watch targets a specific type of user: people with limited mobility and those with upper-body limb differences. Here's how the technology works, according to Apple:
Using built-in motion sensors like the gyroscope and accelerometer, along with the optical heart rate sensor and on-device machine learning, Apple Watch can detect subtle differences in muscle movement and tendon activity, which lets users navigate a cursor on the display through a series of hand gestures, like a pinch or a clench. AssistiveTouch on Apple Watch enables customers who have limb differences to more easily answer incoming calls, control an onscreen motion pointer, and access Notification Center, Control Center, and more.
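Apple hasn't published how its gesture classifier works, but the motion-sensing half of the pipeline can be sketched with public CoreMotion APIs. The Swift snippet below is a minimal illustration, not Apple's implementation: it windows accelerometer and gyroscope samples and hands them to a placeholder classifier standing in for the on-device machine learning model. The `GestureDetector` type, the 50 Hz sampling rate, the window size, and the crude rotation-energy threshold are all illustrative assumptions.

```swift
import CoreMotion

// Hypothetical gesture labels mirroring the ones Apple mentions (pinch, clench).
enum HandGesture {
    case pinch, clench, none
}

final class GestureDetector {
    private let motion = CMMotionManager()
    private var window: [CMDeviceMotion] = []
    private let windowSize = 50  // roughly one second of samples at 50 Hz (assumption)

    // Stream accelerometer + gyroscope data via CMDeviceMotion and report gestures.
    func start(onGesture: @escaping (HandGesture) -> Void) {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 50.0
        motion.startDeviceMotionUpdates(to: .main) { [weak self] sample, _ in
            guard let self = self, let sample = sample else { return }
            self.window.append(sample)
            if self.window.count >= self.windowSize {
                // A real implementation would feed this window into a trained
                // on-device model; classify(_:) is only a stand-in.
                onGesture(self.classify(self.window))
                self.window.removeAll()
            }
        }
    }

    func stop() {
        motion.stopDeviceMotionUpdates()
    }

    // Placeholder classifier: an average rotation-rate threshold standing in
    // for the muscle/tendon-activity model Apple describes.
    private func classify(_ samples: [CMDeviceMotion]) -> HandGesture {
        let rotationEnergy = samples.reduce(0.0) { sum, s in
            sum + abs(s.rotationRate.x) + abs(s.rotationRate.y) + abs(s.rotationRate.z)
        } / Double(samples.count)
        return rotationEnergy > 1.5 ? .clench : .none
    }
}
```

Note that the heart rate data Apple mentions isn't exposed as a raw real-time stream to third-party code the way motion data is, which is part of why a sketch like this can only approximate the motion side of what Apple describes.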
The video that Apple shared in the press release highlights all the hand gestures that the wearable can detect.
Two months ago, Facebook demoed a similar technology: wrist-worn devices that interpret the neural signals traveling from the brain to the hand, the same signals that trigger hand gestures. Those gestures would then be turned into actions on the virtual screen of a VR headset. Navigating menus, interacting with UI elements, or typing on a keyboard would all be possible as the wearable interprets the user's gestures and translates them into actions on the screen.
“You actually have more of your brain dedicated to controlling your wrist than any other part of your body, probably twice as many neurons controlling your wrist and the movement of your hands than is actually dedicated to your mouth for feeding and for speech,” Facebook Reality Labs' TR Reardon said during the mid-March briefing.
At the time, Facebook CTO Mike Schroepfer admitted that “it’s hard to predict” when the concept wearable might be deployed commercially. “How these things sequence out in the market when they show up — are things I don’t have crisp answers to. What we’re focused on is hardening these technologies,” he said.
Unlike Facebook, Apple already has a powerful wearable on its hands. The Apple Watch is the most popular wearable in the world, and Apple could use its “mind-reading” capabilities to bring the same sort of Minority Report user experience that Facebook is envisioning to future Apple products.
Facebook’s concept involves the user wearing two wrist devices, however, since the VR gadget would have to turn gestures from both hands into actions. Assuming Apple is already looking at turning the Apple Watch into a gadget for managing AR and VR experiences, it could always develop a secondary Watch-like accessory packed with sensors for the second wrist. That’s just speculation at this point, though.
What’s certain at this point is that AssistiveTouch on Apple Watch will launch later this year, probably well before Facebook launches any “mind-reading” wearable accessory for Oculus VR experiences. Facebook’s presentation from mid-March follows below, showing how its mind-reading tech should work.