
This stagnant technology is what’s holding Siri back

Published Aug 28th, 2016 2:24PM EDT
Image: Apple


Over the past few years, Siri’s functionality and reliability have improved by leaps and bounds. Not only is Siri’s feature set more expansive than ever, but the assistant operates much more quickly and does a far better job of understanding and processing language. To that point, Medium last week published a fascinating and detailed profile on how Apple’s advancements in AI and machine learning have helped take Siri’s performance to the next level.

In fact, Siri Senior Director Alex Acero boasted that advancements in Siri’s underlying technology have cut the error rate “by a factor of two in all the languages” and “more than a factor of two in many cases.”


“That’s mostly due to deep learning and the way we have optimized it,” Acero added, “not just the algorithm itself but in the context of the whole end-to-end product.”

That’s all well and good, but heavy Siri users will agree that the software still isn’t anywhere close to perfect, especially in less than ideal conditions. Using Siri in a quiet room is one thing, but trying to use it to settle a bet in a crowded bar or on a busy street can often be an exercise in frustration.

Interestingly enough, one of the reasons Siri and other virtual assistants still struggle to understand queries in less than ideal acoustic environments is that microphone technology has remained somewhat stagnant over the years.

Highlighting this issue, Bloomberg reports:

The mics in most consumer technology haven’t kept pace with the advances in, say, cameras. They still aren’t great at focusing on faraway voices or filtering out background noise, and they often require too much power to be listening at all times.

Apple and its rivals have challenging, albeit straightforward, demands. They want a higher signal-to-noise ratio, meaning the mic can isolate voices more clearly and from farther away, and a higher acoustic overload point, the threshold at which the mic can no longer distinguish signal from noise. And the chips have to improve in both areas without getting bigger, becoming less reliable, or using more power than before.
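For readers unfamiliar with the metric Bloomberg cites, signal-to-noise ratio is conventionally expressed in decibels as ten times the base-10 logarithm of the signal-to-noise power ratio. A minimal sketch of the arithmetic (the power values below are illustrative, not measurements from the report):

```python
import math

def snr_db(signal_power, noise_power):
    """Return signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

# A voice captured up close dominates the noise floor...
close_mic = snr_db(signal_power=1.0, noise_power=0.01)

# ...while the same voice across a crowded bar barely rises above it.
far_noisy = snr_db(signal_power=0.05, noise_power=0.02)

print(round(close_mic, 1))  # 20.0 dB
print(round(far_noisy, 1))  # 4.0 dB
```

The roughly 16 dB gap in this toy comparison illustrates why a mic that works fine at arm’s length in a quiet room can fail badly at range amid background chatter.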

One solution, of course, is simply to grace your everyday smartphone with more mics. While that certainly helps, it also costs more money and can take a toll on battery life. That being the case, component manufacturers are hard at work on new strategies to improve Siri’s ability to understand language in noisy environments. Some companies are keen on using software algorithms to improve things, while others are exploring completely novel microphone designs.

Make sure to hit the source link below for the full rundown on what some companies are doing in order to make software assistants like Siri and Alexa all the more reliable.

Yoni Heisler Contributing Writer

Yoni Heisler has been writing about Apple and the tech industry at large for over 15 years. A lifelong Mac user and Apple expert, his writing has appeared in Edible Apple, Network World, MacLife, Macworld UK, and TUAW.

When not analyzing the latest happenings with Apple, Yoni enjoys catching Improv shows in Chicago, playing soccer, and cultivating new TV show addictions.