Yesterday, the news cycle worked overtime to first bring us the story of how an Amazon Echo secretly sent recordings of a couple’s conversations to a friend, and then to explain why that happened. Amazon says that the glitch was caused by the Echo inadvertently interpreting a word as “Alexa,” a follow-up phrase as “send message,” and then the name of someone who was in their contacts list.

Amazon is trying to brush the story off as “hey, that’s weird!” and in isolation, this bug doesn’t seem all that serious. But what this story reveals is how little we actually know about our smart speakers, what they’re recording that they shouldn’t be, and how often they listen in on conversations that they shouldn’t.

In its initial statement, Amazon said that it was an “extremely rare occurrence”; in a follow-up, it said that “unlikely as this string of events is, we are evaluating options to make this case even less likely.”

Nowhere in there is any data on how often things like this happen. Amazon, Google, and Apple don’t offer any data on false positives for their always-listening smart speakers, which is a terrifying prospect for our privacy. We’ve let always-on devices into our homes because we’ve been repeatedly promised that they’re not listening until they hear a “wake word,” like an obedient butler standing discreetly outside the room until summoned.

But as this event shows, we don’t really have a handle on what they’re listening to. Last October, an Android Police blogger found that his Google Home Mini was recording nearly everything he said and uploading it to Google’s servers, something he would never have known about if he hadn’t been checking the activity log for his device. We don’t have any data on how often these kinds of false positives happen, and the mere fact that they can should be cause for concern.

Sure, you can use the mute switch on most devices to physically disable the microphone, but I think the common assumption with these devices is that they’re not listening until triggered, rather than that they’re always listening but only sometimes choosing to act on your words.

The solution? Throwing every always-listening device in the trash isn’t realistic at this point, but a small dose of transparency would go a long way. Until Google and Amazon provide some data on how often their devices are triggered without a meaningful command following, we could all do with a bit more skepticism.
