Alexa, Siri, and other virtual assistant apps are great for small tasks no matter where you are, but actually talking to your AI helper in public is still sort of… well, weird. That awkward feeling could one day be a thing of the past thanks to a new voice-free communication option developed at MIT. Without the wearer saying a word, the gadget can detect what they are about to say and translate it into input for an AI assistant.

The system, which MIT calls AlterEgo, is remarkably good at reading your almost-speech without you having to actually say anything. Unfortunately, the early prototype that MIT is currently showing off is so bulky and obvious that it makes talking to Siri in public seem like the less awkward option.

As you can see in the video provided by MIT, the wearable latches onto the side of the user’s face. It reads subtle nonvocal cues, translates them into actual words, and then feeds those into an AI assistant, which responds in kind. The response is transmitted through “bone conduction,” meaning the wearer hears it but nobody else can. It sounds really fantastic, and it would be the ideal solution to the problem of talking to your phone in public, but at this point the design obviously leaves a lot to be desired.

I mean, just look at this thing. If you thought people who walked around in public with Google Glass on their faces stood out, imagine seeing someone with a massive plastic mandible that attaches to an entire side of their face.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” Arnav Kapur, a graduate student from MIT Media Lab and lead developer of AlterEgo, explains. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

It seems the team has accomplished that task; now, if they can just shrink the device to be a bit less invasive, they’ll really be onto something great.
