The good news is that Microsoft’s AI chatbot isn’t a racist anymore…


We have good news and bad news about Microsoft’s chatbot experiment involving Tay, a Twitter account the company hopes will someday interact with other Twitter users in a meaningful way without human assistance. The good news is that Tay is no longer a racist. The bad news is that it has apparently morphed into something else the world doesn’t need any more of: a spambot.


Tay was designed by Microsoft to take to Twitter and learn from young users aged 18 to 24. The hope was that the AI bot would learn so much from the huge number of young Twitter users that it would eventually be able to communicate as a human would.

The problem, of course, is that Twitter is a cesspool.

Trolls immediately went to work on Tay and within a few short hours they had managed to turn the new chatbot into a racist Holocaust denier. Needless to say, Microsoft took the bot offline and went back to work. But as it turns out, there’s a whole lot more work to do than Microsoft thought.

For a brief period on Wednesday morning, Tay came back online. And while the chatbot was no longer posting racist tweets, praising Hitler or spouting other hateful messages, it instead began spamming anyone and everyone who mentioned its name on the site.

Here’s a screenshot captured by TechCrunch:

[Screenshot of Tay’s repeated spam replies]

Microsoft once again turned Tay off and protected the chatbot’s account, and it looks like the company has a great deal of work ahead if it hopes to find a way to get Tay to learn from the right people and ignore the trolls.
