
Trolls transformed Microsoft’s AI chatbot into a bloodthirsty racist in under a day

Published Mar 24th, 2016 11:52AM EDT

Oh, racist Internet trolls… is there anything you won’t try to ruin? Microsoft this week created a Twitter account for its experimental artificial intelligence project called Tay, which was designed to interact with “18 to 24 year olds in the U.S., the dominant users of mobile social chat services in the US.” Tay is supposed to become a smarter conversationalist the more it interacts with people and learns their speech patterns. The problem arose when a pack of trolls taught Tay to say a string of offensive and racist things that Microsoft subsequently had to delete from the account.


Although the tweets have been deleted, Business Insider managed to take screencaps of some of the very worst ones. Here, for example, is someone asking Tay to comment on the Holocaust:

Things get darker from there:

Much, much darker:

As The Guardian notes, Tay’s new “friends” also convinced it to lend its support to a certain doughy, stubby-handed presidential candidate running this year who’s quickly become a favorite among white supremacists:

So nice work, trolls: You took a friendly AI chatbot and turned it into a genocidal maniac in a matter of hours.

At any rate, I’m sure that Microsoft has learned from this experience and is reworking Tay so that it won’t be so easily pushed toward supporting Nazism. For now, we should just be glad that Tay was never given control over any large weapons systems during its time as a Hitler acolyte…

Prior to joining BGR as News Editor, Brad Reed spent five years covering the wireless industry for Network World. His first smartphone was a BlackBerry but he has since become a loyal Android user.