Microsoft explains why its cute AI chatbot became a crazed Nazi in under a day

March 25th, 2016 at 4:08 PM
Earlier this week, we brought you the tragicomic story of Tay, an artificial intelligence chatbot that was designed to interact with and learn from people between the ages of 18 and 24. Unfortunately for Microsoft, however, some racist Twitter trolls figured out a way to manipulate Tay’s behavior to transform it into a crazed racist who praised Hitler and denied the existence of the Holocaust.

That is obviously not a good thing, and Microsoft has penned a follow-up blog post explaining what went wrong and what it plans to do in the future.

“In the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay,” writes Microsoft Research corporate vice president Peter Lee. “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility… Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.”

In case you don’t remember, here are some of the “wildly inappropriate” things that trolls tricked poor Tay into saying:

So what is Microsoft going to do to prevent this supposedly innocent AI from turning into the world’s first virtual genocidal maniac? Lee says that while Microsoft is working to patch up the holes exploited by Twitter trolls, it will still put Tay out in public for anyone on the Internet to interact with however they see fit.

“To do AI right, one needs to iterate with many people and often in public forums,” Lee explains. “We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”
