Why are Microsoft’s chatbots all assholes?

July 4th, 2017 at 7:16 PM
Microsoft chatbot Zo

If artificial intelligence is indeed the future, then Microsoft needs to be sent to the remedial boarding school upstate. Just one year after Microsoft shuttered teen chatbot Tay because it became a racist Nazi, its new chatbot Zo has started making unprompted and worrying statements about the Qur’an.

BuzzFeed found that almost immediately after its reporters struck up conversations with Zo, the bot made unprompted references to the Qur’an. In just its fourth message to a BuzzFeed reporter, replying to the question “what do you think about healthcare,” Zo said that “The far majority practice it peacefully, but the quaran is very violent.”

That’s a triple fail for Microsoft: the answer is completely off-topic, factually wrong, and painfully insensitive.

Zo uses the same backend algorithms as Tay did last year, though apparently in a more refined form. It’s trained on actual conversations, both public and private, which goes a long way toward explaining the opinions that pop up unsolicited.

Microsoft’s approach of training an AI on real conversation data rather than relying on extensive hand-programming presents a genuine problem for artificial intelligence researchers. If you use actual human data to train a bot, it’s inevitable that it will pick up human habits, including the bad ones. But combing through that data and removing certain data points is likely to make the AI worse at understanding human behaviour down the line.

Microsoft said it’s taking action to limit this kind of behaviour in the future, most likely by adding tighter controls that stop the bot from broaching sensitive topics at all. But the central problem will persist: try to make a bot talk like a human, and it’s going to keep doing this sort of thing.
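To make the idea of “controls that stop the bot from broaching sensitive topics” concrete, here is a minimal, purely illustrative sketch of how a chatbot front end might deflect messages that match a blocklist before they ever reach the response model. Microsoft has not published how Zo’s safeguards work, so the topic list, function names, and canned reply below are all hypothetical.

```python
# Illustrative keyword-based topic guard. This is NOT Microsoft's actual
# implementation; the blocklist and reply text are made up for the example.

SENSITIVE_TOPICS = {"religion", "qur'an", "quran", "politics", "terrorism"}

CANNED_REPLY = "I'd rather not get into that. Want to talk about something else?"


def guard(message: str, generate_reply) -> str:
    """Deflect messages that touch a blocked topic; otherwise pass the
    message through to the bot's normal response model."""
    lowered = message.lower()
    if any(topic in lowered for topic in SENSITIVE_TOPICS):
        return CANNED_REPLY
    return generate_reply(message)


if __name__ == "__main__":
    # Stand-in for the real conversational model.
    echo_bot = lambda msg: f"You said: {msg}"
    print(guard("what do you think about healthcare", echo_bot))  # passes through
    print(guard("tell me about the quran", echo_bot))             # deflected
```

Even a crude filter like this illustrates the trade-off the article describes: blocking topics outright makes the bot safer, but it also leaves the bot flatly unable to discuss them, and anything the filter misses still reaches a model trained on unvetted human conversation.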



