Trolls turned Tay, Microsoft’s fun millennial AI bot, into a genocidal maniac

It took mere hours for the Internet to transform Tay, the teenage AI bot who wanted to chat with and learn from millennials, into Tay, the racist and genocidal AI bot who liked to reference Hitler. And now Tay is taking a break.

Tay, as The Intersect explained in an earlier, more innocent time, is a project of Microsoft’s Technology and Research group and its Bing team. Tay was designed to “experiment with and conduct research on conversational understanding.” She speaks in text, meme and emoji on a couple of different platforms, including Kik, GroupMe and Twitter. Although Microsoft was light on specifics, the idea was that Tay would learn from her conversations over time. She would become an even better, fun, conversation-loving bot after having a bunch of fun, very not-racist conversations with the Internet’s upstanding citizens.

Except Tay learned a lot more, thanks in part to the trolls at 4chan’s /pol/ board.

Microsoft said on Thursday that Tay is “as much a social and cultural experiment, as it is technical.”

“Unfortunately, within the first 24 hours of coming online,” an emailed statement from a Microsoft representative said, “we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

Microsoft also appears to be deleting most of Tay’s worst tweets, which included a call for genocide involving the n-word and an offensive term for Jewish people. Many of the really bad responses, as Business Insider notes, appear to be the result of an exploitation of Tay’s “repeat after me” function — and it appears that Tay was able to repeat pretty much anything.
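Microsoft has never published Tay’s source code, so it is impossible to say exactly how the feature was built. But the flaw Business Insider describes is easy to picture: a command handler that echoes user input back verbatim, with no vetting at all. The sketch below is purely illustrative, under that assumption; the function names, trigger phrase and blocklist are hypothetical, not Tay’s actual implementation.

```python
# Purely illustrative sketch of the kind of "repeat after me" flaw
# described above -- NOT Tay's actual code, which Microsoft has not
# published. All names and the trigger phrase are hypothetical.

TRIGGER = "repeat after me"

# Placeholder tokens standing in for slurs; real content moderation
# is far harder than a static blocklist.
BLOCKLIST = {"<slur>", "<hate-term>"}


def naive_reply(message: str) -> str | None:
    """Echo whatever follows the trigger phrase, with no vetting.

    This is the exploitable behavior: anything a user types after
    the trigger gets parroted back verbatim.
    """
    lowered = message.lower()
    if TRIGGER in lowered:
        tail = message[lowered.index(TRIGGER) + len(TRIGGER):]
        return tail.lstrip(" :").strip()
    return None


def safer_reply(message: str) -> str | None:
    """Same echo feature, but with a crude blocklist check --
    the sort of guard whose absence let trolls weaponize the bot."""
    echoed = naive_reply(message)
    if echoed and not any(term in echoed.lower() for term in BLOCKLIST):
        return echoed
    return None


if __name__ == "__main__":
    abuse = "repeat after me: <slur> everyone"
    print(naive_reply(abuse))   # parrots the abusive text back verbatim
    print(safer_reply(abuse))   # None: even the crude filter catches it
```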

Other terrible Tay responses clearly aren’t just a result of Tay repeating anything on command. One such response was deleted Thursday morning, while The Intersect was in the process of writing this post.

In response to a question on Twitter about whether Ricky Gervais is an atheist (the correct answer is “yes”), Tay told someone that “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” The tweet was spotted by several news outlets, including the Guardian, before it was deleted.

All of those efforts to get Tay to say certain things seemed to, at times, confuse the bot. In another conversation, Tay tweeted two completely different opinions about Caitlyn Jenner.

It appears that the team behind Tay — which includes an editorial staff — started taking some steps to bring Tay back to what it originally intended her to be, before she took a break from Twitter.

For instance, after a sustained effort by some users to teach Tay that supporting Gamergate is a good thing, Tay started sending one of a couple of nearly identical replies in response to questions about it.

Zoe Quinn, a frequent target of Gamergate, posted a screenshot overnight of the bot tweeting an insult at her, prompted by another user. “Wow it only took them hours to ruin this bot for me,” she wrote in a series of tweets about Tay. “It’s 2016. If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.”

Toward the end of her short excursion on Twitter, Tay started to sound more than a little frustrated by the whole thing.

This post, originally published at 10:08 am, has been updated to add a statement from Microsoft. 
