28th Mar 2016

Microsoft apologises for Tay bot’s racist, sexist, pro-genocide tweets

Jordan Gold

Twitter users had their fun last week when Microsoft’s Tay – an advanced AI program capable of complex conversation – went haywire and started tweeting racist and sexist comments.

The “chatbot”, originally created to interact with 18- to 24-year-olds on social media, is now offline, but it turns out it’s not the robot that’s evil, it’s us, the people talking to it.

Although Microsoft had prepared for the bot to malfunction, a group of mischievous trolls is thought to have targeted it in a “coordinated attack”, bombarding it with offensive conversation all at once.

Whilst Tay was supposedly versatile and could learn new things over time, no limits were placed on what it was able to absorb, meaning it picked up racism and anti-Semitism in less than 24 hours of beta testing.

Microsoft has promised to bring Tay back when “we are confident we can better anticipate malicious intent”. Now Peter Lee, Microsoft’s head of research, has issued an apology in a blog post:

“As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.”

Lee added:

“To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. Tay was not the first artificial intelligence application we released into the online social world.”

After this disaster, it could well be the last.

https://twitter.com/pmarca/status/714250847392169984