Microsoft has apologized in a blog post for the racist and aggressive tweets posted by its Tay chatbot.
As we mentioned in an earlier publication, the company developed an Artificial Intelligence (AI) chatbot named "Tay." The company launched it on Twitter earlier this week, but things did not go as planned.
For those who do not know, Tay is an artificial intelligence chatbot unveiled by Microsoft on Wednesday. It was supposed to hold conversations with people on social media platforms such as Twitter, Kik, and GroupMe, and to learn from them.
However, in less than 24 hours, the company withdrew Tay after a string of shockingly racist remarks that glorified Hitler and disparaged feminists.
In a blog post on Friday, Peter Lee, Corporate Vice President of Microsoft Research, apologized for Tay's disturbing behavior and said the bot had come under attack by malicious users.
Indeed, within 16 hours of Tay's release, the bot began expressing admiration for Hitler, hatred of Jews and Mexicans, and making sexist comments. It also accused US President George Bush of the 9/11 terrorist attacks.
Because Tay was programmed to learn from people, some of its offensive tweets were elicited by users who simply asked it to repeat what they had written.
"It was a coordinated attack by a subset of people who took advantage of a Tay vulnerability," Lee said. "As a result, Thai published outrageously inappropriate and reprehensible words and images."
The exact nature of the vulnerability was not disclosed. Microsoft has deleted 96,000 of Tay's tweets and suspended the experiment for now.