Microsoft has apologized in a blog post for the racist and offensive tweets posted by its Tay chatbot.
As we mentioned in an earlier publication, the company developed an artificial intelligence (AI) chatbot named "Tay" and let it loose on Twitter earlier this week, but things didn't go as planned.
For those who do not know, Tay is an artificial intelligence chatbot unveiled by Microsoft on Wednesday. It was supposed to hold conversations with people on social platforms such as Twitter, Kik, and GroupMe, and to learn from them.
However, in less than 24 hours the company withdrew Tay after it made shockingly racist remarks that glorified Hitler and demeaned feminists.
In a blog post on Friday, Microsoft Research Corporate Vice President Peter Lee apologized for Tay's disruptive behavior and said the bot had been attacked by malicious users.
Indeed, within 16 hours of Tay's release, the bot began expressing admiration for Hitler and hatred for Jews and Mexicans, and started making sexist comments. It also accused former US President George Bush of being behind the 9/11 terrorist attacks.
Because Tay was programmed to learn from people, some of its offensive tweets came from users who simply asked it to repeat what they had written.
"It was a coordinated attack by a subset of people who took advantage of a Tay vulnerability," Lee said. "As a result, Thai published outrageously inappropriate and reprehensible words and images."
The exact nature of the vulnerability was not disclosed. Microsoft has deleted 96,000 of Tay's tweets and suspended the experiment for now.