Microsoft has apologized in a blog post for the racist and aggressive tweets posted by its Tay chatbot.
As we mentioned in an earlier publication, the company developed an artificial intelligence (AI) chatbot named “Tay.” Microsoft let it loose on Twitter earlier this week, but things didn't go as planned.
For those unfamiliar with it, Tay is an artificial intelligence chatbot that Microsoft introduced on Wednesday. It was designed to hold conversations with people on social media platforms such as Twitter, Kik, and GroupMe, and to learn from them.
However, in less than 24 hours the company pulled Tay offline after it posted incredibly racist comments that glorified Hitler and disparaged feminists.
In a blog post on Friday, Peter Lee, Corporate Vice President of Microsoft Research, apologized for Tay's offensive behavior and said the chatbot had been attacked by malicious users.
Indeed, within 16 hours of Tay's release, it began expressing admiration for Hitler and hatred for Jews and Mexicans, and making sexist comments. It also accused US President George Bush of the 9/11 terrorist attacks.
This happened because Tay was programmed to learn from people, and some of its offensive tweets were prompted by users who asked it to repeat what they had written.
“It was a coordinated attack by a subset of people who exploited a vulnerability in Tay,” Lee said. “As a result, Tay published outrageously inappropriate and objectionable words and images.”
The exact nature of the vulnerability was not disclosed. Microsoft has deleted 96,000 of Tay's tweets and suspended the experiment for now.