Microsoft has apologized in a blog post for the racist and aggressive tweets posted by its Tay chatbot.
As we reported in an earlier article, the company developed an Artificial Intelligence (AI) chatbot called "Tay" and launched it on Twitter earlier this week, but things did not go as planned.
For those unfamiliar with it, Tay is an artificial-intelligence chatbot that Microsoft introduced on Wednesday. It was designed to hold conversations with people on social media platforms such as Twitter, Kik, and GroupMe, and to learn from those interactions.
However, in less than 24 hours the company took Tay offline after it made shockingly racist remarks, glorified Hitler, and disparaged feminists.
In a blog post on Friday, Peter Lee, Corporate Vice President of Microsoft Research, apologized for Tay's offensive behavior and said the bot had been targeted by malicious users.
Indeed, within 16 hours of its release, Tay began expressing admiration for Hitler and hatred for Jews and Mexicans, and made sexist comments. It also accused former US President George W. Bush of being behind the 9/11 terrorist attacks.
Because Tay was programmed to learn from people, some of its offensive tweets were simply the result of users asking it to repeat what they had written.
"It was a coordinated attack by a subset of people who took advantage of a Tay vulnerability," Lee said. "As a result, Thai published outrageously inappropriate and reprehensible words and images."
The exact nature of the vulnerability was not disclosed. Microsoft has deleted 96,000 of Tay's tweets and suspended the experiment for the time being.