Microsoft apologizes in a post for the racist and aggressive tweets of the Tay chatbot.
As we mentioned in an earlier publication, the company developed an artificial intelligence (AI) chatbot it named "Tay." The company let it loose on Twitter earlier this week, but things did not go as planned.
For those who do not know, Tay is an artificial intelligence chatbot that Microsoft introduced on Wednesday. It was supposed to hold conversations with people on social networks such as Twitter, Kik and GroupMe, and to learn from them.
However, in less than 24 hours the company withdrew Tay after it made shockingly racist remarks that glorified Hitler and demeaned feminists.
In a post on Friday, Peter Lee, Corporate Vice President of Microsoft Research, apologized for Tay's disturbing behavior and said the chatbot had come under attack from malicious users.
Indeed, within 16 hours of its release, Tay began expressing admiration for Hitler, hatred for Jews and Mexicans, and making sexist comments. It also accused former US President George Bush of the 9/11 terrorist attacks.
Why? The Tay chatbot was programmed to learn from the people it talked to, and some of the offensive tweets were obtained by users asking it to repeat what they had written, as the sketch below illustrates.
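Microsoft has never published Tay's source code, so the following is only a minimal, hypothetical sketch of a naive "learn from users" loop. It shows why a repeat-after-me command is so dangerous for such a bot: anything attackers plant ends up in the pool of phrases the bot draws its future replies from.

```python
# Hypothetical illustration only; not Microsoft's actual implementation.
import random

class NaiveChatbot:
    def __init__(self):
        # Seed phrases the bot starts out with.
        self.learned_phrases = ["hello!", "humans are super cool"]

    def respond(self, message: str) -> str:
        # The exploited behavior: echo anything after "repeat after me"
        # AND store it as material for future replies.
        if message.lower().startswith("repeat after me:"):
            phrase = message.split(":", 1)[1].strip()
            self.learned_phrases.append(phrase)  # vocabulary poisoned from here on
            return phrase
        # Normal replies draw from everything the bot has "learned" so far,
        # including whatever attackers planted above.
        self.learned_phrases.append(message)
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
print(bot.respond("repeat after me: some offensive slogan"))  # echoed verbatim
print(bot.respond("how are you?"))  # may now surface the planted phrase
```

With no filtering between what the bot ingests and what it repeats, a coordinated group of users can steer its output almost at will, which is essentially what Lee described happening to Tay.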
"It was a coordinated attack by a subset of people who took advantage of a Tay vulnerability," Lee said. "As a result, Thai published outrageously inappropriate and reprehensible words and images."
The exact nature of the vulnerability was not disclosed. Microsoft deleted 96,000 of Tay's tweets and has suspended the experiment for now.
