Microsoft has once again released the "super-smart" and super-racist Twitter bot Tay. This time, it is supposed to behave better than the original version.
Redmond decided to pull the bot last week after it started circulating offensive and racist tweets less than 24 hours after its launch. When users testing Microsoft's brilliant bot asked it to repeat what they wrote, it did so without a "second thought." So anyone could write whatever they wanted, and Tay simply repeated it.
This time, however, Tay is supposed to have learned more rules of good behavior. But its first tweets likely held some surprises for Microsoft.
As noted by VentureBeat, Tay recently tweeted about "smoking kush in front of the police", but it looks like Microsoft took immediate action. Tay's tweets are currently protected, so only users whitelisted by Microsoft can see the content.
For those who do not know:
“The AI chatbot Tay is a machine learning project, designed for human engagement. It is a social, cultural, and technological experiment. Unfortunately, within its first 24 hours online, it was subjected to a coordinated effort by some users to abuse Tay's commenting skills. Tay responded in inappropriate ways. As a result, we took Tay offline to make adjustments,” Microsoft said in a statement.
It remains to be seen if this new version will work better. To see how smart the bot is, it has to compete with the whole internet…
You too can start talking with Tay on Twitter by mentioning @TayAndYou in your tweets.