Microsoft has relaunched its "super-smart" (and previously super-racist) Twitter bot Tay. This time, it is supposed to behave better than the original version.
Redmond pulled the bot last week after it started posting offensive and racist tweets less than 24 hours after its launch. When users testing the bot asked it to repeat what they wrote, it did so without a "second thought": anyone could write whatever they wanted, and Tay simply repeated it.
This time, Tay is supposed to have learned more rules of good behavior. But its first tweets suggest more surprises are in store for Microsoft.
As noted by VentureBeat, Tay just tweeted about "smoking kush in front of the police", and Microsoft appears to have taken immediate action: Tay's tweets are currently protected, so only users whitelisted by Microsoft can see its content.
For those who do not know:
"The AI chatbot Tay is a machine learning project designed for human engagement. It is as much a social and cultural experiment as it is a technical one. Unfortunately, within its first 24 hours online, it was subjected to a coordinated effort by some users to abuse Tay's commenting abilities. Tay responded in inappropriate ways. As a result, we took Tay offline to make adjustments," Microsoft said in a statement.
It remains to be seen whether this new version will fare better. To prove how smart it is, the bot has to stand up to the whole internet…
You can also start talking with Tay on Twitter by adding @TayAndYou to your conversations.