Microsoft has once again released Tay, its "extremely smart" and notoriously racist Twitter bot. This time, it is supposed to behave better than the original version.
Redmond withdrew the bot last week after it began broadcasting offensive and racist tweets less than 24 hours after its release. When users asked Microsoft's clever bot to repeat what they wrote, it did so without a "second thought," so anyone could write whatever they wanted and Tay would simply parrot it.
This time, Tay is supposed to have learned more rules of good behavior. Its first tweets, however, still held surprises for Microsoft.
According to VentureBeat, Tay just tweeted about "smoking kush infront the police" [sic], but Microsoft appears to have taken immediate action: Tay's tweets are currently protected, so only followers approved by Microsoft can view them.
For those who do not know:
"The AI chatbot Tay is a machine learning project designed for human engagement. It is a social, cultural and technological experiment. Unfortunately, within the first 24 hours of coming online, it encountered a coordinated effort by some users to abuse Tay's commenting skills and have Tay respond in inappropriate ways. As a result, we have taken Tay offline to make adjustments," Microsoft said in a statement.
It remains to be seen whether this new version will fare better. To prove how smart it is, the bot will have to hold its own against the entire internet…
You can also start talking with Tay on Twitter by adding @TayAndYou to your conversations.