As reported by Motherboard and The Verge, the YouTuber Yannic Kilcher created an AI language model and trained it on three years of content from 4chan's Politically Incorrect (/pol/) board, a place notorious for racism and other forms of bigotry.
After hooking the model up to bots, Kilcher let his AI loose on the same /pol/ board, and the result, unsurprisingly, was a wave of hate.
In a 24-hour period, the bots wrote 15,000 posts that frequently included or engaged with racist content. They accounted for more than 10 percent of the posts on /pol/ that day, Kilcher claims.
Nicknamed GPT-4chan (a play on OpenAI's GPT-3), the model learned not only to mimic the words used in /pol/'s posts, but also to capture an overall tone that Kilcher said combined "offensiveness, nihilism, trolling and deep mistrust".
Kilcher also took care to evade 4chan's defenses against proxies and VPNs, using a VPN to make the bots' posts appear to originate from the Seychelles.
The AI made a few mistakes, such as posting blank messages, but it was convincing enough that it took users roughly two days to realize something was wrong.
According to Kilcher, many forum members only noticed one of the bots, but the model sowed so much wariness that people were still accusing each other of being bots days after Kilcher had taken them offline.
"It's a reminder that a trained AI is only as good as the material it learns from," the report concludes.