Following intense criticism from EU leaders and its own community, Facebook is deploying artificial intelligence to detect terrorist messaging and, more broadly, to curb extremist propaganda on its platform.
The AI will be applied not only to Islamist extremist groups such as ISIS, but to any group that promotes or carries out acts of violence. Under a broader definition of terrorism, that could include violent gangs, drug traffickers, or white nationalists who advocate violence.
Facebook is currently testing natural language processing, building functions that flag violent posts based on the language used in content from previously suspended accounts.
"Today we are experimenting with text analysis that we have already removed to identify terrorist organizations such as ISIS or Al Qaeda. "We're trying to identify text messages using an algorithm that is still in its infancy," said Monika Bickert, director of global policy management and Brian Fishman, director of counterterrorism policy, in a Facebook blog post. .
The social network also highlighted a series of initiatives it has already taken to control extremist content, such as the use of technology to identify terrorist-related photos and videos, the removal of fake accounts, and the creation of an expert advisory group on counterterrorism.
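The photo and video identification mentioned above is generally understood to work by fingerprint matching: new uploads are compared against hashes of content that has already been removed. Below is a minimal sketch of that lookup flow, using exact-match SHA-256 hashing for simplicity (production systems use perceptual hashes that survive re-encoding and cropping; every name here is illustrative):

```python
# Sketch of fingerprint matching for uploads against a store of hashes from
# previously removed images. SHA-256 only catches exact byte-for-byte copies;
# it stands in here for the perceptual hashes a real system would use.
import hashlib

# Hypothetical store of fingerprints from previously removed images.
known_hashes: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Exact-match fingerprint; production systems use perceptual hashes."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_removed_image(image_bytes: bytes) -> None:
    """Record an image that moderators have already removed."""
    known_hashes.add(fingerprint(image_bytes))

def is_known_match(image_bytes: bytes) -> bool:
    """True if an upload matches previously removed content and can be blocked."""
    return fingerprint(image_bytes) in known_hashes
```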
Earlier this month, following the terrorist attack in London that killed seven people, British Prime Minister Theresa May called on nations to reach international agreements to "regulate cyberspace, to prevent the spread of extremist and terrorist planning." Urging Facebook to take immediate action, May partly blamed the social network for the attack, arguing that such platforms give terrorists a "safe space" that helps them operate.
In March, a German minister said companies such as Facebook and Google could face fines of up to $53 million unless they remove hateful posts.
Shortly after May's remarks, Facebook policy director Simon Milner said the company would make its social network a "hostile environment" for terrorists.
Last month, Mark Zuckerberg said the company plans to hire 3,000 additional people for its Community Operations team, on top of the 4,500 it currently employs. The team reviews content that is potentially inappropriate or violates Facebook's policies.
In the same blog post, Facebook also asked for outside ideas: anyone with suggestions on how to stop the spread of terrorism online can write to [email protected].