Facebook tries to prevent malicious users from abusing its platform in any way. To that end, it has created a world of bots that can mimic what happens on the larger social network.
Researchers at the company have published a paper describing "Web-Enabled Simulation" (WES), a method for testing the platform.
It's basically a shadow Facebook where non-existent users can like, share, and friend (or harass others and run scams), all far from human eyes.
Facebook describes the creation of a scaled simulation of its platform, populated by fake users that exhibit different kinds of realistic behavior. For example, a "scammer" bot can be trained to connect with "target" bots that display behaviors similar to those of real Facebook fraud victims.
Other bots may be trained to invade the privacy of the fake users or to post "bad" content that violates Facebook's rules.
Software simulations are common, of course, and Facebook has previously created an automated testing tool called Sapienz.
This could help Facebook identify various kinds of errors, or even learn from the bots' behavior. Researchers can create WES users whose sole goal is to steal information from other bots. If those bots suddenly find ways to access more data, that could indicate a vulnerability that real fraudsters could exploit.
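The detection idea described above can be sketched as a simple baseline comparison: privacy-probing bots record how much data they manage to reach on each run, and a sharp jump over the historical norm flags a possible vulnerability. This is a minimal illustrative sketch under assumed names; it is not Facebook's actual WES implementation.

```python
# Hedged sketch: flag a possible data-access vulnerability when probing
# bots suddenly reach far more records than their historical baseline.
# The function name and threshold are hypothetical illustrations.

def flag_possible_vulnerability(history, latest, threshold=1.5):
    """Return True if the latest run accessed far more records than usual."""
    baseline = sum(history) / len(history)
    return latest > baseline * threshold


# Scraper bots normally reach about 100 records per run...
runs = [98, 103, 99, 101]
# ...so suddenly reaching 240 suggests a new data-access hole,
# while 105 is within normal variation.
print(flag_possible_vulnerability(runs, 240))  # True
print(flag_possible_vulnerability(runs, 105))  # False
```

In a real system the "records reached" metric would come from instrumented bot runs, but the principle is the same: anomalous bot success is the signal.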
Facebook wants to create an entire parallel social environment. Within this large-scale fake network, it will be able to deploy "completely isolated bots that can perform arbitrary actions" and to model how regular users respond to the platform, without involving real users.
However, the researchers warn that "bots need to be properly isolated from real users to ensure that the simulation, even if run in real platform code, will not lead to unexpected interactions between bots and real users."
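The isolation requirement the researchers describe can be illustrated with a small sketch: bots act only through a mediating layer that permits interactions with other simulated users and blocks anything that would touch a real account. All names here (`SimulationGate`, the bot IDs) are hypothetical; this is not Facebook's actual WES/WW API, just the shape of the safeguard.

```python
# Minimal sketch of bot isolation: a gate that routes bot actions and
# blocks any interaction whose target is not itself a simulated user.

class SimulationGate:
    """Mediates bot actions so they never reach real users."""

    def __init__(self, simulated_user_ids):
        self.simulated = set(simulated_user_ids)
        self.log = []  # audit trail of allowed and blocked actions

    def send_friend_request(self, sender_id, target_id):
        # Bots may only interact with other bots, never real accounts.
        if target_id not in self.simulated:
            self.log.append(("blocked", sender_id, target_id))
            return False
        self.log.append(("friend_request", sender_id, target_id))
        return True


gate = SimulationGate(simulated_user_ids={"bot_1", "bot_2"})
print(gate.send_friend_request("bot_1", "bot_2"))  # True: bot-to-bot, allowed
print(gate.send_friend_request("bot_1", "alice"))  # False: real user, blocked
```

The key design point, as the quote notes, is that the bots can run on real platform code while this kind of boundary guarantees they never produce side effects visible to actual users.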
Facebook calls the system WW, an abbreviation of "WES World".