With European parliamentary elections approaching later this month, Facebook has set up an operations room to monitor misinformation, fake accounts, and electoral interference that violates the site's rules, The New York Times and The Guardian report. The effort is designed to head off the kinds of large-scale campaigns that could influence elections.
The Times says the room is similar to the one Facebook set up in October 2018 ahead of that year's elections in the United States and Brazil, which the company closed at the end of November. Facebook also established a similar center in Delhi ahead of this year's elections in India. According to the reports, the new room, located at Facebook's European headquarters in Ireland, will remain open through the upcoming elections, which run from May 23 to 26.
In January, Facebook announced a series of new tools, launched in March, designed to "help prevent foreign interference in upcoming elections and make political advertising on Facebook more transparent."
The Guardian notes that the room is staffed by "about 40 people," including native speakers of "the 24 official languages of the EU." The Times points out that Facebook would not say what actions the center has taken since it opened, but the company did outline its process: the assembled team reviews material flagged by its automated systems or by users, and team members examine it and recommend whether it should be removed. In some cases, flagged material can lead to the mass removal of posts and accounts.
Both reports note that the company still has trouble locating and removing bad actors, pointing to a campaign Facebook recently took down in Spain ahead of its election. The Guardian points out that Facebook's systems did not detect the campaign, and Facebook's head of cybersecurity policy, Nathaniel Gleicher, acknowledged that Facebook cannot handle the problem on its own: "The reality of security is that you need as many people focused on the problem as possible."
He stressed that the company is addressing abuse in two ways: using artificial intelligence to make it harder for bad actors to manipulate its systems, and removing offending accounts quickly. The company, he says, is trying to make "bad actors spend their time trying to defeat the filter, instead of trying to direct their messages." The Times also notes that Facebook is playing a kind of cat-and-mouse game, reacting to groups as they change their methods to get around each new countermeasure. In the long term, Gleicher tells The Times, the company is working to harden Facebook against manipulation, making it more difficult for bad actors to spread misinformation across the platform.