Facebook has been under pressure around the world since the 2016 US election to stop the use of bogus accounts and other forms of deception to influence public opinion.
The European Union last month accused Alphabet’s Google, Facebook and Twitter of falling short of their pledges to combat fake news ahead of the European elections, after the companies signed a voluntary code of conduct to stave off regulation.
On Monday, Facebook said it was setting up an operations centre, staffed around the clock by engineers, data scientists, researchers and policy experts, that will coordinate with external organisations.
“They will be proactively trying to identify emerging threats so they can take action on them as quickly as possible,” Tessa Lyons, head of News Feed integrity at Facebook, told journalists in Berlin.
Facebook also announced it is teaming up with Germany’s largest news agency, DPA, to help check the validity of posts, along with Correctiv, a non-profit collective of journalists that has been flagging fake news to the company since January 2017.
It will also train more than 100,000 students in Germany in media literacy and seek to stop paid advertisements from being abused for political ends.
Germany has been particularly proactive in clamping down on online hate speech, passing a law last year that forces companies to delete offensive posts or face fines of up to EUR 50 million ($56.71 million or roughly Rs. 391 crores).
The issue of misinformation and elections became prominent after US intelligence agencies concluded that Russia attempted to influence the outcome of the 2016 US presidential elections in Donald Trump’s favour, partly by using social media. Moscow denied any meddling.
Lyons said Facebook had made progress in curbing fake news over the past two years, adding that it would increase the number of people working on the issue globally from 20,000 now to 30,000 by the end of the year.
Alongside human intervention, she said Facebook is constantly refining its machine learning systems to spot untrustworthy posts and limit their distribution.
“This is a really adversarial space, and whether the bad actors are financially or ideologically motivated, they will attempt to get around and adapt to the work that we are doing,” she said.