Everything You Need to Know about The Facebook Papers
A coalition of seventeen news outlets published a series of articles known as “The Facebook Papers”. Some people are now calling this Facebook’s biggest crisis ever. The papers are a collection of thousands of internal documents that were turned over to the U.S. Securities and Exchange Commission earlier this year by Facebook whistleblower and former product manager Frances Haugen.
The articles from the news outlets painted a detailed portrait of how keenly aware Facebook was of its platform’s harmful effects. Haugen has repeatedly said that Facebook puts “growth over safety,” particularly in developing areas of the world where the company lacks the language and cultural expertise to moderate content without fostering division among users.
Here are the biggest revelations from the documents:
Facebook fails to moderate harmful content in developing countries
The documents revealed that problems with hate speech and misinformation are dramatically worse in the developing world, where content moderation is often weaker. In India, for example, Facebook reportedly lacked sufficient resources and expertise in the country’s 22 officially recognized languages, leaving the company unable to grapple with a rise in anti-Muslim posts and fake accounts tied to the country’s ruling party and opposition figures.
Facebook AI fails to accurately detect dangerous content in non-English languages
Facebook has long relied on artificial-intelligence systems, in combination with human review, to remove dangerous content from its platforms. But languages spoken outside of North America and Europe have made Facebook’s automated content moderation much more difficult.
Haugen told British lawmakers, “Facebook says things like, ‘we support 50 languages,’ when in reality, most of those languages get a tiny fraction of the safety systems that English gets. UK English is sufficiently different that I would be unsurprised if the safety systems that they developed primarily for American English were actually [under-enforced] in the UK.”
Facebook labeled election misinformation as “harmful, non-violating” content
Internal documents confirmed by multiple news outlets show that Facebook employees repeatedly raised red flags about misinformation and inflammatory content on the platform during the 2020 presidential election, but company leaders did little to address the issues. Posts alleging election fraud were labeled by the company as “harmful” but “non-violating” content.
False narratives about election fraud fell into that category of content, which under Facebook’s policies does not violate any rules. It is a gray area that allows users to spread claims about a stolen election without crossing any lines that would warrant content moderation.