WSJ: Facebook’s Own Engineers Doubt the Abilities of Its AI Systems

In this image from video, Facebook CEO Mark Zuckerberg testifies during a House Energy and Commerce Committee hearing at the U.S. Capitol in Washington, Thursday, March 25, 2021. (House Energy and Commerce Committee via AP)

The Wall Street Journal reports that Facebook executives have been heavily promoting the use of AI moderation to censor “hate speech” on its platform — but internal documents show that Mark Zuckerberg’s AI is significantly less advanced than previously believed.

While Facebook executives have been promoting the use of artificial intelligence to censor “hate speech” and violence on the platform, internal documents reviewed by the WSJ show that engineers are not as confident in the company’s AI systems as upper management appears to be.

Facebook boss Mark Zuckerberg and Sheryl Sandberg (Kevin Dietsch/Getty)

According to the documents, Facebook’s AI system is unable to consistently identify videos of shootings filmed from a first-person perspective. In another case that baffled internal Facebook researchers, the AI was unable to distinguish footage of cockfighting from footage of car crashes.

Mark Zuckerberg throwing spears (Mark Zuckerberg/Facebook)

The documents also appear to show that Facebook employees estimated the company removes only a small fraction of posts that violate its rules — a single-digit percentage, according to some employees. When Facebook’s algorithms are not certain that a piece of content violates the site’s rules, they slow the content’s reach rather than remove it.

The Wall Street Journal writes:

The documents reviewed by the Journal also show that Facebook two years ago cut the time human reviewers focused on hate-speech complaints from users and made other tweaks that reduced the overall number of complaints. That made the company more dependent on AI enforcement of its rules and inflated the apparent success of the technology in its public statistics.

According to the documents, those responsible for keeping the platform free from content Facebook deems offensive or dangerous acknowledge that the company is nowhere close to being able to reliably screen it.

“The problem is that we do not and possibly never will have a model that captures even a majority of integrity harms, particularly in sensitive areas,” wrote a senior engineer and research scientist in a mid-2019 note.

One senior engineer estimated that the company’s automated systems removed posts that generated 2 percent of the views of hate speech on the platform. The engineer stated: “Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term.”

Read more at the Wall Street Journal here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or contact via secure email.
