Inside Facebook's Fast-Growing Content-Moderation Effort

Speaking to an audience of content-moderation experts, a Facebook executive gave a rare look inside the company's post-policing team.

The Facebook logo reflected in a person's pupil (Dado Ruvic / Reuters)

Monika Bickert is a serious, impressive person. Before she became Facebook’s head of global policy management, she put her Harvard law degree to work as an assistant U.S. attorney going after corrupt government officials.

On February 2, at the Santa Clara University School of Law’s Content Moderation and Removal at Scale conference, organized by Eric Goldman, the director of the school’s High-Tech Law Institute, Bickert spoke carefully and precisely about how Facebook’s content-moderation team and policies are constructed.

Bickert emphasized that humans are deeply necessary to the project of content moderation, saying that Facebook now has 7,500 content moderators around the world, meeting the hiring goal Mark Zuckerberg set in May of 2017, when the company had only 4,500. In other words, over the last eight months Facebook has added roughly as many content moderators as Twitter or Snapchat employs in total.

And they’re not hiring most of those people in Silicon Valley.

“Content reviewers tend to be hired for their language expertise, and they don’t tend to come with any predetermined subject-matter expertise. Mostly they are hired, they come in, and they learn all of the Facebook policies, and then over time, they develop an expertise in one area,” she said. “The review team is structured in such a way that we can provide 24/7 coverage around the globe. That means that we often are trying to hire a Burmese speaker in Dublin, or come up with other ways of staffing languages so that the content can be reviewed or responded to within 24 hours. That’s our goal. We don’t always hit it.”

As social media has become a cultural and political battleground, content moderation has become a pressing topic for the technology industry’s biggest companies. The Santa Clara conference follows one at UCLA late last year that focused on the labor that goes into dealing with “the basic grossness of humans.” The internet companies have taken over the traditional role of governments in allowing and limiting speech within their virtual walls. And they’ve all struggled to do so fairly. In addition to Facebook, representatives from Google, Pinterest, Reddit, Yelp, and a host of other companies spoke at the event.

To give an idea of the relative scale of Facebook’s efforts, Google’s Nora Puckett said that the company’s entire trust and safety team is 10,000 people, which includes far more people than just content reviewers. At the other end of the spectrum, Pinterest’s Adelin Cai said the team moderating the service’s 200 million users is composed of only 11 full-time people.

At Facebook, 60 people are dedicated just to crafting the policies for the company’s content moderators. These policies are not what you read in Facebook’s terms of service or community standards. They are a deep, highly specific set of operational instructions for content moderators, reviewed constantly by Bickert’s team and in a larger intra-Facebook gathering every two weeks. For example, one rule that came to light in a Guardian investigation held that while nudity is generally forbidden on Facebook, adult nudity is permitted in the context of historical Holocaust photographs.
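To give a sense of how a context exception like that might be expressed, here is a minimal sketch in Python; the rule structure, field names, and values are invented for illustration and are not Facebook’s actual guidebook format.

```python
# Hypothetical sketch of a granular moderation rule with a context
# exception; all names and values here are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class Rule:
    name: str
    default_action: str                       # "remove" or "allow"
    allowed_contexts: set = field(default_factory=set)

    def decide(self, context: str) -> str:
        # A context listed as an exception overrides the default action.
        return "allow" if context in self.allowed_contexts else self.default_action


adult_nudity = Rule(
    name="adult_nudity",
    default_action="remove",
    allowed_contexts={"historical_holocaust_photograph"},
)

print(adult_nudity.decide("unspecified"))                       # remove
print(adult_nudity.decide("historical_holocaust_photograph"))   # allow
```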

“Every week, there are updates to those policies. Sometimes it’s something little. [For example] this word, in Korean, is no longer being used as a slur, now we’re seeing people try to take it back and we have to evaluate it differently,” Bickert said.

At that biweekly content meeting, different teams across the company—engineering, legal, the content reviewers, external partners like nonprofit groups—provide recommendations to Bickert’s team for inclusion in the policy guidebook. Bickert called it a “mini legislative session.”

Her colleague Neil Potts, who spoke later, also emphasized the similarity between what Facebook is doing and what government does. “We do really share the goals of government in certain ways,” he said. “If the goals of government are to protect their constituents, which are our users and community, I think we do share that. I feel comfortable going to the press with that.”

The content rules are so detailed because Facebook wants to reduce the bias- and judgment-based variability of the decisions that content reviewers make.

“We try to make sure that our standards are sufficiently granular so that they don’t leave a lot of room for interpretation,” Bickert said. “We know people are going to disagree. Reviewers are gonna have different ideas about what level of nudity is offensive or what level of graphic violence is something we should take down. Or, should you be able to use certain words? What constitutes an ethnic slur? We have very specific guidance, so that if the person is in the Philippines, in India, in Texas they are gonna reach the same decision.”

And to ensure that this is happening, the company runs ongoing audits of reviewers’ work to see “if that person’s accuracy is where it needs to be and if that person’s decisions are matching our policies.”
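In the abstract, that kind of audit amounts to re-checking a sample of a reviewer’s decisions against the decisions the policy calls for and computing an agreement rate. A minimal sketch follows, with invented sample data and an invented accuracy bar; Bickert did not describe how Facebook actually scores its reviewers.

```python
# Minimal sketch of a reviewer-accuracy audit: compare a sample of a
# reviewer's decisions to the decisions the policy calls for.
# The sample data and the 95% threshold are invented for illustration.

def audit(decisions: list[tuple[str, str]], threshold: float = 0.95) -> bool:
    """Each tuple is (reviewer_decision, policy_correct_decision)."""
    matches = sum(1 for reviewer, correct in decisions if reviewer == correct)
    accuracy = matches / len(decisions)
    print(f"accuracy: {accuracy:.1%}")
    return accuracy >= threshold


sample = [("remove", "remove"), ("allow", "remove"), ("allow", "allow"),
          ("remove", "remove"), ("allow", "allow")]
print("meets bar:", audit(sample))   # accuracy: 80.0% -> meets bar: False
```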

But the downside of having such specific rules is that they are blunt. “There are always situations where we look at a specific piece of content that technically doesn’t violate our hate-speech policy, but when you look at it, you think, ‘Wow, as we sit here and look at it, we all think this looks like hate speech,’” Bickert said. “So, you’re gonna have those uncomfortable ones that are close to the line, but something that we have to do is have these granular standards so that we can control for bias.”

One revealing anecdote from Bickert’s presentation seemed to show that Facebook hasn’t always taken content review as seriously as it does now. When Facebook Live launched, the technical tool the company had for reviewing videos did not show which part of a video tended to generate user flags. So, if a Facebook Live video was two hours long, reviewers had to skim through it to figure out where the objectionable material might be.

“The review tool for the content reviewers who were looking at the videos that were reported proved to not be what we needed,” she said. “It didn’t allow reviewers sufficient flexibility in going back and looking at the video.”
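Bickert didn’t describe the tooling Facebook eventually built, but one plausible approach, sketched here purely as an illustration with invented data, is to record a timestamp with each user report so a review tool can point reviewers to the most-flagged segment instead of making them skim hours of footage.

```python
# Illustrative sketch only: bucket user reports by video timestamp so a
# reviewer can jump to the most-flagged segment of a long video.
# The report data and the 60-second bucket size are invented.
from collections import Counter


def most_flagged_segment(report_timestamps_sec, bucket_sec=60):
    """Return (start_sec, end_sec, report_count) for the busiest bucket."""
    buckets = Counter(int(t // bucket_sec) for t in report_timestamps_sec)
    busiest, count = buckets.most_common(1)[0]
    return busiest * bucket_sec, (busiest + 1) * bucket_sec, count


reports = [305, 312, 330, 3601, 340, 355, 7100]   # seconds into the video
start, end, count = most_flagged_segment(reports)
print(f"{count} reports between {start}s and {end}s")   # 5 reports between 300s and 360s
```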

That sure seems like the kind of thing you might want to have locked down before careening into a massive live-video push. But that was the Zuckerberg of 2015, back in his preelection naïveté, when he was a mere engineer and not a community builder.

Facebook does seem to have gotten religion on the topic over the last year and a half, but the content-moderation challenge is different from the many competitive bouts and platform shifts that the company has proven able to overcome. It’s not a primarily technical challenge that can be solved by throwing legions of engineers at the problem.

“That’s a question we get asked a lot: When is AI going to save us all?” Bickert said. “We’re a long way from that.”

The current stable of machine-learning technologies is not good at looking at the context of a given post or user or community group. That’s just not how those tools work, and so the wild advances we’ve seen in other domains are not being realized in this one.

“There are some areas where technical tools are helping us do this job,” Bickert said. “But the vast majority, when we’re looking at hate speech or we’re looking at bullying or we’re looking at harassment, there is a person looking at it and trying to determine what’s happening in that offline world and how that manifests itself online.”

Alexis Madrigal is a contributing writer at The Atlantic and the host of KQED’s Forum.