Content Moderation

Publications

Coming Soon

Research Summary

The online proliferation of false, hateful, and illegal content has required social media organizations to monitor and remove content that violates their community standards – a practice known as content moderation. Despite enormous investments in online crowdsourcing, large-scale content moderation faces a deep categorization problem: it is notoriously difficult for people to agree on how to categorize and manage morally controversial content. Social media organizations attempt to reduce bias in content classification by gathering the judgments of people from around the world. But these crowdsourcing efforts often require human coders to classify content in isolation, without any social interaction. Not only has this individual-level approach to crowdsourcing been found to entrench biases, but it also conflicts with the inherently social nature of how categories are defined and applied in daily life. Here, we develop a novel, network-based approach to content moderation that harnesses social processes of meaning construction to produce reliable and unbiased content categories.
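The summary above does not spell out the mechanism, but the contrast between isolated and networked coders can be made concrete with a toy simulation in the spirit of naming-game models of category coordination. Everything in the sketch below – the coder count, the candidate labels, and the update rule – is an illustrative assumption, not the project's actual protocol.

```python
# Minimal sketch (hypothetical, not the authors' method) of networked category
# coordination: paired coders exchange candidate labels for one ambiguous piece
# of content and, through repeated local interactions, converge on a shared
# category -- something isolated coders, by construction, cannot do.
import random

random.seed(42)

N_CODERS = 20    # hypothetical number of crowd workers
N_ROUNDS = 3000  # number of pairwise interactions to simulate
LABELS = ["hate_speech", "satire", "political_speech", "harassment"]

# Each coder starts with a private inventory of plausible labels.
inventories = [set(random.sample(LABELS, k=2)) for _ in range(N_CODERS)]

def interact(speaker, hearer):
    """One naming-game step: the speaker proposes a label; on a match the
    pair aligns on it, otherwise the hearer adopts it as a candidate."""
    label = random.choice(sorted(inventories[speaker]))
    if label in inventories[hearer]:
        # Success: both coders collapse their inventories to the agreed label.
        inventories[speaker] = {label}
        inventories[hearer] = {label}
    else:
        # Failure: the hearer learns the proposed label as a new candidate.
        inventories[hearer].add(label)

for _ in range(N_ROUNDS):
    a, b = random.sample(range(N_CODERS), 2)  # fully connected network
    interact(a, b)

distinct = {frozenset(inv) for inv in inventories}
print(f"Distinct label inventories after {N_ROUNDS} rounds: {len(distinct)}")
print("Shared category:", inventories[0] if len(distinct) == 1 else "none yet")
```

Run repeatedly, this population reliably collapses to a single shared label, whereas the same coders classifying in isolation would retain their initial, divergent inventories indefinitely.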

Funding

This project is funded by Facebook's Content Moderation Research Award, given to Douglas Guilbeault and Damon Centola. The content is solely the responsibility of the authors and does not necessarily represent the official views of Facebook.