Caught in the Filter: How Social Media Moderation Silences Marginalized Voices

Social media platforms claim to protect users from harm, but for many marginalized communities, those same systems do the opposite. Researchers and advocacy groups have shown that speech by women, people of color, LGBTQ+ people, and religious minorities is disproportionately censored or removed, while harassment against these groups is often ignored (Diaz and Hecht-Felella). Content moderation systems on major social media platforms disproportionately silence marginalized voices because of structural flaws in how those systems are designed and enforced. This report examines the historical context, mechanisms, case studies, and reform proposals related to these disparate outcomes.

Published by Cathy Jiang on March 28, 2026

This inquiry-driven article reflects the author's personal views and aims to enrich discourse on the problem.


In the early days of online forums, users mostly moderated posts within their own communities; the rise of Web 2.0 platforms and the 1996 U.S. Communications Decency Act, however, shifted moderation power to private companies. Since then, disparities have often arisen from narrow definitions of “hate speech,” algorithmic bias, and inconsistent levels of human review (Eberhardt et al.). For example, Facebook once flagged photos of breastfeeding and post-mastectomy bodies as “nudity” while misogynistic abuse remained untouched (Hern). In 2017, civil rights organizations found that Facebook removed Black Lives Matter content while allowing white supremacist threats to stay online (Levin). More recently, Meta admitted to mistakenly mass-removing harmless gay-pride posts even as it failed for months to take down violent anti-LGBTQ videos (Waller). These cases exemplify systemic bias in how content is flagged and removed.

These failures are not random. Platforms rely heavily on automated systems trained on biased datasets, rigid definitions of harmful content, and inconsistent human review. The result is predictable: speech about identity and lived experience is flagged as “risky,” while coded harassment often slips through. At the same time, platforms tend to cite the scale and difficulty of moderation while prioritizing user growth and legal risk. Regulators in the United States have proposed greater transparency and accountability (Diaz and Hecht-Felella), and the EU now requires large platforms to assess systemic risks, including those that could produce demographic disparities. Still, much more can and should be done.

Fixing this problem will require more than minor adjustments. Platforms must be forced to confront how their systems disproportionately harm marginalized users: through greater transparency, stronger protections, and intentional efforts to address algorithmic bias. However, the scale and confidentiality of platform operations make enforcement through mandated reporting difficult. Expanding moderation also raises costs, and jurisdictions in different countries impose conflicting requirements. Despite these challenges, change is necessary. If social media platforms continue to moderate speech this way, they risk reinforcing the very inequalities they claim to challenge. A system that silences marginalized voices while tolerating abuse is not neutral; it is complicit. And in a digital world where visibility shapes power, that distinction matters more than ever.

Works Cited

Diaz, Angel, and Laura Hecht-Felella. “Double Standard in Social Media Content Moderation.” Brennan Center for Justice, New York University School of Law, 4 August 2021, https://www.brennancenter.org/media/7951/download/Double_Standards_Content_Moderation.pdf. Accessed 24 March 2026.

Eberhardt, Jennifer L., et al. “People who share encounters with racism are silenced online by humans and machines, but a guideline-reframing intervention holds promise.” PMC, 9 September 2024, https://pmc.ncbi.nlm.nih.gov/articles/PMC11420153/. Accessed 24 March 2026.

Hern, Alex. “Facebook's changing standards: from beheading to breastfeeding images.” The Guardian, 22 October 2013, https://www.theguardian.com/technology/2013/oct/22/facebook-standards-beheading-breastfeeding-social-networking. Accessed 24 March 2026.

Levin, Sam. “Civil rights groups urge Facebook to fix 'racially biased' moderation system.” The Guardian, 18 January 2017, https://www.theguardian.com/technology/2017/jan/18/facebook-moderation-racial-bias-black-lives-matter. Accessed 24 March 2026.

Waller, Pip. “Meta blames technical error for removal of LGBTQIA+ groups' Facebook posts.” ABC News, 5 January 2025, https://www.abc.net.au/news/2025-01-06/pride-groups-slam-meta-removal-of-facebook-posts/104667198?utm_campaign=abc_news_web&utm_content=link&utm_medium=content_shared&utm_source=abc_news_web. Accessed 24 March 2026.


Cathy Jiang

Policy Media Staff Writer

Yuxuan (Cathy) Jiang is a student at Arizona College Prep High School (Class of 2028) with a strong interest in the intersection of science, mathematics, and public policy. She plans to pursue studies in neuroscience and applied mathematics in college before continuing on to law school, where she hopes to apply analytical thinking and scientific insight to legal advocacy.
