In the early days of online forums, users mostly moderated posts within their own communities; however, the rise of Web 2.0 platforms and the 1996 U.S. Communications Decency Act shifted censorship power to private companies. Since then, disparities have often arisen from narrow definitions of “hate speech,” algorithmic bias, and inconsistent levels of human review (Eberhardt et al.). For example, Facebook once flagged photos of breastfeeding and post-mastectomy bodies as “nudity,” while misogynistic abuse remained untouched (Hern). In 2017, civil rights organizations found that Facebook removed Black Lives Matter content while allowing white supremacist threats to stay online (Levin). More recently, Meta admitted to mistakenly mass-removing harmless gay-pride posts even as it failed for months to take down violent anti-LGBTQ videos (Waller). These cases exemplify the systematic bias in how platforms flag and remove content.
These failures are not random. Platforms rely heavily on automated systems trained on biased datasets, rigid definitions of harmful content, and inconsistent human review. The result is predictable: speech about identity and lived experience is flagged as “risky,” while coded harassment often slips through. At the same time, platforms point to the scale and difficulty of moderation while prioritizing user growth and legal risk. Regulators in the United States have proposed greater transparency and accountability (Diaz and Hecht-Felella), while the EU now requires platforms to assess risks that could lead to demographic disparities. Still, much more can and should be done.
Fixing this problem will require more than minor adjustments. Platforms must be forced to confront how their systems disproportionately harm marginalized users: through greater transparency, stronger protections, and intentional efforts to address algorithmic bias. However, the scale and confidentiality of platform operations make enforcement through mandated reporting difficult. Expanding moderation also raises costs, and jurisdictions in different countries impose conflicting requirements. Despite these challenges, change is necessary. If social media platforms continue to moderate speech this way, they risk reinforcing the very inequalities they claim to challenge. A system that silences marginalized voices while tolerating abuse is not neutral; it is complicit. And in a digital world where visibility shapes power, that distinction matters more than ever.
Diaz, Angel, and Laura Hecht-Felella. “Double Standards in Social Media Content Moderation.” Brennan Center for Justice, New York University School of Law, 4 August 2021, https://www.brennancenter.org/media/7951/download/Double_Standards_Content_Moderation.pdf. Accessed 24 March 2026.
Eberhardt, Jennifer L., et al. “People who share encounters with racism are silenced online by humans and machines, but a guideline-reframing intervention holds promise.” PMC, 9 September 2024, https://pmc.ncbi.nlm.nih.gov/articles/PMC11420153/. Accessed 24 March 2026.
Hern, Alex. “Facebook's changing standards: from beheading to breastfeeding images.” The Guardian, 22 October 2013, https://www.theguardian.com/technology/2013/oct/22/facebook-standards-beheading-breastfeeding-social-networking. Accessed 24 March 2026.
Levin, Sam. “Civil rights groups urge Facebook to fix 'racially biased' moderation system.” The Guardian, 18 January 2017, https://www.theguardian.com/technology/2017/jan/18/facebook-moderation-racial-bias-black-lives-matter. Accessed 24 March 2026.
Waller, Pip. “Meta blames technical error for removal of LGBTQIA+ groups' Facebook posts.” ABC News, 5 January 2025, https://www.abc.net.au/news/2025-01-06/pride-groups-slam-meta-removal-of-facebook-posts/104667198?utm_campaign=abc_news_web&utm_content=link&utm_medium=content_shared&utm_source=abc_news_web. Accessed 24 March 2026.