Protected Speech: Limitations of The First Amendment

The role of social media has only grown since its inception, and the boundaries of what can be posted have been repeatedly tested. As the 2024 election approaches, a line must be drawn somewhere. Murthy v. Missouri, now before the Supreme Court, attempts to draw that line, presenting a challenge to the status quo and potentially to the American Constitution. Our brief examines the intricacies of the case prior to the decision, as the preliminary facts are laid out.

At YIP, nuanced policy briefs emerge from the collaboration of six diverse, nonpartisan students.

Executive Summary

The ongoing Supreme Court case Murthy v. Missouri poses a precedent-setting question about the intersection of government and social media. The decision will help establish the line between freedom of speech and false information. When decided, the case may drive major political change by defining the limits of government intervention. Our brief examines the lead-up to the decision, its potential outcomes, and what is at stake.


With the rise in social media usage among adults and adolescents alike comes the simultaneous risk of misinformation. Balancing what constitutes misinformation against what many believe is free speech was, and continues to be, mired in controversy. This struggle became strikingly apparent during the 2020 election, when social media was the site of heated political discussion, some of it grounded less in fact than in passionate opinion. Free elections are a cornerstone of democracy, and for elections to run their course fairly, it is critical not only that people be allowed a voice for their opinions, ideas, and thoughts, but also that misinformation not run rampant: such misinformation can cloud the opinions of some and unfairly undermine the public's trust in media sources. Chrysalis Wright, a lecturer at the University of Central Florida and an expert on digital disinformation, states that fake news influences "our attitudes, our beliefs," and our "actual behavior," suggesting that although fake news does not directly affect elections, it can indirectly skew public opinion and undermine the role of the fourth estate (the press) in providing reliable news that allows the public to form their own opinions.

This issue reached its pinnacle in October 2023, when the Supreme Court agreed to hear Murthy v. Missouri, a case that aims to determine whether the government may press social media platforms to regulate information in order to combat misinformation. Prior to this case, in May 2022, "the attorneys general of Missouri and Louisiana…filed a lawsuit in the US District Court for the Western District of Louisiana alleging that federal government officials violated the First Amendment" by encouraging social media companies to remove content they deemed misinformation. As of this writing, the case is being actively argued.


In the past decade, misinformation and disinformation online (on social media, blogs, and podcasts) and on radio and mainstream news networks have affected our ability to seek public information and maintain a stable democracy. This inaccurate information has produced misunderstandings among the US population, such as the perception that elections are rife with fraud or that vaccines are unsafe. The government, non-profit agencies, media and technology companies, and citizen groups have made a concerted effort to combat misinformation. These efforts include issuing media corrections, retracting wrongful information, fact-checking news and public speeches, and consumers holding news organizations and sources accountable for their content. This push has resulted in a direct collision with free speech protections.

Murthy v. Missouri, initially filed as Missouri v. Biden, is a landmark Supreme Court case examining how government officials may interact with social media companies over content moderation practices meant to fight misinformation and disinformation. The case comes amid increasing scrutiny of government influence on social media platforms and concerns about free speech. Reacting to the Biden administration's actions, the states of Missouri and Louisiana have questioned the government's authority to pressure social media platforms like Twitter and Facebook to crack down on misinformation. The original lawsuit alleged that the administration forced these platforms to censor certain views, particularly regarding COVID-19 and election integrity. The states have argued that such actions violate First Amendment protections against government restrictions on free speech.

The case highlights the ongoing debate over the government's role in regulating or influencing digital and social media spaces, especially public discussion on forums. It also raises critical questions about the limits of government efforts to combat misinformation and the private sector's potential to over-moderate or suppress free speech. Turning on interpretations of the First Amendment in the context of modern communication technology, the scope of state action in private activity, and the contours of permissible government participation, the Supreme Court's decision will have far-reaching effects on how content moderation policies are designed and implemented in coordination with state agencies.

Policy Problem

This case has also brought attention to critical ethical dilemmas raised by the government's role in content moderation. On one hand, governments are responsible for protecting their citizens from harmful content, such as hate speech, misinformation, and incitement to violence, particularly in the digital world, where such content can spread rapidly through social media. On the other, government intervention in the content moderation decisions of social media platforms raises concerns about freedom of speech and censorship, which could lead to the suppression of dissenting voices and of legitimate discourse, among other consequences. For one, there remains a risk of governments abusing this power to silence political opponents or manipulate public opinion, which could further erode public trust in democratic institutions.

Additionally, governmental influence over content moderation decisions could create several legal dilemmas. One key concern is privacy: increased government intervention may require the collection and analysis of vast amounts of user data to identify and regulate perceived objectionable content. Such data collection could infringe upon the right to privacy grounded in the 14th Amendment, raising concerns about surveillance and governmental overreach. Another vital concern is transparency and accountability: citizens should be clearly informed of the criteria and processes by which governmental agencies identify potentially harmful content, and those agencies should be held responsible for their decisions and actions in this realm. Clear and robust legislation could help prevent such governmental overreach.

Policy Options

At the time of writing, this case is being actively argued before the Supreme Court. Until a decision is released, the public and the parties involved can only speculate about the ruling's implications. The lower courts previously issued an injunction that barred communication between federal agencies and social media companies; that restriction could be dissolved following the case.

Two sides of the public debate have taken shape over whether the government can intervene in online discourse. It remains unclear whether the government may participate in the "censorship" of misinformation. What is clear is that social media companies are private entities. In recent years, these companies have sought government collaboration as part of their own commitments to eliminating election interference and misinformation. Because the scope of the Supreme Court case is limited to the government's potential violations of the First Amendment, the obvious alternative beyond the case is for the media companies, following their own policies, to carry on as they have been, regulating content they deem offensive or obscene. Though they would have to do this independently, without government consultation, a balance is possible in which freedom of speech and accurate information coexist.

Yet it must be noted that much of this verification work depends on relations with federal agencies. For instance, ahead of the 2024 election, the US Department of State had been actively consulting with Facebook officials in preparation for misinformation and hacking threats. Sen. Mark Warner (D-VA) of the Senate Select Committee on Intelligence has said that many of the improvements stem from government-platform collaboration against foreign malign influence operations. Whether or not these social media companies persist in their own fight against misinformation, the effort is sure to be more difficult without government consultation.

Given these two scenarios, two outcomes remain: the Supreme Court sides either with the Biden administration or with the states. If the government wins the case, the regulatory relationship will resume as it once was, as in the collaboration against COVID misinformation. Little would change, except that more government-provided information would enter digital discourse, and potentially greater legislation would follow once intervention is formally sanctioned.

However, if Missouri and the other states prevail in their argument that the government cannot regulate even false speech, social media companies will bear greater responsibility for deciding what speech is acceptable and what is not. How they choose to draw that line is sure to generate more cases amid partisan politics and contested boundaries of fair speech.

The solutions, and a clear path forward, are hard to chart. Regardless of the final decision, social media companies hold incredible weight in the future of democracy and digital relations.


Oral arguments in Murthy v. Missouri were recently heard by the Supreme Court. A collection of conservative states and social media users filed the lawsuit against the administration, claiming that the government had essentially forced web companies to silence users in violation of the First Amendment. They specifically name certain actions as coercive, such as providing social media businesses with information on elections and vaccines, asking the companies to dispel false information about the COVID-19 vaccine, and asking the companies to remove profiles that mimic President Biden's family. Several justices were worried about how the government handled this case, while others were concerned about how a ruling against it would affect routine government communications with private parties.

As the case awaits adjudication before the U.S. Supreme Court, its outcome could affect how social media is used in elections and other settings where misinformation can spread. The decision may determine the future of how social media is used politically. Although supervision from governmental entities such as presidential administrations could diminish the spread of misinformation, it could also introduce political biases into the information we consume on social media and infringe on the First Amendment right of free speech. On the other hand, government supervision could improve the accuracy of news spread through social media. As the United States awaits the Supreme Court's final decision, it is crucial to understand how this case could alter how we view politics and news on social media.


The Institute for Youth in Policy wishes to acknowledge Michelle Liou, Joy Park, Nolan Ezzet, and other contributors for developing and maintaining the Policy Department within the Institute.


  1. Chotiner, Isaac. "The Evolving Free-Speech Battle between Social Media and the Government." The New Yorker, July 15, 2023.
  2. Cornell Law School. "14th Amendment." Legal Information Institute, May 17, 2018.
  3. Fergusson, Grant, and Tom McBrien. "Murthy v. Missouri and the Threat of Election Disinformation." March 21, 2024.
  4. "Free Speech or Free Rein? How Murthy v. Missouri Became a Soapbox for Misinformation Advocacy." Center for American Progress, 2024.
  5. Inserra, David. "Summary of Murthy v. Missouri Oral Arguments." March 19, 2024.
  6. Mongrain, Philippe. "Suspicious Minds: Unexpected Election Outcomes, Perceived Electoral Integrity and Satisfaction With Democracy in American Presidential Elections." NCBI, 2023.
  7. "Murthy v. Missouri." Oyez, 2024.
  8. Samples, John. "Why the Government Should Not Regulate Content Moderation of Social Media." Cato Institute, April 9, 2019.
  9. "Transparency Is Essential for Effective Social Media Regulation." Brookings, November 1, 2022.
  10. "Using Psychological Science to Understand and Fight Health Misinformation." American Psychological Association, accessed April 18, 2024.

Christine Li

Policy Analyst

Christine is a social policy writer for YIP. Raised in Brooklyn, New York, she loves going on walks and watching late night television shows.

Spencer Samet

Policy Analyst

Spencer Samet is a student at Windward School in Los Angeles, California. He is passionate about current events and plans to pursue political science. Spencer works as a Technology Policy Co-Lead for YIP and is an active member of his high school's debate team.

Natalie Gelman

Policy Analyst

Tanya Mahesh

Fall 2023 Fellow

Tanya Mahesh is a high school student from Pearland, Texas, with a keen interest in the intersection of business, technology, and policy.

Vaishnavi Moturi

Policy Analyst

Vaishnavi Moturi is a student at Centennial High School and a technology policy analyst at the Institute for Youth in Policy. She is the founder and director of Hello CitizenZ, where she seeks to help create a generation of global citizens while developing technologies that improve public health systems and society’s collective health.

Suchir Paruchuri

Policy Analyst