Democracy x AI: Repercussions and Regulation

This brief examines the effects of AI on elections, voting, and democracy, and proposes policies to regulate the use of AI without further harm to the political process.

At YIP, nuanced policy briefs emerge from the collaboration of six diverse, nonpartisan students.


Executive Summary

Over the past year, the public has faced constant exposure to AI-generated media of all kinds. From deepfake imagery to AI-generated work produced through ChatGPT, many have found it hard to avoid the ongoing conversation about AI's increasing prevalence. Although many find these applications exciting, the lack of understanding of where these algorithms stand has also raised concern. This brief examines AI's implications for elections and democracy, along with current policies intended to prevent potential harm.


With the rise of technology and online social presence, there are both demonstrated and ongoing concerns about AI's effects on democratic institutions worldwide. The concept of national security has already evolved to include cyberspace, reflecting the degree to which digital systems intersect with the global economy and politics. However, government entities remain insufficiently informed about AI's new role in political spaces.

Specifically, AI poses serious risks of spreading misinformation, chiefly through deepfakes and other forms of fabricated content used to harm candidates and politicians alike. A deepfake uses AI to generate new audio or video, replacing one person's likeness in a target video with another's to produce manipulated media. Deepfakes can be used to create misleading videos, such as false endorsements or messages that can be misinterpreted as fact.

As the technology develops, its problems are becoming more prevalent. One such problem arises when a public figure's likeness is nonconsensually manipulated in AI-created content. With many people having fallen into the trap of believing these often convincing media, a new question appears: how can the public determine the authenticity of public statements?

This concern is especially acute with the 2024 US election approaching. As polling indicates that the presidential race could be incredibly close, campaigners may fine-tune their targeting methods with the help of AI, appealing to swing voters using microdata collected by commercial data brokers. The use of AI in political campaigns has already begun: the Republican National Committee used this technology to produce a video warning of fake dystopian crises in an “AI-generated look into the country's possible future if Joe Biden is re-elected in 2024.” Although this piece of media did include an AI disclaimer, there is no telling how this standard will be upheld in the future, as there is no legal requirement for disclosure. Technological advancements have merits and risks, but it is indisputable that how the world approaches them in the next election cycle and onward will be transformed.


Although it is a contemporary tool, artificial intelligence has a long history in our nation. AI was born in the 1950s, when computer scientist Arthur Lee Samuel created a program that allowed a human participant to play checkers against a computer. Then, in 1966, John McCarthy, widely regarded as the father of AI, orchestrated a series of chess games against competitors in Russia played by telegraph. More recently, the public's and government's perception of AI has evolved.

With increased access to AI, this technology has come to be seen as a threat to the future of our workforce, democracy, and society.

According to a study conducted by the Pew Research Center, 37% of Americans are more concerned than excited about AI, citing worries about eliminated jobs, privacy, and human connection. Majorities of Americans also believe artificial intelligence is to blame for declining faith in elections (57%) and institutions (56%), and see it as a danger to democracy (52%). As AI remains a prevailing tool in our nation, its role in our democracy is notable and will continue to shape upcoming elections.

Despite the danger, there has been little in-depth investigation of how AI will affect elections, given how recently its use became widespread. That has not stopped well-established political groups from using AI in their political endeavors. For instance, in a recent post on X, a user falsely announced that President Biden was creating a national draft to support the war effort in Ukraine. The video was later revealed to be a deepfake, but that did not stop the 8 million users and counting who interacted with it. Similarly, a deepfake emerged of Sen. Elizabeth Warren (D-MA) falsely stating that Republicans should not be allowed to vote in 2024. These videos have fueled misinformation and, at times, intentional propaganda. In the future, malicious actors may use generative AI to suppress votes or bypass barriers that keep elections secure.

AI’s threat to elections goes beyond intentional harm or deception. Among other uses, several election offices rely on algorithmic tools to track voter registration and validate signatures on mail-in ballots. Algorithmic decision-making on these issues is susceptible to bias, including racial discrimination, as is human decision-making. Some authorities are also becoming interested in employing generative AI to support voter education; this could accelerate processes but carries significant risks of erroneous and unfair voter outreach.

Though there has been a prevailing negative public sentiment about AI, experts also believe AI can strengthen democracy. 

Whether in the US or other democracies worldwide, artificial intelligence could support election workers and authorities in the vital task of monitoring the polls. AI may improve election administration procedures, increasing effectiveness, dependability, and security. Under close observation, AI might also spot suspicious patterns in voter lists and voting equipment, preventing fraud or disenfranchisement. AI can also aid the counting process: tabulators can scan paper ballots faster than poll workers, cutting down the time needed to declare election results and carry out recounts.

Additionally, AI offers many opportunities to level the playing field in campaigns and to democratize and enhance them. AI technologies may make it financially easier for underfunded and first-time candidates to enter a race, and AI-powered simplification might strengthen digital fundraising systems. Candidates can also use tailored ads to reach undecided voters more successfully, leaving voters better informed about their alternatives when casting their ballots. Used efficiently and transparently, AI could also help election administration personnel identify and report damaging hate speech that unjustly tilts the playing field and influences voter decision-making.

Policy Problems

A. Voters

Voters may feel misinformed and develop greater mistrust of the press and media in general, fearing that the information they receive has been tampered with by AI, especially by deepfakes, which often convincingly depict a candidate saying something that turns voters off immediately.

Voters are especially susceptible to believing these deepfakes in the months and days leading up to an election, when there is insufficient time to fact-check.

B. Candidates

In this age of AI, candidates are forced to spend more on monitoring social media platforms to ensure deepfakes do not taint their reputations and, more importantly, do not misrepresent the principles and policies they stand for.

One such incident occurred in Slovakia during its October 2023 election, when Michal Šimečka, leader of the pro-Western Progressive Slovakia party (1), was maliciously represented in a deep fake as rigging the election. Such occurrences result in the proliferation of an increasingly toxic political environment, pitting candidates against each other and often resulting in unsubstantiated claims about the opposing candidates’ views, further jeopardizing the prospects of a fair and unbiased election.  

Further compounding the issue are bots and social media algorithms that amplify the viewership and visibility of viral deepfakes, which often go unchecked and can inflict unjust, sometimes irreversible damage on candidates’ reputations.

C. Democracy Itself

Ensuring the integrity of our electoral system is critical, and government indifference to the proliferation of misinformation through AI in social media could potentially erode the public’s trust in democratic processes, fostering widespread skepticism and potentially discouraging voter turnout. 

The importance of such regulation has not been lost on the government or social media platforms. For instance, President Joe Biden’s administration recently issued an executive order “encouraging the responsible development of AI” (3), requiring AI developers to release safety data to promote AI's safe development. Furthermore, Meta recently announced that its platforms, Instagram and Facebook, will begin requiring political ads to state whether AI was used in creating them, a step toward regaining the public’s trust in political ads and curbing rampant misinformation through deepfakes.

Policy Options

While AI may offer some benefits to campaigns and elections, the intersection of AI and democracy presents unprecedented challenges, with potential risks ranging from simple misinformation to threats against the integrity of our elections. According to the Brookings Institution, AI could even out the financial disparities of running campaigns; future developments may even reduce the need for political consultants, since AI could complete mundane and time-consuming tasks at lower cost. However, in the near future we may need to combat deepfake images, audio, and videos that perpetuate fraud and scandal, further complicating campaigns and voters' choices on election day. As this groundbreaking technology continues to evolve, it is imperative to establish concrete policies to protect the democratic process.

1. Enforcement of Greater Transparency Rules:

With the prevalence of AI in today’s society, elections have become vulnerable to AI-driven disinformation. Current forms of AI make it difficult for ordinary citizens to differentiate between real and fake content, making transparency disclaimers all the more critical. These disclaimers would inform viewers whether AI is being used in a campaign, spotlighting the potential harms AI can cause. This measure would require any campaign using AI to disclose how AI is integrated into its operations. Beyond simple disclaimers on political ads, political advertisers should also disclose any AI algorithms they employ, so that viewers can understand how they became part of an ad's target audience. This increase in transparency would make the public more aware of AI use within the election process, leading to a more informed and trusting population. Alongside these rules, other factors would significantly increase their effect:

A. Government-Led Discourse

In partnership with the disclaimers intended to increase election accountability, governments should actively engage in discourse on the dangers of AI in political campaigning. A government-led public awareness campaign would underscore the risks posed by deepfakes, misinformation, and manipulative content. An active government presence in public education about these harms is crucial to fostering an informed population capable of analyzing AI’s impact on their media and democratic process, both for this upcoming election cycle and well into the future. President Biden issued the first US AI executive order in October, which implemented civil rights guidance and a government-led narrative on the uses of AI, covering both the “promise and peril” of its capabilities. This is a significant first step toward public awareness, but the conversation about AI should become more accessible and active.

B. Third-Party Regulation: 

To provide a check on the proposed transparency disclaimers, third-party regulation should be implemented to audit election campaigns and their use of AI systems. These regulators would focus on media platforms, especially those with the widest public reach, and their presence would ensure fairness, security, and compliance with established transparency rules. Such regulation would also help combat unofficial campaign media, including material from hostile foreign entities. Overall, these independent audits would enhance the credibility of the democratic process and work toward eliminating the public's exposure to manipulative content.

2. Potential Ban on AI in Campaigning Materials:

The threats AI poses to the democratic process may be significant enough to justify a ban on AI in campaign materials. This measure would eliminate invasive AI capabilities, trust issues, and the threat of deepfakes. The extensive capabilities of AI are laid out in an article by a Harvard law professor, who cites a congressional hearing at which OpenAI CEO Sam Altman admitted concern about AI's potential to harm voters. The article discusses the possible implementation of an AI engine called “Clogger,” which could manipulate voters and deploy “microtargeting techniques” more powerfully than ever. As noted above, however, such a ban carries significant drawbacks: it may hinder the use of AI to support lesser-known or less-funded campaigns, potentially exacerbating existing inequalities. When weighing the pros and cons, any decision must balance the democratic principles of fairness and equal representation against the very apparent harms of AI. Currently, the Protect Elections from Deceptive AI Act, introduced in Congress by Senator Klobuchar, aims to ban the distribution of deceptive AI-generated audio or media relating to candidates for federal office.

3. Ethical AI Certification: 

This option would involve a government-passed policy laying out essential rules that AI engines used in the campaigning process must abide by. These guidelines would screen out AI engines capable of producing deepfake or manipulative content, creating a more secure campaign cycle. Approved engines would have to receive government certification, after being proven to adhere to the ethical guidelines, before candidates could use them. An additional factor would reinforce this policy's reliability:

Regular Audits: A government-led group would monitor campaigns to ensure that any AI used comes solely from pre-approved engines. A regular audit schedule for reviewing certified AI systems would keep the engines up to date and prevent their abuse. Overall, regular audits would be crucial to maintaining the integrity of this policy option.

4. Passing Privacy Legislation:

While some of the policy options proposed above can narrowly address the issues of AI, it is just as important that Congress pass legislation addressing the root causes of AI's effects on democracy. Elizabeth Meehan of Tech Policy Press has noted that transparency has various flaws: even individuals given full information through greater transparency often lack the background to make sense of it. It is therefore important to supplement these narrower solutions with legislation that allows industries to deploy AI safely without stifling innovation. President Biden's executive order on Safe, Secure, and Trustworthy Artificial Intelligence is already working toward new and improved standards; its key provisions include requiring developers to share safety test results and other critical information with the federal government, and establishing best practices for detecting AI-generated content to protect citizens from AI-perpetrated fraud and deception. The executive order also calls on Congress to pass bipartisan privacy legislation to protect all Americans, a call to action that may instill a sense of urgency around AI. Key privacy legislation with provisions on algorithmic accountability includes the SAFE DATA Act, sponsored by Senator Wicker, and the Consumer Online Privacy Rights Act, sponsored by Senator Cantwell.


Several states have already introduced legislation on AI in elections and campaigns, and four bills have been enacted across three states: Michigan, Minnesota, and Washington. Michigan’s HB 5141 requires disclosure on political ads generated by AI, and Michigan’s HB 5144 prohibits the publication of materially deceptive media within 90 days of an election in which a candidate will appear on the ballot, unless it includes a disclosure that the media has been manipulated. Minnesota’s HB 1370 prohibits the publication of deepfake media intended to influence an election within 90 days of that election. Washington’s SB 5152 requires disclosure of synthetic media used to influence an election. State legislators are taking action against the harmful effects of AI in elections. If other governments enforce proper public and political usage of AI, there is a chance for AI to become a powerful force for advancement and good, but only if it is fairly regulated in time.


The Institute for Youth in Policy wishes to acknowledge Michelle Liou, Joy Park, Nolan Ezzet, and other contributors for developing and maintaining the Policy Department within the Institute.

Works Cited:

  1. One Hundred Fifteenth Congress. Putin’s Asymmetric Assault on Democracy in Russia and Europe: Implications for U.S. National Security, 2018.
  2. Galston, William A. “Is Seeing Still Believing? The Deepfake Challenge to Truth in Politics.” Brookings, 8 Jan. 2020,
  3. The Repressive Power of Artificial Intelligence. (n.d.). Freedom House.
  4. Thompson, Alex. “First Look: RNC Slams Biden in AI-Generated Ad.” (n.d.).
  5. The GOP’s Latest Attack on Biden isn’t scary. How it was made is. MSNBC, December 17, 2021.
  6. How AI will transform the 2024 elections | Brookings. (2023, June 12). Brookings.
  7. “Watch: RNC Responds to Biden’s Reelection Bid with Ai-Generated Ad | CNN Politics.” CNN, April 25, 2023.
  8. Rascoe, A. (2023, July 30). How real is the threat of AI deepfakes in the 2024 election? NPR.
  9. Johnson, D., & Johnson, A. (2023, June 15). What are deep fakes? How fake AI-powered audio and video warps our perception of reality. Business Insider.
  10. Brancaccio, D., Conlon, R., & Shin, D. (2021, October 22). Are tech companies influencing politics more than governments? Marketplace.
  11. Posard, M. N. (2020, October 1). From Consensus to Conflict: Understanding Foreign Measures Targeting U.S. Elections. RAND.
  12. Wanless, A., & Walters, L. (2020, October 13). How Journalists Become an Unwitting Cog in the Influence Machine. Carnegie Endowment for International Peace.
  13. Lee, J.A.N. “Arthur Lee Samuel.” Computer Pioneers. Accessed December 17, 2023.
  14. “John McCarthy: Homage to the Father of Artificial Intelligence .” Teneo.Ai, November 16, 2023.
  15. “New Survey: Americans Think AI Is a Threat to Democracy, Will Become Smarter than Humans and Overtake Jobs, Yet Believe Its Benefits Outweigh Its Risks.” Stevens Institute of Technology, November 15, 2021. Accessed December 17, 2023.
  16. Rainie, Lee, Cary Funk, Monica Anderson, and Alec Tyson. “1. How Americans Think about Artificial Intelligence.” Pew Research Center: Internet, Science & Tech, March 17, 2022. Accessed December 17, 2023.
  17. Panditharatne, Mekela. “How AI Puts Elections at Risk - and the Needed Safeguards.” Brennan Center for Justice, September 19, 2023. Accessed December 17, 2023.
  18. Nicol Turner Lee, Joseph B. Keller, Chinasa T. Okolo, Darrell M. West, Bill Baer, Obioha Chijioke Nicol Turner Lee, and Gregory S. Dawson James S. Denford. “AI Can Strengthen U.S. Democracy-and Weaken It.” Brookings, November 21, 2023. Accessed December 17, 2023.
  19. “Summary Artificial Intelligence (AI) in Elections and Campaigns.” National Conference of State Legislatures, December 7, 2023. Accessed December 17, 2023.
  20. Regulating AI Deepfakes and Synthetic Media in the Political Arena. (2023, December 12). Brennan Center for Justice.
  21. Schneier, B. (2020, January 7). The Future of Politics Is Bots Drowning Out Humans. The Atlantic.
  22. Klepper, D. (2023, November 8). Meta updates AI deepfake policy for political ads, 2 months after Google did it. Fortune.
  23. How AI Puts Elections at Risk — And the Needed Safeguards. (n.d.). Brennan Center for Justice.
  24. Fung, A., Lessig, L., & Us, C. (2023, June 6). How AI Could Take Over Elections--And Undermine Democracy. Scientific American.
  25. Berman, R. (2023, May 27). AI Could Save Politics—If It Doesn't Destroy It First. The Atlantic.
  26. AI can strengthen U.S. democracy—and weaken it | Brookings. (2023, November 21). Brookings.
  27. Harper, A., Gehlen, B., & Pereira, I. (2023, November 20). AI use in political campaigns raising red flags into the 2024 election. ABC News.
  28. Sesto, G. (2023, August 25). Ethics And Transparency In AI-Powered Political Advertising. Forbes.
  29. Meehan, E. (2023, October 24). Transparency Won’t Be Enough for AI Accountability. Tech Policy Press.
  30. The AI regulatory toolbox: How governments can discover algorithmic harms | Brookings. (2023, October 9). Brookings.
  31. Fung, A. (n.d.). How AI could take over elections – and undermine democracy. The Conversation.
  32. Field, H., & Feiner, L. (2023, October 30). Biden issues U.S.’ first AI executive order, requiring safety assessments, civil rights guidance, research on labor market impact. CNBC.
  33. Ward, B. P. W. P. (2023, July 12). What does AI need? A comprehensive federal data privacy and security law. International Association of Privacy Professionals.
  34. House, W. (2023, October 30). FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The White House.

Spencer Samet

Policy Analyst

Spencer Samet is a student at Windward School in Los Angeles, California. He is passionate about current events and plans to pursue political science. Spencer serves as a technology policy Co-Lead for YIP and is an active member of his high school’s debate team.

Naomi McKenna

Fall 2023 Fellow

Naomi McKenna is a high school student at Atholton High School in Columbia, Maryland, who will graduate in 2024.

Natalie Gelman

Policy Analyst

Vaishnavi Moturi

Policy Analyst

Vaishnavi Moturi is a student at Centennial High School and a technology policy analyst at the Institute for Youth in Policy. She is the founder and director of Hello CitizenZ, where she seeks to help create a generation of global citizens while developing technologies that improve public health systems and society’s collective health.

Katelin Wong

Vice President, Policy Media Publishing

Katelin attends Cornell University, majoring in public policy, and intends to pursue a career in international law.