This brief examines the effects of AI on elections, voting, and democracy and proposes policies to regulate the use of AI without further harm to the political process.
Executive Summary
Over the past year, the public has been exposed to a constant stream of AI-generated media. From deepfake imagery to text produced with ChatGPT, the growing prevalence of AI has been hard to ignore. While many find these applications exciting, the limited public understanding of how these algorithms work has also raised concern. This brief examines AI's implications for elections and democracy and surveys current policies aimed at preventing potential harm.
Overview
With the rise of technology and online social presence, there are both demonstrated and ongoing concerns about AI's effects on democratic institutions worldwide. The concept of national security has already expanded to include cyberspace, the layer where digital technology intersects with the global economy and politics. Government entities, however, still lack sufficient understanding of AI's new role in political spaces.
Specifically, AI poses significant risks of spreading misinformation, primarily through deepfakes and other forms of fabricated content used to harm candidates and politicians alike. Deepfakes use AI to generate new audio and video, replacing one person's likeness in a target recording with another's to produce manipulated media. They can be used to create misleading videos, such as false endorsements or statements that viewers may mistake for fact.
As the technology develops, these problems are becoming more prevalent. One is that AI-generated content can exploit the likeness of candidates and other public figures without their consent. With so many people already fooled by these often convincing fabrications, a new question arises: how can the public determine the authenticity of public statements?
This is especially concerning with the 2024 US election approaching. With polling indicating that the presidential race could be extremely close, campaigns may use AI to fine-tune their targeting of swing voters, nudging them toward a final decision using microdata collected by commercial data brokers. The use of AI in political campaigns has already begun: the Republican National Committee used the technology to produce an attack ad depicting fake dystopian crises in an “AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024.” Although that ad included an AI disclaimer, there is no telling whether this standard will hold, as there is no legal requirement for disclosure. Technological advances carry both merits and risks, but it is indisputable that they will transform how campaigns and elections are conducted in the next cycle and beyond.
History
Although it is often treated as a contemporary tool, artificial intelligence has a long history in the United States. The field's origins trace to the 1950s, when computer scientist Arthur Lee Samuel created a program that allowed a human to play checkers against a computer. Then, in 1966, John McCarthy, often called the father of AI, orchestrated a series of chess games against competitors in Russia by telegraph. More recently, public and government perceptions of AI have evolved.
As access to AI has grown, the technology has increasingly come to be seen as a threat to the future of the workforce, democracy, and society.
According to a Pew Research Center study, 37% of Americans are more concerned than excited about AI, citing worries about job losses, privacy, and the loss of human connection. Majorities of Americans also believe artificial intelligence contributes to declining faith in elections (57%) and in institutions (56%) and poses dangers to democracy (52%). As AI continues to be a prevailing tool in the United States, its role in our democracy is notable and will continue to shape upcoming elections.
Despite the danger, there has been little in-depth investigation of how AI will affect elections, given how recently its use has become widespread. That has not stopped well-established political actors from incorporating AI into their efforts. For instance, in a recent post on X, a user falsely announced that President Biden was instituting a national draft to support the war effort in Ukraine. The video was later revealed to be a deepfake, but not before more than 8 million users had interacted with it. Similarly, a deepfake of Sen. Elizabeth Warren (D-MA) showed her falsely stating that Republicans should not be allowed to vote in 2024. These videos have fueled misinformation and, at times, intentional propaganda. In the future, malicious actors may use generative AI to suppress votes or circumvent safeguards that keep elections secure.
AI's threat to elections goes beyond intentional harm or deception. Among other things, several election offices use algorithmic tools to maintain voter registration rolls and validate signatures on mail-in ballots. Algorithmic decision-making on these tasks is susceptible to bias, including racial discrimination, as is human decision-making. Some authorities are also growing interested in using generative AI to support voter education, which could speed up processes but carries significant risks of erroneous and unfair voter outreach.
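To make the bias concern concrete, the sketch below shows the kind of fixed-threshold rule a signature-verification tool might apply to mail-in ballots. The class, the 0.80 cutoff, and the scoring model are hypothetical illustrations, not any election office's actual system.

```python
# Illustrative sketch only: a simplified threshold rule of the kind a signature
# verification tool might apply. Names and the 0.80 cutoff are assumptions.
from dataclasses import dataclass

@dataclass
class MailBallot:
    voter_id: str
    similarity: float  # model's similarity score between the ballot signature
                       # and the signature on file, from 0.0 to 1.0

MATCH_THRESHOLD = 0.80  # assumed cutoff; real offices tune this differently

def sort_ballots(ballots):
    """Accept ballots above the cutoff; flag the rest for human cure review."""
    accepted, flagged = [], []
    for ballot in ballots:
        if ballot.similarity >= MATCH_THRESHOLD:
            accepted.append(ballot)
        else:
            flagged.append(ballot)
    return accepted, flagged

# The bias risk lives in the similarity model, not in this loop: if the model
# scores some groups' genuine signatures lower on average (for example, voters
# who sign in a non-Latin script or whose signatures change with age), a single
# fixed threshold flags their valid ballots more often.
```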
Though public sentiment about AI has been largely negative, experts also believe AI can strengthen democracy.
Whether in the US or in other democracies worldwide, artificial intelligence could support election workers and authorities in the vital task of monitoring the polls. AI could, for example, improve election administration procedures, increasing their effectiveness, dependability, and security. Under close human oversight, it could also spot suspicious patterns in voter lists and voting equipment, helping prevent fraud or disenfranchisement. AI can also assist the counting process: tabulators can scan paper ballots faster than poll workers, shortening the time needed to declare results and carry out recounts.
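As a hedged illustration of how such pattern-spotting might work in practice, the short sketch below screens per-precinct voter-roll removals against the median precinct and flags outliers for human review. The function name, the five-times-median rule, and the sample counts are assumptions made for illustration only.

```python
# Illustrative sketch only: flag precincts whose voter-roll removals sit far above
# the median precinct, for human review rather than automatic action.
from statistics import median

def flag_unusual_removals(removals_by_precinct: dict, multiple: float = 5.0) -> list:
    """Return precincts whose removal counts exceed `multiple` times the median."""
    med = median(removals_by_precinct.values())
    if med <= 0:
        return []
    return [p for p, count in removals_by_precinct.items() if count > multiple * med]

# Hypothetical counts of registrations removed in one month:
removals = {"P-101": 12, "P-102": 9, "P-103": 14, "P-104": 180, "P-105": 11}
print(flag_unusual_removals(removals))  # ['P-104'] is flagged for a closer look
```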
Additionally, AI offers many opportunities to level the playing field in campaigns and to democratize and enhance them. AI tools could lower the financial barriers that keep underfunded and first-time candidates from entering races, and AI-powered automation could streamline digital fundraising. Candidates could also use tailored ads to reach undecided voters more effectively, leaving voters better informed about their options when they cast their ballots. Used efficiently and transparently, AI could even help election administrators identify and report damaging hate speech that unfairly tilts the playing field and sways voter decisions.
Policy Problems
A. Voters
Voters may feel misinformed and grow more distrustful of the press and media in general, fearing that the information they receive has been tampered with by AI, especially deepfakes, which often convincingly depict a candidate saying something that immediately alienates voters.
Voters are especially susceptible to believing deepfakes in the months and days leading up to an election, when there is insufficient time to fact-check.
B. Candidates
In this age of AI, candidates are forced to spend more on monitoring social media platforms to ensure that deepfakes do not taint their reputations and, more importantly, do not misrepresent the principles and policies they stand for.
One such incident occurred in Slovakia shortly before its September 2023 parliamentary election, when Michal Šimečka, leader of the pro-Western Progressive Slovakia party (1), was maliciously depicted in a deepfake as discussing how to rig the election. Such occurrences foster an increasingly toxic political environment, pitting candidates against each other and often producing unsubstantiated claims about opponents' views, further jeopardizing the prospects of a fair and unbiased election.
Further compounding the issue are bots and social media algorithms that amplify the reach and visibility of viral deepfakes, which often go unchecked and can cause unjust, sometimes irreversible, damage to candidates' reputations.
C. Democracy Itself
Ensuring the integrity of our electoral system is critical. Government indifference to the proliferation of AI-driven misinformation on social media could erode the public's trust in democratic processes, fostering widespread skepticism and potentially discouraging voter turnout.
The importance of such regulation has not been lost on the government or on social media platforms. For instance, President Joe Biden's administration recently issued an executive order “encouraging the responsible development of AI” (3), requiring AI developers to release safety data. Furthermore, Meta recently announced that its platforms, Instagram and Facebook, will begin requiring political ads to state whether AI was used in their creation, a step toward regaining the public's trust in political advertising and reducing rampant misinformation spread through deepfakes.
Policy Options
While AI may offer some benefits to campaigns and elections, the intersection of AI and democracy presents unprecedented challenges, with risks ranging from simple misinformation to threats to the integrity of elections themselves. According to the Brookings Institution, AI could even out the financial disparities of running a campaign; future developments may reduce the need to hire political consultants, since AI could complete mundane and time-consuming tasks at lower cost. However, the threats we may soon need to combat include deepfake images, audio, and video that fuel fabricated scandals, complicate campaigns, and distort how voters decide on election day. As this groundbreaking technology continues to evolve, it is imperative to establish concrete policies to protect the democratic process.
1. Enforcement of Greater Transparency Rules:
With the prevalence of AI in today's society, elections have become vulnerable to AI-driven disinformation. Current forms of AI make it difficult for ordinary citizens to distinguish real content from fake, which makes transparency disclaimers all the more critical. These disclaimers would tell viewers whether AI was used in a campaign, spotlighting the potential harms AI can cause. The measure would require any campaign using AI to disclose how the technology was integrated into its work. Going beyond simple disclaimers on political ads, advertisers should also disclose the AI algorithms they employ, so viewers can understand how they became part of an ad's target audience (an illustrative sketch of what such a machine-readable disclosure might look like appears at the end of this policy option). This increase in transparency would make the public more aware of AI's use in the election process, leading to a better-informed and more trusting population. Alongside these rules, two additional measures would significantly increase their effect:
A. Government-Led Discourse
Alongside disclaimers that increase election accountability, governments should actively engage in public discourse on the dangers of AI in political campaigning. A government-led awareness campaign would underscore the risks posed by deepfakes, misinformation, and manipulative content. An active role in public education about these harms is crucial to fostering an informed population capable of assessing AI's impact on their media and democratic processes, both in this election cycle and well into the future. President Biden issued the United States' first AI executive order in October 2023, which included civil rights guidance and a government-led narrative on the uses of AI, acknowledging both the “promise and peril” of its capabilities. This is a significant first step toward public awareness, but the conversation around AI should become more accessible and active.
B. Third-Party Regulation:
To provide a check on the proposed transparency disclaimers, third-party regulation should be implemented to audit election campaigns and their use of AI systems. These regulators would focus on media platforms, especially those most widely viewed by the public, ensuring fairness, security, and compliance with established transparency rules. Such regulation would also help address unofficial campaign media, including content pushed by hostile foreign entities. Overall, independent audits would enhance the credibility of the democratic process and reduce the risk of exposing the public to manipulative content.
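As a rough sketch of how the transparency rules in this policy option and the third-party audits above could fit together, the example below imagines a machine-readable AI-use disclosure attached to a political ad and a minimal compliance check an independent auditor might run. The field names, the compliance rule, and the sample record are hypothetical and do not reflect any existing standard or platform schema.

```python
# Hypothetical example only: a machine-readable AI-use disclosure for a political
# ad and a basic compliance check. Field names and rules are illustrative.
from dataclasses import dataclass, field

@dataclass
class AIDisclosure:
    used_generative_ai: bool                            # any AI-generated or AI-altered content?
    ai_tools: list = field(default_factory=list)        # names of models or tools used
    targeting_algorithms: list = field(default_factory=list)  # how the audience was selected
    disclaimer_text: str = ""                           # the on-screen disclaimer shown to viewers

def is_compliant(disclosure: AIDisclosure) -> bool:
    """An ad that used generative AI must name its tools and carry a visible disclaimer."""
    if not disclosure.used_generative_ai:
        return True
    return bool(disclosure.ai_tools) and bool(disclosure.disclaimer_text.strip())

# An auditor could run this check across every ad a campaign or platform reports.
ad = AIDisclosure(used_generative_ai=True, ai_tools=["image generator"], disclaimer_text="")
print(is_compliant(ad))  # False: AI was used but no viewer-facing disclaimer is attached
```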
2. Potential Ban on AI in Campaigning Materials:
The threats AI poses to the democratic process could be deemed significant enough to justify banning AI from campaign materials altogether. This measure would eliminate invasive AI capabilities, trust issues, and the threat of deepfakes. The extent of those capabilities is laid out in an article co-authored by a Harvard law professor, which cites a congressional hearing at which OpenAI CEO Sam Altman admitted he is concerned about AI's potential to harm voters. The article describes a hypothetical AI engine called “Clogger,” which could carry out “microtargeting techniques” on a far greater scale than ever before. As noted above, however, such a ban also has significant drawbacks: it could prevent lesser-known or less-funded campaigns from using AI, potentially exacerbating existing inequalities. Weighing these trade-offs requires a decision that preserves the democratic principles of fairness and equal representation while acknowledging AI's very apparent harms. Currently, the Protect Elections from Deceptive AI Act, introduced in Congress by Senator Klobuchar, aims to ban the distribution of materially deceptive AI-generated audio or visual media relating to candidates for federal office.
3. Ethical AI Certification:
This policy would have the government set out the essential rules that AI engines used in the campaign process must follow. The guidelines would exclude AI engines capable of producing deepfakes and manipulative content, creating a more secure campaign cycle. Engines would have to be certified by the government, after demonstrating adherence to these ethical guidelines, before candidates could use them. An additional measure would reinforce the policy's reliability:
Regular Audits: A government-led body would monitor campaigns to ensure that any AI they use comes solely from pre-approved engines. A regular audit schedule for reviewing certified AI systems would keep those engines up to date in preventing abuse. Overall, regular audits would be crucial to maintaining the integrity of this policy option.
4. Passing Privacy Legislation:
While some of the policy options above address AI's problems narrowly, it is just as important that Congress pass legislation that regulates and addresses the root causes of AI's effects on democracy. Elizabeth Meehan of Tech Policy Press has noted that transparency has flaws: even when individuals are given more information through greater transparency, they often lack the background to make sense of it. It is therefore important to supplement these narrower solutions with legislation that allows industry to deploy AI safely without stifling innovation. President Biden's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence is already working to establish new and improved standards; key provisions include requiring developers to share safety test results and other critical information with the federal government, and establishing best practices for detecting AI-generated content to protect citizens from AI-enabled fraud and deception. The executive order also calls on Congress to pass bipartisan privacy legislation to protect all Americans, a call to action that may lend urgency to congressional efforts on AI. Key privacy legislation with algorithmic accountability provisions includes the SAFE DATA Act sponsored by Senator Wicker and the Consumer Online Privacy Rights Act sponsored by Senator Cantwell.
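As a loose illustration of the executive order's "detecting generated AI content" provision, the sketch below checks whether a media file's accompanying metadata declares AI provenance. The metadata keys and values are assumptions for illustration; real provenance schemes rely on signed, tamper-evident content credentials and forensic detection rather than a simple lookup.

```python
# Illustrative sketch only: checking declared provenance metadata for signs that a
# media file was AI-generated. Keys and values here are assumed for illustration.
def declared_as_ai_generated(metadata: dict) -> bool:
    """Return True if the media's own metadata declares AI generation or alteration."""
    provenance = metadata.get("provenance", {})
    if provenance.get("generator_type") in {"ai_model", "ai_assisted"}:
        return True
    return bool(metadata.get("ai_disclosure", False))

# A missing or negative declaration proves nothing: deceptive actors can strip
# metadata, which is why detection also needs watermarking and forensic analysis.
example = {"provenance": {"generator_type": "ai_model", "tool": "unspecified"}}
print(declared_as_ai_generated(example))  # True
```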
Conclusion
Several states have already introduced legislation on AI in elections and campaigns, and four bills have been enacted across three states: Michigan, Minnesota, and Washington. Michigan's HB 5141 requires disclosure on political ads generated by AI, while HB 5144 prohibits the publication of materially deceptive media within 90 days of an election in which a candidate will appear on the ballot unless it includes a disclosure that the media has been manipulated. Minnesota's HF 1370 prohibits the publication of deepfake media intended to influence an election within 90 days of that election. Washington's SB 5152 requires disclosure of synthetic media used to influence an election. State legislators are taking action against the harmful effects of AI in elections. If other governments likewise enforce proper public and political use of AI, the technology has a chance to become a powerful force for advancement and good, but only if it is fairly regulated in time.
Acknowledgment
The Institute for Youth in Policy wishes to acknowledge Michelle Liou, Joy Park, Nolan Ezzet, and other contributors for developing and maintaining the Policy Department within the Institute.
Works Cited:
- One Hundred Fifteenth Congress. Putin’s Asymmetric Assault On Democracy In Russia And Europe: Implications For U.S. National Security, 2018, www.foreign.senate.gov/imo/media/doc/FinalRR.pdf
- Galston, William A. “Is Seeing Still Believing? The Deepfake Challenge to Truth in Politics.” Brookings, 8 Jan. 2020, www.brookings.edu/articles/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics/
- The Repressive Power of Artificial Intelligence. (n.d.). Freedom House. https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence
- Thompson, Alex. “First Look: RNC Slams Biden in AI-Generated Ad.” Axios, April 25, 2023. https://www.axios.com/2023/04/25/rnc-slams-biden-re-election-bid-ai-generated-ad
- The GOP’s Latest Attack on Biden isn’t scary. How it was made is. MSNBC, December 17, 2021. https://www.msnbc.com/opinion/msnbc-opinion/biden-republican-ai-attack-ad-rcna8137.
- How AI will transform the 2024 elections | Brookings. (2023, June 12). Brookings. https://www.brookings.edu/articles/how-ai-will-transform-the-2024-elections/
- “Watch: RNC Responds to Biden’s Reelection Bid with Ai-Generated Ad | CNN Politics.” CNN, April 25, 2023. https://www.cnn.com/videos/politics/2023/04/25/artificial-intelligence-rnc-response-biden-campaign-cnn-vpx.cnn.
- Rascoe, A. (2023, July 30). How real is the threat of AI deepfakes in the 2024 election? NPR. https://www.npr.org/2023/07/30/1190970436/how-real-is-the-threat-of-ai-deepfakes-in-the-2024-election
- Johnson, D., & Johnson, A. (2023, June 15). What are deep fakes? How fake AI-powered audio and video warps our perception of reality. Business Insider. https://www.businessinsider.com/guides/tech/what-is-deepfake
- Brancaccio, D., Conlon, R., & Shin, D. (2021, October 22). Are tech companies influencing politics more than governments? Marketplace. https://www.marketplace.org/2021/10/20/are-tech-companies-influencing-politics-more-than-governments/
- Posard, M. N. (2020, October 1). From Consensus to Conflict: Understanding Foreign Measures Targeting U.S. Elections. RAND. https://www.rand.org/pubs/research_reports/RRA704-1.html
- Wanless, A., & Walters, L. (2020, October 13). How Journalists Become an Unwitting Cog in the Influence Machine. Carnegie Endowment for International Peace. https://carnegieendowment.org/2020/10/13/how-journalists-become-unwitting-cog-in-influence-machine-pub-82923
- Lee, J.A.N. “Arthur Lee Samuel.” Computer Pioneers. Accessed December 17, 2023. https://history.computer.org/pioneers/samuel.html.
- “John McCarthy: Homage to the Father of Artificial Intelligence .” Teneo.Ai, November 16, 2023. https://www.teneo.ai/blog/homage-to-john-mccarthy-the-father-of-artificial-intelligence-ai.
- “New Survey: Americans Think AI Is a Threat to Democracy, Will Become Smarter than Humans and Overtake Jobs, Yet Believe Its Benefits Outweigh Its Risks.” Stevens Institute of Technology, November 15, 2021. Accessed December 17, 2023. https://www.stevens.edu/news/new-survey-americans-think-ai-threat-democracy-will-become-smarter-humans-and-overtake-jobs-yet.
- Rainie, Lee, Cary Funk, Monica Anderson, and Alec Tyson. “1. How Americans Think about Artificial Intelligence.” Pew Research Center: Internet, Science & Tech, March 17, 2022. Accessed December 17, 2023. https://www.pewresearch.org/internet/2022/03/17/how-americans-think-about-artificial-intelligence/.
- Panditharatne, Mekela. “How AI Puts Elections at Risk - and the Needed Safeguards.” Brennan Center for Justice, September 19, 2023. Accessed December 17, 2023. https://www.brennancenter.org/our-work/analysis-opinion/how-ai-puts-elections-risk-and-needed-safeguards?ref=disinfodocket.com.
- Turner Lee, Nicol, Joseph B. Keller, Chinasa T. Okolo, Darrell M. West, Bill Baer, Obioha Chijioke, Gregory S. Dawson, and James S. Denford. “AI Can Strengthen U.S. Democracy—and Weaken It.” Brookings, November 21, 2023. Accessed December 17, 2023. https://www.brookings.edu/articles/ai-can-strengthen-u-s-democracy-and-weaken-it/.
- “Summary Artificial Intelligence (AI) in Elections and Campaigns.” National Conference of State Legislatures, December 7, 2023. Accessed December 17, 2023. https://www.ncsl.org/elections-and-campaigns/artificial-intelligence-ai-in-elections-and-campaigns.
- Regulating AI Deepfakes and Synthetic Media in the Political Arena. (2023, December 12). Brennan Center for Justice. https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
- Schneier, B. (2020, January 7). The Future of Politics Is Bots Drowning Out Humans. The Atlantic. https://www.theatlantic.com/technology/archive/2020/01/future-politics-bots-drowning-out-humans/604489/
- Klepper, D. (2023, November 8). Meta updates AI deepfake policy for political ads, 2 months after Google did it. Fortune. https://fortune.com/2023/11/08/meta-label-ai-generated-deepfake-political-ads-2024-election/
- Fung, A., Lessig, L., & Us, C. (2023, June 6). How AI Could Take Over Elections--And Undermine Democracy. Scientific American. https://www.scientificamerican.com/article/how-ai-could-take-over-elections-and-undermine-democracy/
- Berman, R. (2023, May 27). AI Could Save Politics—If It Doesn't Destroy It First. The Atlantic. https://www.theatlantic.com/politics/archive/2023/05/ai-political-campaigns-2024-election-democracy-chatgpt/674182/
- Harper, A., Gehlen, B., & Pereira, I. (2023, November 20). AI use in political campaigns raising red flags into the 2024 election. ABC News. https://abcnews.go.com/Politics/ai-political-campaigns-raising-red-flags-2024-election/story?id=102480464
- Sesto, G. (2023, August 25). Ethics And Transparency In AI-Powered Political Advertising. Forbes. https://www.forbes.com/sites/forbesagencycouncil/2023/08/25/ethics-and-transparency-in-ai-powered-political-advertising/?sh=6b2a28c34170
- Meehan, E. (2023, October 24). Transparency Won’t Be Enough for AI Accountability. Tech Policy Press. https://www.techpolicy.press/transparency-wont-be-enough-for-ai-accountability/
- The AI regulatory toolbox: How governments can discover algorithmic harms | Brookings. (2023, October 9). Brookings. https://www.brookings.edu/articles/the-ai-regulatory-toolbox-how-governments-can-discover-algorithmic-harms/
- Fung, A. (n.d.). How AI could take over elections – and undermine democracy. The Conversation. https://theconversation.com/how-ai-could-take-over-elections-and-undermine-democracy-20605
- Field, H., & Feiner, L. (2023, October 30). Biden issues U.S.’ first AI executive order, requiring safety assessments, civil rights guidance, research on labor market impact. CNBC. https://www.cnbc.com/2023/10/30/biden-unveils-us-governments-first-ever-ai-executive-order.html
- Ward, B. P. W. P. (2023, July 12). What does AI need? A comprehensive federal data privacy and security law. International Association of Privacy Professionals. https://iapp.org/news/a/what-does-ai-need-a-comprehensive-federal-data-privacy-and-security-law/
- The White House. (2023, October 30). FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/