Executive summary
As AI becomes integrated into people's everyday lives, its ease of use has also enabled abuse. Deepfakes, AI-generated videos that depict individuals saying or doing things they never did, have been widely weaponized. Research indicates that roughly 96% of deepfakes are non-consensual pornographic content, which disproportionately impacts young women and girls. These videos devastate the lives of their subjects, threatening personal privacy and identity while fueling the circulation of misinformation. In the U.S., some states have laws governing the creation and distribution of this content, but the issue has been overlooked at the national level. Many affected groups are not adequately addressed in state legislation, and a lack of general education on the topic has made it even harder to resolve. To combat this, a federal bill outlawing non-consensual pornographic content, criminalizing its creators, and protecting victims would help solve the problem, and including deepfakes in school curricula would foster advocacy and solutions for generations to come.
Overview
According to the Pew Research Center, over 55% of Americans report interacting with AI at least once every day. Although AI has beneficial uses, such as increased efficiency, productivity, and innovation, putting such a powerful technology in the hands of ordinary citizens creates issues that are difficult to regulate. The U.S. currently has no federal legislation governing the production and development of new forms of AI, so misuse of artificial intelligence has gone largely unregulated.
In recent years, this lack of regulation has paved the way for a deeply destructive trend: deepfake pornography. A deepfake is an artificial image or video (a series of images) generated by a kind of machine learning called "deep" learning, hence the name. Although producing convincing deepfakes once required advanced equipment, access to AI models like variational autoencoders and generative adversarial networks (GANs), and significant amounts of time, the tools have become easy for ordinary people to use. As this technology becomes increasingly available, women have become the primary victims of deepfakes because their content is so often pornographic.
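For readers unfamiliar with the mechanics, the core face-swap idea behind many deepfake tools can be sketched in a few lines. The toy example below is purely illustrative: it uses principal-component analysis as a stand-in for the deep neural networks (autoencoders) that real systems train, and random vectors as stand-ins for face images. The key point it demonstrates is the architecture: one shared encoder compresses any face into a small code, and decoding that code with a different person's appearance statistics produces the "swap."

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for face images of two people: 8-dimensional vectors
# clustered around different means (real systems use pixel arrays).
faces_a = rng.normal(loc=1.0, size=(200, 8))
faces_b = rng.normal(loc=-1.0, size=(200, 8))

def fit_encoder(X, k=3):
    """A linear 'autoencoder' via PCA: the top-k principal components
    act as the shared encoder that compresses any face."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

# One encoder is trained on faces of BOTH identities...
mu, W = fit_encoder(np.vstack([faces_a, faces_b]))

def encode(x):
    # Compress a face to a small latent code.
    return (x - mu) @ W.T

def decode_as(code, target_faces):
    # ...but decoding re-renders the code using the *target* person's
    # average appearance: this substitution is the face-swap step.
    return code @ W + target_faces.mean(axis=0)

# Express person A's pose/expression with person B's appearance.
swapped = decode_as(encode(faces_a[0]), faces_b)
```

Real deepfake pipelines replace the linear projection with deep convolutional networks and train on thousands of frames, but the shared-encoder, swapped-decoder structure is the same, which is why the technique requires only example footage of the two faces rather than any specialized expertise.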
From high school girls to celebrities, women across the country have been targeted by AI deepfake technology used to create grotesque pornographic content. Having these videos created can destroy one's future aspirations, foster societal shame and exclusion, and pose risks to physical safety. The issue has reached a level that the U.S. government must address so that women and girls throughout the country can feel safe and protected. As technology becomes increasingly integrated into our lives, it is essential for the government to ensure its constituents are protected against the abuse of that technology.
Pointed Summary
- Deepfakes are a form of fabricated video content that can be created through AI.
- These models and technologies have become available to more people than ever.
- Deepfake technology is often used to create pornographic content.
- This content disproportionately affects women and girls.
- There is a lack of legislative protection for victims of this issue.
- The government needs to do a better job of protecting against deepfake pornography.
Relevance
Recent research found more than 95,000 deepfakes on the internet in 2023, a 550 percent increase from 2019. As the technology becomes increasingly available, the market is predicted to keep growing rapidly. As more and more videos are created, proper legislation must be in place to prosecute creators of this harmful content and protect victims of this violence.
Another critical problem is the rapid development of AI technology itself. A recent study found that people could correctly identify AI-generated deepfake speech only 73% of the time. As AI models grow in skill and sophistication, the videos are likely to become only more realistic. It is important to have legal measures in place before deepfakes become even more difficult to identify.
Current Stances
In September 2023, the National Security Agency identified deepfakes as an official safety threat, and the Department of Homeland Security has since published its own reports on the threat deepfakes pose to society. Although both statements are important, they focus largely on protecting political candidates, safeguarding democracy, and eliminating misinformation in U.S. elections. The U.S. government has taken the valuable step of formally identifying the issue but has done little to actually solve it or address its impact on the broader population.
More recently, in February 2024, the Federal Trade Commission finalized a rule banning government and business impersonation and reportedly stated that its new goal is to extend this protection to individuals. There has since been little progress on legislation protecting a broader range of U.S. citizens. Although protecting government institutions is undeniably important, these actions do not account for the impact deepfakes have on ordinary citizens, particularly through harmful pornographic content.
This past year, many interest groups began urging the federal government to take more proactive steps on this issue. Organizations like Equality Now, the National Organization for Women, and End Cyber Abuse are actively lobbying, publishing, and advocating for legislation against deepfake pornography to protect women throughout the country. Students have been another valuable voice in this drive for legislation. As young women see this issue harming themselves and their classmates, many have used their platforms to advocate for better protection. Last December, for example, a mother and her 14-year-old daughter began advocating for stronger victim protections after AI-generated nude images of the teen and other female classmates circulated at a high school in New Jersey. As stories like these spread through the media, people become more aware of the issue's prevalence and can connect with its impact on real lives. As human rights organizations, students, and media outlets continue pushing for these protections, more action will hopefully be taken.
Tried Policy
Despite the federal government's inaction on this form of content, many states have passed relevant legislation. Current reports indicate that 10 states have already passed laws addressing deepfake pornography, and many more will vote on related legislation in the coming year. Georgia, Hawaii, Virginia, and Texas criminalize non-consensual deepfake pornography; California and Illinois give victims the right to sue those who create images using their likenesses; and Minnesota and New York combine the two approaches.
Many representatives in states without such laws are working on the issue in their upcoming legislative cycles. Representative Holly Cheeseman of Connecticut stated that her main legislative priority was protection against AI-generated pornographic images, citing studies finding that 98% of so-called deepfakes are pornography and that the victims are almost entirely women, many of them minors.
Another political figure working to combat this issue has been Senator Ted Cruz of Texas. In June 2024, he introduced legislation to protect victims of deepfake pornography by requiring websites to take down this content. The bill, known as the Take It Down Act, has already been cosponsored by 18 other senators from both parties.
One other prominent voice on this issue has been Representative Alexandria Ocasio-Cortez of New York, who proposed the DEFIANCE Act in January 2024, which would allow victims of deepfake pornography to sue creators if they prove the content was non-consensual. The bill has advanced rapidly in Congress and garnered significant recognition and support. The increased engagement of lawmakers on this issue has been immensely valuable in advancing legislative action on this topic.
Unfortunately, much of the legislation currently being proposed still faces barriers to effectively solving this issue. In many states, proposed laws require proving intent: that the person making and sharing the non-consensual deepfake meant to harm its subject. This makes it difficult for young women to pursue these cases in court and makes perpetrators harder to hold accountable. With some changes to the current form of legislation, this barrier can certainly be mitigated.
Although state legislation is valuable and is growing as a topic of interest in many states, creating a more efficient and equitable system for solving these problems on a national level would greatly help victims and improve the legal process regarding this issue.
Policy Problem
A. Stakeholders
The first stakeholders in this issue are the creators of the AI models capable of producing deepfakes. Over the years, more than 100,000 such models have been developed. Many AI companies, including Synthesia, iProov, Ceartas, Truepic, and Maverick Lab, enable the creation of AI-generated video content, which creates the potential for deepfakes. Because these corporations and models are owned by private groups and individuals, regulation would need to be passed federally to account for the varying companies.
Another key stakeholder group is the individuals who use this software to create the content itself. Although some deepfakes are produced by researchers at top universities, many are created by young people without any formal training; the first pornographic deepfake was made by a Reddit user in 2017. Regardless of their background, these individuals are producing harmful, inappropriate, and violating content and must be held accountable for their actions. Under the current lack of legislation, it is often easy to get away with creating these videos; with legislation in place, creators would face the prosecution they deserve.
The victims of these crimes fall into two primary categories: high-profile figures and high school girls. Deepfakes have been created using the faces of celebrities like Emma Watson, Gal Gadot, and, most notably, Taylor Swift. As Swift's influence grew alongside her record-breaking Eras Tour this year, the deepfakes created of her prompted a major call to action on this issue. Although celebrities' ability to raise awareness is important for solving the problem, high school students receive far less attention and respect when facing similar circumstances.
High school and middle school-aged girls are disproportionately impacted by deepfake creation and are not given proper protection. Many teen girls who have these videos created of them and spread throughout their schools face detrimental social and emotional consequences. Past research has found that nearly 40% of deepfake victims were minors, and these victims lack the legal rights to fight back. Protecting minors and young women across the country is a vital step toward solving this issue.
B. Risks of Indifference
As AI deepfake technology has become more accessible and efficient in recent years, the problem is growing far faster than current legislation accounts for. An estimated 90-95% of deepfake videos are now non-consensual pornography, and, of those, 90% target women, many of them underage. The issue is urgent because more women and girls are targeted every day. According to market research, the market for AI deepfakes is projected to grow by over 40% between 2024 and 2030. Before even more women are put at risk by this awful content, it is time for the government to act to ensure their protection.
C. Nonpartisan Reasoning
Regardless of one's political values and ideologies, the creation of deepfakes violates basic human rights that are broadly agreed upon: the right to privacy, the protection of identity, and the ability to access credible information.
The creation and spread of deepfake videos are inherently a violation of individuals' right to privacy. Deepfakes allow people to falsify information for personal gain and unjustly victimize others. When fabricated depictions of a person circulate, their privacy is directly threatened. As bad-faith actors continue using others' likenesses to create harmful deepfakes that advance their agendas, those individuals' privacy remains at risk. Everyone should be able to trust the government to protect their personal information and lives, and deepfakes violate that ideal.
Another key problem with deepfakes is the threat to identity protection. Their realistic nature often makes it incredibly difficult to tell whether content has been falsified. When such content is mistaken for truth, it can badly damage people's reputations and lead others to believe entirely inaccurate things about them. Deepfakes can also be mobilized as a form of identity theft, a pressing concern for high-profile political candidates and business executives. Deepfakes are a massive threat to individuals' identities, and action should be taken to keep people's identities protected in the digital world.
Arguably the most prominent concern surrounding deepfakes is the creation and spread of misinformation. Deepfakes let people spread incredibly realistic falsified information, driving a surge in media disinformation. Because deepfakes are so difficult to identify, they threaten people's access to credible, truthful information. This misinformation fosters confusion on important societal issues and distorts how people perceive the world around them. By working to regulate deepfakes, the government can ensure that this spread of inaccurate information is handled properly.
Policy Options
To properly address the issue of deepfake pornography, the United States government needs an effective nationwide solution. To centralize the response, Congress should pass a federal bill criminalizing both the creation and the spread of AI-generated pornography when it is proven non-consensual. The bill would need to address all involved parties: AI companies, individual creators of the content, and victims alike.
As for AI companies, the U.S. government would have the authority to shut down a company entirely if it were found to be facilitating the creation of this content. Once a case began, the government would gain access to the software used to create the deepfake in question and could deactivate the model if it were proven to be a contributing factor.
As for the human creators, they would be tried and prosecuted based on the age of the victim and the severity of the deepfake's content. This model would be based on the current state legislation following this format but would extend it to the federal scale, ensuring this form of trial is carried out in every state.
As for the victims of the deepfakes, they would testify that the creation was in fact non-consensual and prove it in a court of law. They would also be guaranteed that the video is taken offline and given access to funded counseling resources to help them deal with the event's impact on their lives. Bringing this process to the federal level will hopefully make this content easier to handle and help victims receive the resources and support they need to move forward.
Another critical gap in current policy surrounding this issue is education. In the current state of U.S. education, deepfakes are not a required subject of discussion or teaching in the classroom. Alongside the new federal bill, a federal mandate should require conversations about deepfakes within middle and high school sex education and technology education in U.S. public schools. This content would teach students what a deepfake is, discourage them from creating or spreading such content, and encourage them to report it to their school if they discover it. In technology classes, students would be taught how to identify AI-created content and develop the ability to analyze deepfake material.
Not only will this education policy help students accurately identify, understand, and report deepfakes, it will also create a generation better equipped to discuss this issue and work toward even more effective solutions. By empowering students to stand up for their rights and continue advocating against injustices, the future of this country and the world at large will be better off. As young people become more educated on this issue, more productive discussions and actions can follow as AI becomes further integrated into daily life.
Conclusions and Recommendations
As a society, it is necessary to recognize that AI will continue to become integrated into our lives. Rather than hiding from these complex issues, it is invaluable to address them before they spiral out of control. To protect all the young women and girls who will grow up in America's technological age, protection against AI deepfakes is essential for maintaining their privacy, identities, and rights. By preparing both the court system and the educational system of this country to address AI deepfake pornography, the government can ensure a system that will protect the rights of its citizens for generations to come.
Acknowledgment
The Institute for Youth in Policy wishes to acknowledge Paul Kramer, Carlos Bindert, Gwen Singer, and other contributors for developing and maintaining the Programming Department within the Institute.
References
- Aizawa, Yoshiki. “Deepfakes and the Future of Personal Privacy.” Coruzant, www.coruzant.com/tech/deepfakes-and-the-future-of-personal-privacy/#:~:text=Deepfakes%20represent%20a%20significant%20threat,exploit%20deepfakes%20for%20various%20purposes.
- “12% of Americans Say They Regularly Interact with AI.” Pew Research Center, 15 Feb. 2023, www.pewresearch.org/science/2023/02/15/public-awareness-of-artificial-intelligence-in-everyday-activities/#:~:text=12%2D18%2C%202022%2C%20finds,not%20regularly%20interact%20with%20AI.
- “AI Advantages.” Forbes, www.forbes.com/advisor/in/business/software/advantages-of-ai/.
- “AI Watch: Global Regulatory Tracker - United States.” White & Case, www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states.
- “Deepfake AI Market Report.” Grand View Research, www.grandviewresearch.com/industry-analysis/deepfake-ai-market-report#:~:text=Deepfake%20AI%20Market%20Size%20%26%20Trends,intelligence%20(AI)%20market%20growth.
- “Deepfake Pornography and Legislation.” AP News, www.apnews.com/article/deepfake-ai-nudes-teen-girls-legislation-b6f44be048b31fe0b430aeee1956ad38.
- “Deepfake Technology and Its Impact on Society.” Spectrum, IEEE, www.spectrum.ieee.org/what-is-deepfake.
- “Deepfakes Are Out of Control: Is It Too Late to Stop Them?” New Scientist, 2023, www.newscientist.com/article/2418188-deepfakes-are-out-of-control-is-it-too-late-to-stop-them/#:~:text=A%20study%20last%20year%20by,cent%20using%20a%20woman's%20likeness.
- “Deepfakes as a Security Issue: Why Gender Matters.” Women in International Security, www.wiisglobal.org/deepfakes-as-a-security-issue-why-gender-matters/.
- “Deepfakes: Navigating Data Privacy and Security.” JD Supra, www.jdsupra.com/legalnews/deepfakes-navigating-data-privacy-and-6913844/.
- “Dangers of Deepfake: What to Watch.” Stanford University, UIT, www.uit.stanford.edu/news/dangers-deepfake-what-watch.
- “Growing Pandemic of Deepfake Image-Based Sexual Exploitation Requires an Urgent Response from Lawmakers.” Equality Now, www.equalitynow.org/press_release/growing-pandemic-of-deepfake-image-based-sexual-exploitation-requires-an-urgent-response-from-lawmakers/.
- “Humans Can Detect Deepfake Speech Only 73% of the Time, Study Finds.” The Guardian, 2 Aug. 2023, www.theguardian.com/technology/2023/aug/02/humans-can-detect-deepfake-speech-only-73-of-the-time-study-finds.
- “Legislation for Deepfake Regulation.” Congress.gov, www.congress.gov/bill/118th-congress/senate-bill/4569/cosponsors.
- “Legislation on Deepfake Revenge Porn.” Cruz.Senate.Gov, www.cruz.senate.gov/newsroom/press-releases/sen-cruz-leads-colleagues-in-unveiling-landmark-bill-to-protect-victims-of-deepfake-revenge-porn.
- “Legislation to Protect Victims of Deepfake Revenge Porn.” Salazar.House.Gov, www.salazar.house.gov/media/press-releases/salazar-introduces-legislation-protect-victims-deepfake-revenge-porn.
- “NBC Connecticut Report on Deepfake Prevention.” NBC Connecticut, www.nbcconnecticut.com/news/politics/heres-what-is-being-done-to-prevent-so-called-deep-fakes-in-connecticut/3207841/.
- “States Take Aim at Deepfake Porn.” U.S. News & World Report, 6 Sept. 2024, www.usnews.com/news/u-s-news-decision-points/articles/2024-09-06/states-take-aim-at-deepfake-porn-plus-no-soap-for-you-in-illinois.
- “The Rise of Deepfakes.” Los Angeles Times, www.highschool.latimes.com/dulles-high-school/the-rise-of-deepfakes/.
- “The Stakes of Deepfakes: The Growing Necessity of Federal Legislation to Regulate This Rapidly Evolving Technology.” Princeton Legal Journal, legaljournal.princeton.edu/the-high-stakes-of-deepfakes-the-growing-necessity-of-federal-legislation-to-regulate-this-rapidly-evolving-technology/.
- “What Are Deepfakes and How Can You Spot Them?” The Guardian, 13 Jan. 2020, www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them.
- “Why Mainstream Media Overlooks the Gendered Impacts of Deepfake Tech.” End Cyber Abuse, www.endcyberabuse.org/why-mainstream-media-overlooks-the-gendered-impacts-of-deepfake-tech/.
- “Women’s Rights Institute Reports on Deepfakes.” Korea Times, www.koreatimes.co.kr/www/nation/2024/09/113_381941.html#:~:text=The%20Women's%20Human%20Rights%20Institute,crimes%20this%20year%20were%20minors.
- “Your Guide to Deepfake Technology.” Venture Radar, www.ventureradar.com/keyword/Deepfakes.
- “Taylor Swift's Eras Tour Records.” Grammy, www.grammy.com/news/taylor-swift-eras-tour-records-movie-disney-plus-stream.
- “Teen Girls Targeted by Deepfake Nudes: A Call for Legislation.” AP News, www.apnews.com/article/deepfake-images-taylor-swift-state-legislation-bffbc274dd178ab054426ee7d691df7e#.
- “Virginia Security on Deepfakes.” University of Virginia, www.security.virginia.edu/deepfakes.
- “Artificial Intelligence Training and Deepfake Technology.” New York Times, 7 Feb. 2023, www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html.