AI Is a Death Sentence in Academia. Let’s Change That.

Published by Natasha Agarwal on May 13, 2026

This inquiry-driven article reflects the author's personal views and aims to enrich discourse on the problem it addresses.


Haishan Yang has received the death penalty, but his life is not what's on the line. Yang, a University of Minnesota Ph.D. student in economics, was recently expelled after being accused of using ChatGPT to generate responses to questions on his doctoral exam. The phrase "death penalty" was Yang's own, referring to the way the expulsion jeopardized his international scholarship, visa, reputation, and career prospects. But his story is a microcosm of the larger picture, not just one man's conduct. Universities across the nation have seen a rise in AI-misconduct cases, illuminating an impending intellectual crisis that threatens students everywhere.

While I might not be on trial, I live in that gray area too. Though I never used ChatGPT in high school, I now turn to artificial intelligence (AI) platforms for assistance every day in college: brainstorming for essays, checking projects against rubrics, and clarifying long readings. These actions align with Penn's AI policies, yet once-unremarkable collaborations are now colored by profound feelings of dependency and unease. The same tool that dissolves my writer's block now clouds my creativity, leaving me with a burning question: as an Ivy League student, am I losing my ability to think independently? Moments like these demonstrate how growing uncertainty can push students to misuse technology in compromising ways.

Haishan Yang's story is not the only cautionary tale. While 86% of undergraduate students across the globe use AI regularly, only 48% feel prepared for an AI-powered world, an unsettling confidence gap in the proper and ethical use of generative AI tools.

The desire to outsource your thinking, whether by having AI teach key concepts or complete your assignments, becomes all the more tempting when you are sitting in a library, sleep-deprived and overburdened with to-dos. Without mandated AI literacy courses for American undergraduates, students are not just abusing AI; they are operating without safeguards or shared definitions of academic integrity and ethical use. To bridge the gap, every university in the United States should implement a standardized, one-time AI fluency course that ensures college students can use AI ethically, efficiently, and confidently.

The Ohio State University is leading the way as the first college in America to establish a required AI fluency course. There, all first-year students starting in 2025 engage in interactive workshops that model how large language models produce AI outputs and which ethical boundaries to consider when using generative resources. Some experts have even proposed a competency framework for AI literacy courses in K-12 settings. While training a five-year-old may seem extreme, the proposal underscores how deeply AI will embed itself in our lives and how feasible teaching it in modules can be. If kindergartners can learn AI fluency, why shouldn't college students?

Other countries have also proven this task achievable. China has incorporated AI instruction into required secondary school curricula nationwide, while Germany has proposed a standardized six-module course to debunk misconceptions and spur enriching dialogue. These examples are not arbitrary; they are successful initiatives targeting the same issues U.S. universities face, and they provide a replicable blueprint for increasing ethical decision-making and confidence. By acknowledging and addressing AI's permanence rather than concealing it, these nations cultivate informed, responsible citizens equipped to thrive in an AI-driven world.

Yet not all schools have the infrastructure, or the will, to follow suit. Critics argue that widespread AI curricula are impractical given economic and staffing constraints, and instead advocate outright bans on AI in education. But bans do not eradicate AI; they breed secretive, academically dishonest use that leaves graduates unprepared for the modern workforce. The better solution is to equip students with the skills needed to navigate AI ethically and intelligently.

Another objection lies in the logistics: what professor willingly overhauls their syllabus to teach AI fundamentals and ethics? Universities can resolve this by administering standardized, month-long AI fluency modules independent of regular coursework. These courses, centered on prompt engineering, bias detection, and the application of ethical guidelines to generative platforms, can be taught by any professor who undergoes brief training. By mastering these skills, students can maintain independent thinking and enhance their creativity even when using AI. This approach addresses resource and ethical concerns without displacing students' normal schedules, just as OSU has done.

Despite being the most technologically savvy generation in history, we have already surrendered part of our autonomy to the algorithms we built. Now more than ever, universities must implement mandatory AI fluency courses to help us reclaim those missing pieces, redefining AI as a collaborator, not a crutch. Students are already taught to read, write, and think critically; AI literacy is the next key skill for success. If we fail to teach it, we won't just become passive victims of the Fourth Industrial Revolution. We will become our own executioners.

Acknowledgment

The Institute for Youth in Policy would like to acknowledge Gabriel Sanroman for editing this op-ed.

Bibliography

Cathey, Libby. 2025. "Ohio State Students Begin First-Ever AI Fluency Program." 10tv.com. WBNS. September 5, 2025. https://www.10tv.com/article/news/local/ohio-state-students-begin-ai-fluency-program/530-f9b837ca-3179-4f1b-b797-09a5514274c1.

Digital Education Council. 2024. "What Students Want: Key Results from DEC Global AI Student Survey 2024." Digitaleducationcouncil.com. August 7, 2024. https://www.digitaleducationcouncil.com/post/what-students-want-key-results-from-dec-global-ai-student-survey-2024.

Gerezgiher, Feven. 2025. "'A Death Penalty': Ph.D. Student Says U of M Expelled Him over Unfair AI Allegation." MPR News. January 17, 2025. https://www.mprnews.org/story/2025/01/17/phd-student-says-university-of-minnesota-expelled-him-over-ai-allegation.

Ford-Rojas, Kristian. 2025. "The Hypocrisy of AI Bans in Education: We're Failing Students." Medium. January 29, 2025. https://medium.com/@fordkristian/the-hypocrisy-of-ai-bans-in-education-were-failing-students-faa90e580ecc.

Casal Otero, Lorena, Alejandro Catala, Carmen Fernández-Morante, M. Taboada, Beatriz Cebreiro López, and Senén Barro. 2023. "AI Literacy in K-12: A Systematic Literature Review." International Journal of STEM Education 10 (1). https://doi.org/10.1186/s40594-023-00418-7.

Reynoldson, Miriam. 2025. "Against AI Literacy: Have We Actually Found a Way to Reverse Learning?" The Mind File (Substack). April 20, 2025. https://themindfile.substack.com/p/against-ai-literacy-have-we-actually.

Shalwa. 2025. "AI Plagiarism Statistics 2025: Transforming Academic Integrity." Artsmart.ai. February 23, 2025. https://artsmart.ai/blog/ai-plagiarism-statistics/.

Southworth, Jane, Kati Migliaccio, Joe Glover, Ja'Net Glover, David Reed, Christopher McCarty, Joel Brendemuhl, and Aaron Thomas. 2023. "Developing a Model for AI across the Curriculum: Transforming the Higher Education Landscape via Innovation in AI Literacy." Computers and Education: Artificial Intelligence 4 (1): 100127. https://doi.org/10.1016/j.caeai.2023.100127.

"Statement on Guidance for the University of Pennsylvania Community on Use of Generative Artificial Intelligence." 2025. Information Systems & Computing, University of Pennsylvania. https://isc.upenn.edu/security/AI-guidance.


Natasha Agarwal

Policy Media Journal Editor

Natasha Agarwal is a UPenn student interested in law, human rights, nonprofit work, and artificial intelligence. The founder of BelieveNBooks and author of Tales of African Women Trailblazers, Natasha has distributed 240,000 books to 45,000 youth to improve literacy and inspire women. At YIP, she is thrilled to shed light on policy developments as a Journal Editor in the Policy Media Department.
