I. Executive Summary
Youth engagement with screens, especially artificial intelligence (AI) tools and social media, has grown rapidly, reshaping how students learn. According to the American Psychological Association (APA), AI is already an integral part of adolescents' daily lives, from predictive text to generative chatbots. Because adolescence (roughly ages 10-25) is a critical period for brain, emotional, and social development, the APA warns that adolescents are especially vulnerable to AI systems that present inaccurate information, simulate human interaction, and deliver persuasive content designed to influence their beliefs or emotions, often without their realizing it. Research cited in the advisory emphasizes that adolescents are less likely than adults to question the accuracy or intent of AI-generated responses, making them more susceptible to misinformation, bias, and excessive reliance on AI tools, especially in educational settings.
While AI tools can be useful to students for brainstorming ideas, summarizing information, and similar tasks, overreliance can weaken critical thinking, undermine independent problem solving, and reduce students' ability to interact with peers. The advisory recommends safeguards to mitigate potential harm, including stronger youth privacy settings and reduced persuasive design elements. In school settings, schools and policymakers are urged to implement AI literacy education that teaches students how to navigate AI systems and recognize the risks of AI use. The goal is not to eliminate AI or screen use in educational settings, but to teach adolescents to use these tools responsibly within defined limits.
II. Relevance
Youth engagement with AI and screen-based technologies in education is an important public affairs issue because it directly affects students' learning, mental health, and long-term development. Adolescents are in a critical stage of brain and social-emotional development, making them especially vulnerable to AI systems. When AI is used heavily without safeguards, it can foster overreliance on these tools, weaken critical thinking, and reduce social engagement with one's surroundings.
The public should care because education policy shapes how young people grow, not just how they perform in school. As the American Psychological Association points out, adolescents are more likely than adults to trust AI-generated information without questioning it, and they may not realize when AI is influencing their decisions or behavior. Because of this, students are more vulnerable to misinformation and unhealthy dependence on technology, especially when AI tools are designed to feel personal or human. If policymakers do not step in early with clear guidelines and protections, these systems could do real harm by weakening critical thinking skills and increasing social isolation. Addressing these risks now helps ensure that AI supports students rather than shaping their development in ways that are hard to undo.
III. History
Legislative concerns over youth screen use in education have intensified over the past decade as digital tools have shifted from supplemental resources to core instructional infrastructure, a shift that accelerated during and after the COVID-19 pandemic, when prolonged remote learning normalized sustained screen time. While earlier debates centered on physical devices and social media distractions, the widespread adoption of generative AI has introduced new concerns related to cognitive development and misinformation. Specifically, educators and psychologists worry that students may develop an overreliance on AI for tasks that traditionally build critical thinking, such as essay writing and math problem solving. This dependency risks atrophying students' abilities to construct original arguments and persevere through difficult problems independently. AI tools have become deeply embedded in academic life: the share of teens reporting using ChatGPT for schoolwork doubled from 13% in 2023 to 26% in 2024. This increase signals a meaningful shift in how students approach learning tasks, as what began as experimental use has quickly become normalized classroom behavior.
The Children’s Online Privacy Protection Act (COPPA) and the Family Educational Rights and Privacy Act (FERPA) primarily regulate data collection and record-keeping, offering limited protection against the risks of consuming AI-generated content, such as misinformation and unhealthy emotional reliance. In response, professional and international bodies have begun issuing guidance and cautionary research, though this has not yet been translated into legislation. The American Psychological Association’s 2025 health advisory finds that adolescents are more likely than adults to trust AI outputs without questioning their accuracy or intent, increasing susceptibility to bias and overreliance during a critical developmental period. Similarly, UNESCO has published Guidance for Generative AI in Education, urging policymakers to enact stricter AI regulations.
In response to growing concerns about youth AI use, President Trump signed an executive order promoting AI education in K-12 schools, citing AI literacy and global competitiveness.
IV. Stakeholders
The most vocal stakeholders in the debate over technology in education are teachers, who are directly responsible for students’ learning outcomes. Teachers are often the main proponents of phone bans in schools, citing smartphones and social media as both direct distractions and long-term detriments to students’ attention spans. Parents are divided. On one hand, mental health concerns, raised in popular books such as The Anxious Generation by Jonathan Haidt, have sparked a global conversation about youth technology use. On the other, parents argue that phones and communication technology keep them in contact with their children and help ensure their safety. Neurodivergent students with ADHD, autism, and dyslexia often depend on assistive technology to aid their learning; strict no-device policies can impede these students’ learning and exacerbate the digital divide affecting low-income students.
The federal government is a notable stakeholder: federal subsidies through the E-Rate program provide billions of dollars to schools for internet access and devices such as Chromebooks. Because these subsidies have made educators dependent on technology ecosystems such as Google Classroom, banning those ecosystems is nearly impossible, as the return on the E-Rate investment would vanish. These technologies are regarded as “educational uses,” while social media and phones are the main targets of recent bans. Bans on cellphones during the school day have mostly originated at the state level, as the U.S. Constitution reserves education to the states rather than the federal government. Each state has primary legal authority to manage its public school system, including regulating technology in the classroom.
V. Impact on Young People
From chatbots that serve as therapists to tools that complete academic papers, the accessibility and efficiency of technology have made it a consistent presence in the lives of younger generations. AI is often embedded or integrated into existing applications, making engagement less of a choice. Even when this integration is subtle, adolescents, who are in a crucial period of brain development, are highly sensitive to social feedback and emotionally engaging environments, which AI systems often exploit. For example, chatbot therapists simulate care and may offer dangerously validating responses to disclosures of self-harm. Many of these social AI companions lack proper safeguards or age regulation, and some routinely claim to be real companions with feelings, which can exacerbate teen mental health conditions. Compounding the problem, only 10% of students think their schools have established clear guidelines for AI use, according to a 2026 report.
When screen-time policies are too permissive, young people can default to mindless consumption of online content. CDC data from 2025 show that adolescents with more than four hours of daily non-school screen time have significantly higher rates of social isolation despite being digitally connected. This has real effects in the classroom, where adolescents turn to screens when feeling stressed or bored, even though excessive use increases their social anxiety. As of December 2025, 35 states and Washington, D.C., have enacted laws or adopted policies on student cellphone use in K-12 classrooms. These laws encourage not only face-to-face interaction among students but also greater focus in the classroom. In fact, 2026 Senate testimony indicated that students in restricted environments earned higher test scores.
Reevaluating screen time also means addressing the younger generation’s increasingly passive consumption of media. Training young people to be responsible digital citizens can promote ethical behavior and protect them from increasingly sophisticated online threats such as scams, misinformation, and cyberbullying.
VI. Conclusion
Young people are rapidly adopting AI-based tools in their daily lives, but this widespread adoption has outpaced the development of the regulatory standards and educational frameworks needed to protect adolescent users. Because adolescence is a period of heightened developmental vulnerability, during which the brain is still forming critical thinking skills and other cognitive patterns, unregulated AI systems pose heightened risks.
Moving forward, policymakers must prioritize AI literacy education and transparency requirements that help students recognize biases in AI-generated content and the limitations of the systems themselves. Organizations such as UNESCO have emphasized that teaching appropriate reliance on AI tools is just as important as teaching technical skills. Existing youth data privacy laws, while valuable, were designed for an earlier digital era and remain insufficient to address the unique challenges posed by AI; policymakers should consider updating these frameworks to include protections specific to AI-generated content. At the school level, clearer boundaries for when and how AI tools may be used in academic settings would help ensure that technology continues to enhance youth development.

