AI in Hiring

AI-enabled hiring tools are moving from optional screening aids to systems that can shape who gets seen, interviewed, and ultimately hired. In a national survey, most U.S. adults (71%) oppose AI making the final hiring decision, even if views are more mixed on AI reviewing applications (Pew Research Center, 2023). Roughly two thirds say they would not want to apply for a job at an employer that uses AI to help make hiring decisions, with many citing the loss of the human side of hiring and worries about bias or flawed screening logic (Pew Research Center, 2023). That combination of rapid adoption and visible skepticism makes hiring a high-stakes domain where weak oversight can have significant consequences.

Published on April 15, 2026

At YIP, nuanced policy briefs emerge from the collaboration of six diverse, nonpartisan students.


I. Executive Summary

Fairness has been the core issue in the AI recruiting controversy; yet fairness has largely been treated as a slogan rather than as an objective, verifiable standard grounded in transparency, accountability, and tangible rights (Rigotti and Fosch-Villaronga, 2024). AI recruiting systems pose significant risks of privacy invasion and social discrimination, particularly for individuals who already face discrimination. Safeguards therefore need to be integrated into the design, deployment, and monitoring of AI recruiting systems rather than bolted on as after-the-fact fixes. A viable policy approach would include documentation of job relevance and consistency, regular disparate impact testing, transparent notification to applicants when an automated tool is used, direct human oversight of decisions with significant consequences for the applicant, and a procedurally fair process through which applicants can obtain explanations and redress. This reframes the debate from whether AI should be used in hiring to whether its use can be demonstrated to be lawful, transparent, and fair.

II. Relevance

As organizations increasingly prioritize artificial intelligence within the recruitment process, decisions made by these automated systems continue to reshape how the modern workforce navigates the labor market. These changes mark a move away from a formerly human-to-human landscape toward an environment where automation plays a key role in allocating economic opportunity. While these tools promise an accelerated hiring process, that speed comes with heightened risks, particularly during the vital stages of resume screening and video interview analysis. Without clear oversight, AI can operate on flawed logic, systemic bias, or algorithmic discrimination.

The public should care deeply about this issue because artificial intelligence now directly shapes how individuals participate in the modern economy. When 71% of Americans express discomfort with AI making the final decisions in the recruitment process, they are reacting to the documented problem of automated discrimination: automated decision-making systems producing unfair outcomes that systematically disadvantage certain groups, often by scaling the very human prejudices these systems were meant to eliminate. While organizations strive for administrative efficiency, heavy reliance on systems that overlook human nuance and reasoning risks a future in which qualified candidates are screened out by rigid data patterns and face a job market they cannot fully understand. For policymakers, the significance of this issue no longer lies in technological innovation but in ensuring that the pursuit of automated efficiency does not come at the cost of workplace equity, civil rights, and opportunity.

III. History

The use of artificial intelligence in hiring has developed rapidly over the past two decades alongside the growth of digital job platforms and data-driven human resources practices. Early hiring software in the 2000s focused on keyword filtering and applicant tracking systems (ATS), which helped employers manage large volumes of applications. According to the Pew Research Center (2023), as AI and machine learning advanced in the 2010s, employers began adopting more sophisticated tools capable of ranking candidates by their resumes, analyzing video interviews, predicting job performance, and assessing personality traits. Major corporations and AI platforms promoted these systems as efficient solutions that reduce hiring time and costs while improving candidate matching and user satisfaction.

As AI integration has increased, public concern and policy debate have grown alongside it. Pew Research Center (2023) reports that while Americans show some acceptance of AI reviewing applications, most oppose allowing AI to make final hiring decisions and express discomfort with employers relying heavily on automated tools. The Harvard Business Review (2026) notes that although AI can improve efficiency and identify suitable candidates, poorly designed systems may reinforce patterns that favored certain groups of applicants, unintentionally deepening existing inequalities. For example, if previous hiring trends favored candidates from certain schools, backgrounds, or demographics, an algorithm may learn to rank similar candidates highly based on those trends. These concerns have been highlighted by policymakers and researchers, shifting the debate toward how AI hiring tools can be used transparently, with safeguards that protect fairness and equal opportunity in the labor market.

IV. Notable Stakeholders

Company policies governing the hiring process, including whether and how to incorporate AI, are typically set by the Human Resources (HR) team. HR practitioners and employers are likely to use AI to dramatically expedite hiring, reach a global talent pool, and better identify the traits and skills needed for a role. AI is not only used on the hiring side but also on the application side: job applicants can tailor their resumes or send out hundreds of applications at a time using new AI tools. Additionally, following the landmark Mobley v. Workday case, AI software vendors such as OpenAI and Paradox may be held legally liable for discrimination, just like an employer. In response, vendors are lobbying federally for preemption, a single national law that would override strict state laws and shield them from potential lawsuits.

Entry-level white-collar job applicants are the most affected by AI in the hiring process because of the oversaturation of applications for those roles. Manual methods are labor intensive and impractical for reviewing thousands of applications at a time, and implicit biases stemming from the personal characteristics of HR practitioners become unavoidable. However, there is also debate over whether AI will exacerbate inequalities because it learns from historical data. Demographics such as women and caregivers, who may appear to have employment gaps, are penalized by AI algorithms. Furthermore, researchers from Tasmania (2023) have shown that AI still correlates characteristics like courage and assertiveness with male success. Therefore, civil rights and advocacy groups such as the ACLU are pushing for legislation like the Eliminating BIAS Act, which was reintroduced in early 2026.

V. Impact

The increasing use of artificial intelligence in hiring processes has important implications for young people entering the workforce. Entry-level and early-career positions often receive the highest volume of applications, which has led many employers to adopt automated screening systems to sort and rank candidates before a human reviewer becomes involved. As a result, young applicants who are still developing formal work experience may be disproportionately affected by algorithmic filtering. Skills gained through school leadership, volunteer work, internships, or part-time employment may not align with the specific keywords or data patterns prioritized by automated systems, increasing the likelihood that otherwise qualified candidates are filtered out early in the hiring process (Pew Research Center, 2023).

At the same time, AI-driven hiring tools risk reinforcing structural inequalities that already shape access to employment opportunities for many young people. Because these systems are frequently trained on historical hiring data, they can unintentionally replicate patterns that favored candidates from certain educational institutions, socioeconomic backgrounds, or demographic groups (Harvard Business Review, 2026). The limited transparency of many AI systems further complicates this issue, as applicants often receive little explanation for rejection decisions and have few opportunities to understand how their applications were evaluated. Because early-career opportunities play a critical role in shaping long-term economic mobility, the expansion of automated hiring technologies raises significant concerns about fairness, accountability, and equal access for the next generation of workers.

VI. Conclusion

Companies have increasingly adopted AI-based tools to reduce costs and improve efficiency in their hiring processes. Because employment is one of the primary gateways to economic stability and social mobility, unregulated AI systems pose serious risks, especially for early-career workers and members of marginalized communities. Trust in the job market erodes when applicants cannot understand how decisions are made or appeal rejections they deem unfair. AI hiring tools are not neutral technologies, which is why the civil rights protections that govern traditional hiring practices should be extended to automated systems. Existing laws against hiring discrimination remain insufficient for opaque AI systems.

Therefore, policymakers should aim to protect job applicants and the job market by setting standards for accountability and job-related validity checks. This includes requiring employers to disclose automated hiring practices and providing channels for recourse when harm occurs. Ultimately, whether AI is regulated so that it preserves dignity and equality of opportunity in the workforce will determine if the next generation enters a job market defined by access or by exclusion.

VII. References

“AI Has Made Hiring Worse—But It Can Still Help.” Harvard Business Review, 26 Jan. 2026, https://hbr.org/2026/01/ai-has-made-hiring-worse-but-it-can-still-help.

Rainie, Lee, et al. “Americans’ Views on Use of AI in Hiring.” Pew Research Center, 20 Apr. 2023, https://www.pewresearch.org/internet/2023/04/20/americans-views-on-use-of-ai-in-hiring/.

“Rep. Summer Lee, Sen. Markey Reintroduce Legislation to Mandate Civil Rights Offices in Federal Agencies That Manage Artificial Intelligence.” Office of Congresswoman Summer Lee, 15 Jan. 2026, http://summerlee.house.gov/newsroom/press-releases/rep-summer-lee-sen-markey-reintroduce-legislation-to-mandate-civil-rights-offices-in-federal-agencies-that-manage-artificial-intelligence.

Rigotti, Carlotta, and Eduard Fosch-Villaronga. “Fairness, AI & Recruitment.” 2023. SSRN, https://doi.org/10.2139/ssrn.4619641.

Policy Brief Authors

Alicia McDuffie

Team Lead

Alicia is a student at Barnard College of Columbia University and the Team Lead for Technology at YIP. She is a committed activist focused on minimizing harmful rhetoric online and strengthening safe, accessible digital spaces.

Abdullahi Ahmadu

Technology Policy Analyst

Abdullahi Ahmadu is a Technology Policy Analyst with the Institute for Youth in Policy, focused on how emerging technologies shape markets, governance, and public outcomes. Based in Georgia, United States, he is especially interested in economics and development, and is passionate about policy that expands opportunity and long-run prosperity.

Queen-Aset Blisset

Policy Analyst

Ishita Jalan

Technology Analyst

Ishita Jalan is a high school student at Mission San Jose High School in Fremont, California, with a passion for education policy. She has written a policy proposal focused on improving educational equity in the Bay Area, which was formally presented to her district, and she hopes to eventually make a meaningful impact in shaping policies.

Kayla Peng

Tech Analyst

Kayla Peng is a student at the Kent Place School in New Jersey. She has experience in the intersections of technology, environmental advocacy, public policy, and journalism.

Diya Sharma

Technology Policy Analyst

Diya Sharma is a youth policy advocate. She has drafted and passed legislation at the district and state level, debated on the national circuit, and presented to national research conferences. She aspires to work at the intersection of technology and public policy.
