The Pitfalls of Surveillance Technology: Policy Analysis and Solutions

This brief analyzes the pitfalls of surveillance technology and how to counteract biased systems in an increasingly digitized world.

At YIP, nuanced policy briefs emerge from the collaboration of six diverse, nonpartisan students.


Executive Summary

This brief analyzes the deficiencies in facial recognition technology used in the criminal justice system, from how it disproportionately affects people of color to its violation of people’s privacy rights. This brief also explores solutions to minimize inaccuracies in these recognition systems and thus prevent wrongful incrimination.

Overview

A. Pointed Summary
B. Relevance

Facial recognition technology has become increasingly relevant due to its inaccuracies and the issues they create. Former Detroit police chief James Craig acknowledged that if the city relied on facial recognition alone, it would misidentify suspects roughly 96% of the time. These misidentifications disproportionately involve minorities, with six out of seven cases involving a Black person. A study by the National Institute of Standards and Technology found that people of color are ten to one hundred times more likely to be misidentified than White people, and that the algorithms struggled to distinguish facial structures on darker skin tones. Facial recognition technology also raises privacy issues: Georgetown University’s Center on Privacy & Technology estimates that one in two American adults has an image stored in a law enforcement facial recognition network.

The fight to ban facial recognition technology has received national attention due to the many instances of misidentification of people of color. Human Rights Watch and 180 other groups have called on governments and companies to stop using this technology.

History

Facial recognition, a form of surveillance technology, is widely used in the criminal justice system, which makes its potential biases especially consequential for those it disproportionately affects. According to a 2011 study by the National Institute of Standards and Technology, “who makes the software [for surveillance technologies such as facial recognition] strongly affects how it works”: facial recognition software built predominantly by people of one demographic tends to work most accurately on people of that demographic. Given the technology sector’s lack of diversity, both historically and today, this finding is troubling. Researchers such as Joy Buolamwini of the MIT Media Lab therefore argue that facial recognition software disproportionately misidentifies “black faces” because the algorithms are written largely by white engineers and learn from “training data” composed mostly of white faces.

A. Current Stances

In 2012, a report from the Federal Bureau of Investigation highlighted facial recognition software’s disproportionate effect on Black individuals. In 2021, “a coalition of grassroots organizations” called on Congress to take urgent action against facial recognition, and the FBI’s use of the technology during the Black Lives Matter protests drew further debate. Several wrongful accusations across the country caused by facial recognition have also contributed to widespread public concern about the racial bias of surveillance technologies.

B. Tried Policy

In their published report on the ethics of artificial intelligence, Joy Buolamwini and her research partner Timnit Gebru, a former co-lead of Google’s ethical AI team, found a maximum classification error of 34.7% for women with darker skin in the commercial gender-classification systems of Microsoft, Face++, and IBM. Within about a year of the report’s publication, the companies had reduced their error rates. The study influenced the Algorithmic Accountability Act of 2019, a U.S. bill that would require companies to “conduct automated decision system impact assessments and data protection impact assessments.” It also contributed to bans on police use of facial recognition in cities such as San Francisco, Portland, and Boston. In addition, the U.S. Commission on Civil Rights is conducting an ongoing assessment of how “government agencies and law enforcement” train, use, and test facial recognition. Another bill, the Facial Recognition Act of 2023, aims to regulate law enforcement’s use of facial recognition, but it remains in the early stages and has not yet been considered in committee.
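
The finding above rests on a simple methodological step: reporting classification error disaggregated by demographic subgroup rather than as a single aggregate figure. The sketch below illustrates that idea; it is a minimal illustration under assumptions, not the original study’s code, and the records and group labels are hypothetical.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Classification error rate per demographic subgroup.

    records: iterable of (group, true_label, predicted_label) tuples.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if pred != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit records: (subgroup, true label, model prediction).
sample = [
    ("darker-skinned women", "F", "M"),
    ("darker-skinned women", "F", "F"),
    ("darker-skinned women", "F", "M"),
    ("lighter-skinned men", "M", "M"),
    ("lighter-skinned men", "M", "M"),
]

for group, rate in sorted(error_rates_by_group(sample).items()):
    print(f"{group}: {rate:.1%} error rate")
```

A single blended error rate over this sample would hide the gap that the per-group breakdown exposes, which is precisely why disaggregated reporting matters for audits like Buolamwini and Gebru’s.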

Policy Problem

A. Stakeholders

The most prevalent stakeholders in the issue of biased surveillance technology are the victims of that bias. According to the American Civil Liberties Union, surveillance systems work best for middle-aged White men and less well for women, people of color, and the young and old. These technologies perform worst for Black women and can lead to wrongful incrimination. Other key stakeholders include law enforcement officials, who may misuse these technologies: officials seeking convictions may rely on faulty systems, and policy governing the technology would directly shape their practices.

B. Risks of Indifference

Without surveillance technology reform, the criminal justice system faces an increase in wrongful detention along gender and racial lines. Widening access to these technologies increases the risk of unjust incrimination of targeted communities. According to the U.S. Department of Health and Human Services, incarceration harms not only incarcerated people but also the communities they are close to or part of. Minimizing unjust incrimination would reduce these harms and foster flourishing communities.

C. Nonpartisan Reasoning

Many argue that surveillance technology is highly beneficial for prosecuting criminals. Proponents contend that surveillance technologies enhance public safety by deterring crime, assisting law enforcement investigations, and improving emergency response times. Surveillance tools provide law enforcement agencies with valuable evidence for investigating crimes and apprehending suspects, potentially leading to higher but more accurate conviction rates, thereby reducing crime and enhancing public security. Additionally, surveillance technology can be a cost-effective way to increase the efficiency of police departments through better allocation of resources and personnel. Many therefore encourage technical advancement within the system, believing surveillance technology can be used for a net good.

However, there are growing worries about bias and discrimination stemming from surveillance technology, an especially pressing issue for people of color. According to the Brookings Institution, people of color are disproportionately affected by surveillance technology and facial recognition, raising alarm over false and racially biased prosecutions.

Moreover, there are privacy concerns, as personal information becomes publicly available. These developments raise concerns about the erosion of individual privacy rights, particularly the widespread collection and analysis of personal data without consent. Although some websites and applications require users to accept a privacy policy before accessing their content, few people understand what they are consenting to. For these reasons, many hesitate to trust surveillance technology, given its capacity for exploitation and false accusation.

Policy Options

Presently, law enforcement can run identification programs without limitation, exacerbating existing racial inequalities in policing. Legislation requiring police to obtain a warrant before deploying any biometric AI system that identifies suspects through digital footage would secure suspects’ digital rights. Such policy would also require officers to show reasonable suspicion that a person committed a crime before a warrant is issued, mirroring the standards governing intrusions into private spaces. These processes protect the Fourth Amendment rights of all citizens, preventing police forces from pursuing convictions without a reasonable foundation. A legislative solution would keep political and social applications of the technology from interfering with justice, prioritizing evidence over prejudice.

To effectively empower the local communities most affected by surveillance decisions, the American Civil Liberties Union recommends a community-control approach: the public receives substantial information about any proposed biometric technology and the opportunity to voice opinions before elected officials, and those representatives, accounting for their constituents’ positions, determine the limits of surveillance technology. Many states and municipalities have already enacted laws prohibiting government use of facial recognition (with certain exceptions); more than seven states, including Maine and Virginia, have passed such legislation.

Private companies face few legal limitations when collecting users' personal information and often fail to implement security safeguards. Uniform transparency must be required in surveillance technology, including disclosure of the methodologies and data sources behind face recognition algorithms. Incentivizing, or even mandating, the publication of processes and results could create an environment of greater clarity within the surveillance industry. Currently, private companies operate under a patchwork of state privacy laws: Clearview AI, for instance, offers special privacy protections only to residents of California and Illinois, leaving residents of the other 48 states unprotected. Government action is vital to rectify this; legislators should pass protections for all U.S. residents, especially those most vulnerable to the threats of AI recognition.
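
As one hedged illustration of what uniform transparency could look like, vendors might be required to publish a machine-readable disclosure alongside each system. The sketch below is hypothetical: every field name and value is an assumption made for illustration, not an existing standard or regulation.

```python
import json

# A minimal sketch of a machine-readable transparency disclosure that a
# vendor could be required to publish. All field names and values are
# illustrative assumptions, not an existing standard or regulation.
disclosure = {
    "system_name": "ExampleFaceMatch v2.1",  # hypothetical product
    "intended_use": "1:N identification from still images",
    "training_data_sources": ["licensed photo archives", "public datasets"],
    "demographic_breakdown_published": True,
    "independent_audit": {
        "completed": True,
        "error_rates_reported_by_subgroup": True,
    },
    "known_limitations": ["reduced accuracy on low-light footage"],
}

print(json.dumps(disclosure, indent=2))
```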

Furthermore, the datasets that private companies hold can contribute to facial recognition’s disproportionate impact on people of color. For instance, as a result of historically racially biased policing practices, mugshot databases contain a disproportionately high number of images of African Americans, Latinos, and immigrants. Datasets also vary between private companies, increasing the possibility that both data-driven and scenario-modeling factors contribute to racial bias.

Rectifying the failures of facial recognition technology is achievable through holistic approaches that address both technical and societal challenges. Policymakers must prioritize policies that improve the diversity of training datasets and the accuracy of facial recognition algorithms. By integrating diverse ethnic datasets, training data becomes more representative and more likely to produce accurate, unbiased facial recognition systems.
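
As a rough sketch of what balancing training data across groups can mean in practice, the snippet below downsamples each demographic group to the size of the smallest one before training. The strategy (downsampling) and the record format are illustrative assumptions; reweighting or targeted data collection may be preferable in real pipelines.

```python
import random
from collections import defaultdict

def balance_by_group(samples, seed=0):
    """Downsample each demographic group to the size of the smallest.

    samples: list of (group, item) pairs; in a real pipeline `item`
    would be a face image with its labels.
    """
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for group, item in samples:
        by_group[group].append(item)
    # Every group contributes the same number of examples.
    target = min(len(items) for items in by_group.values())
    balanced = [
        (group, item)
        for group, items in by_group.items()
        for item in rng.sample(items, target)
    ]
    rng.shuffle(balanced)
    return balanced
```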

Additionally, third-party audits of algorithms should be standard procedure, since examiners outside the private sector can assess the risks and likelihood of misidentification through an impartial lens. Oversight should rest with local and international regulatory institutions. Such audits can also detect the cases of misidentification and false alarms that have been documented repeatedly due to the weaknesses of facial recognition technology. Integrating these audits would improve the accuracy of the technology and, in turn, public trust.
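
One core check such an audit might run, sketched here under assumptions: measure each group’s false match rate on impostor comparisons and flag any group whose rate exceeds the best-performing group’s by more than a set tolerance. The record format and the tolerance criterion are hypothetical, not a regulatory standard.

```python
from collections import defaultdict

def false_match_rates(records):
    """False match rate per group: impostor pairs the system accepted.

    records: iterable of (group, is_genuine_match, system_matched) tuples.
    """
    trials, false_matches = defaultdict(int), defaultdict(int)
    for group, genuine, matched in records:
        if not genuine:  # only impostor comparisons count toward FMR
            trials[group] += 1
            if matched:
                false_matches[group] += 1
    return {group: false_matches[group] / trials[group] for group in trials}

def flag_disparities(rates, tolerance=2.0):
    """Flag groups whose false match rate exceeds the best-performing
    group's by more than `tolerance` times (an illustrative criterion)."""
    best = min(rates.values())
    if best == 0:
        return [group for group, rate in rates.items() if rate > 0]
    return [group for group, rate in rates.items() if rate / best > tolerance]
```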

Conclusions

By reinforcing Fourth Amendment rights, proposing local legislation, and emphasizing transparency, policymakers can counteract the biases associated with surveillance technology. As technology and AI grow increasingly complex, it is especially imperative to mitigate the biases they can produce as a byproduct. Technology should be used as a tool for good, and policymakers must commit to this principle by demanding diversity in the data and design of surveillance systems. Security does not just mean surveilling the people; it requires that people feel confident in their government and believe that it is acting in their best interest.

Acknowledgment

The Institute for Youth in Policy wishes to acknowledge Michelle Liou, Joy Park, Nolan Ezzet, and other contributors for developing and maintaining the Policy Department within the Institute.

References

  1. Alms, Natalie. 2024. “Civil Rights Commission Digs Into Government Use of Facial Recognition.” Government Executive, March 12, 2024. https://www.govexec.com/management/2024/03/civil-rights-commission-digs-government-use-facial-recognition/394880/
  2. Birhane, Abeba. 2022. “The Unseen Black Faces of AI Algorithms.” Nature 610 (7932): 451–52. https://doi.org/10.1038/d41586-022-03050-7
  3. Chen, Brian X. 2022. “Security Cameras Make Us Feel Safe, but Are They Worth the Invasion?” The New York Times, November 2, 2022. https://www.nytimes.com/2022/11/02/technology/personaltech/security-cameras-surveillance-privacy.html
  4. Crockford, Kade. 2023. “How Is Face Recognition Surveillance Technology Racist?” American Civil Liberties Union, July 17, 2023. https://www.aclu.org/news/privacy-technology/how-is-face-recognition-surveillance-technology-racist
  5. Del Villar, Ashley, and Myaisha Hayes. 2023. “How Face Recognition Fuels Racist Systems of Policing and Immigration — and Why Congress Must Act Now.” American Civil Liberties Union, February 24, 2023. https://www.aclu.org/news/privacy-technology/how-face-recognition-fuels-racist-systems-of-policing-and-immigration-and-why-congress-must-act-now
  6. “Facial Recognition Act of 2023 (H.R. 6092).” n.d. GovTrack.us. https://www.govtrack.us/congress/bills/118/hr6092
  7. Hao, Karen. 2022. “We Read the Paper That Forced Timnit Gebru Out of Google. Here’s What It Says.” MIT Technology Review, January 10, 2022. https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
  8. “How Surveillance Cameras Can Help Prevent and Solve Crime.” 2013. Urban Institute. https://www.urban.org/urban-wire/how-surveillance-cameras-can-help-prevent-and-solve-crime
  9. “Incarceration.” n.d. Healthy People 2030, U.S. Department of Health and Human Services. Accessed April 25, 2024. https://health.gov/healthypeople/priority-areas/social-determinants-health/literature-summaries/incarceration
  10. “Police Surveillance and Facial Recognition: Why Data Privacy Is Imperative for Communities of Color.” 2022. Brookings Institution. https://www.brookings.edu/articles/police-surveillance-and-facial-recognition-why-data-privacy-is-an-imperative-for-communities-of-color/
  11. Guardian Staff. 2017. “How White Engineers Built Racist Code – and Why It’s Dangerous for Black People.” The Guardian, December 5, 2017. https://www.theguardian.com/technology/2017/dec/04/racist-facial-recognition-white-coders-black-people-police?CMP=share_btn_url
  12. Slobogin, Christopher, and Sarah Brayne. 2022. “Surveillance Technologies and Constitutional Law.” Annual Review of Criminology 6 (1). https://doi.org/10.1146/annurev-criminol-030421-035102
  13. “Study Finds Gender and Skin-Type Bias in Commercial Artificial-Intelligence Systems.” 2018. MIT News, February 11, 2018. https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212

Policy Brief Authors

Aarya Chowdhry

Criminal Justice Team Lead

Aarya currently co-leads Criminal Justice Policy in the Policy department and is an Outreach Intern in the Education department at YIP. Fortunate to call Kanpur, India, her hometown, she is an avid reader, learner and poet.

Anagha Nagesh

Director of Policy

Anagha is a current student at John P. Stevens High School in New Jersey. She joined YIP in the Spring 2023 fellowship, served as a Criminal Justice Policy Lead, and now is the Director of Policy Media. She hopes to pursue political science or policy in college. In her free time, she likes to sing, act, and travel.

Taylor Luna

Policy Analyst

Gauri Vaidya

Gauri is passionate about developmental policy, photography, and political engagement and discourse. Her areas of interest include the Middle East, Post-Colonial South Asia and conflict studies.

Naomi McKenna

Fall 2023 Fellow

Naomi McKenna is a high school student at Atholton High School in Columbia, Maryland, who will graduate in 2024.

Alayna Hassan

Alayna is studying natural sciences with a specialization in English in Sweden. She is very passionate about health, public policy, and social justice, among many other topics. In the future, she aspires to merge these interests to help create change for good. In her free time, she loves to read, binge-watch sitcoms, and do anything creative.

Emily Tsai

Policy Analyst

Emily is a passionate and inquisitive individual who finds joy in the simple act of reading. As a current junior, she has cultivated a deep interest in gender rights, criminal justice, and public policy.
