I. Overview
Predictive policing is the practice of forecasting where crime is likely to occur by harnessing information, geospatial models, and evidence-based intervention models in order to reduce crime, enhance public safety, and support proactive prevention. The discipline of crime analysis first appeared in 1842, when the London Metropolitan Police began using pattern recognition to solve and prevent crimes. By 1900, the U.S. federal government had begun collecting national data on death rates (including homicide rates) and additional metrics, such as prison population rates, marking a transformation in the scale and systematization of national data collection.
In 1930, a pivotal moment occurred when the FBI was given the authority to collect and distribute crime data, allowing crime analysis to advance substantially over the course of the 20th century. However, obstacles such as technical and personnel demands prevented many police departments from fully adopting mapping and hot-spot analysis.
The modern era has dramatically reshaped this landscape. The availability of extensive datasets, improved data storage, and sophisticated software has enabled law enforcement to make crime mapping and predictive policing central components of modern policing. In addition, growing collaboration among police officers, scholars, and businesses has accelerated the development of analytical techniques and encouraged their widespread use. This policy brief examines the essential components of predictive policing and how these tools can be used to improve public safety and reduce crime.
II. History
A. Current Stances
Proponents of predictive policing believe that it helps forecast crimes more accurately and effectively than traditional methods. Since predictive policing uses data-driven techniques and evidence-based intervention models, advocates say that it allows police departments to be proactive rather than reactive in their approaches. The companies that create predictive policing models also claim that this data-based approach can help remove bias from police decision-making: they argue that relying on data keeps policing decisions from being swayed by personal influences, allowing for more objective actions. Finally, advocates say that predictive policing can be more cost-efficient for police departments because a proactive approach lets them confront issues before they become larger problems, which in turn requires fewer staffed personnel in each station and lowers costs.
Opponents of predictive policing argue that it often comes with a lack of transparency and accountability. Many police departments that have implemented predictive policing programs have not been transparent about the details and data used to train these programs. Additionally, no data is kept about how crime predictions end up being used. This creates a large data gap that makes it difficult to hold these departments accountable.
Research has found that current predictive policing models run the risk of using “dirty data.” When algorithms rely on historical data that does not correct for problems of undercoverage, the models trained on those datasets risk perpetuating bias, for example through racially or demographically targeted arrests. Predictive policing can give racially biased policing methods the appearance of objectivity, even when that objectivity is not real. Although the companies claim that they use diverse datasets, opponents point out that when past technologies such as facial recognition and social media monitoring were adopted, adequate safeguards had not been fully implemented either.
Finally, dissenters argue that predictive policing violates people’s Fourth Amendment rights. The Fourth Amendment requires officers to have “probable cause” to stop and investigate someone, and predictive policing may make it easier for officers to claim that this standard has been met.
B. Tried Policy
Many major metropolitan police departments have attempted to implement different predictive policing models with varying degrees of success.
The Los Angeles Police Department (LAPD) began working with federal agencies to implement predictive policing in 2008. Since then, it has tried several programs, including LASER and PredPol, which were funded by the federal Bureau of Justice Assistance. LASER identified specific areas where gun violence was deemed most likely to occur, while PredPol calculated “hot spots” for property-related crimes. LASER was shut down in 2019 after internal audits found inconsistencies in how individuals were being selected and treated, and some of LAPD’s stations have since stopped using PredPol due to similar concerns.
The New York Police Department (NYPD) started testing predictive policing programs in 2012. The original trials involved programs from Azavea (based in Philadelphia), KeyStats (based in Bronxville, NY), and PredPol (based in Santa Cruz, CA). In 2013, the NYPD developed and began employing its own predictive policing algorithms, which included categories for shootings, burglaries, felony assaults, grand larcenies, grand larcenies of motor vehicles, robberies, and more. Since it first started predictive policing trials, the NYPD has been incredibly secretive about what goes into these algorithms. In fact, even the names of the companies it was trialing were not made public until the NYPD was ordered to release this information in 2017, following a lawsuit by the Brennan Center for Justice. This secrecy raises questions about potential racial biases that may be embedded in the datasets.
The Chicago Police Department (CPD) piloted a large person-based predictive policing program in 2012. This program, known as the “Heat List” or “Strategic Subjects List,” generated a list of people considered most likely to commit gun violence or to become victims of it. Chicago’s program is unique in that it did not just identify people for the police to visit; the CPD also intended to bring social workers and other supports to the individuals identified. In 2016, a RAND Corporation analysis of the program found that it was largely ineffective and did not actually save any lives. The Strategic Subjects List was also said to have disproportionately targeted people who had been arrested or fingerprinted in Chicago since 2013. In January 2020, shortly before the program was shelved, Chicago’s Office of Inspector General found that the Strategic Subjects List relied heavily on arrest records to identify risk factors, even when there was no evidence of future risky behavior.
III. Policy Problem
A. Stakeholders
Predictive policing involves three principal stakeholder groups: community members, law enforcement agencies, and the developers of the underlying technology and data analysis tools. These groups interact to incorporate community feedback, establish oversight, and guide the research that fuels technological advances and combats crime-related issues. An effective predictive policing mechanism should increase public safety by addressing community concerns and improving efficiency in crime detection.
To achieve this effectiveness, residents in particular must be prioritized so as to build trust among all stakeholders involved in the predictive policing development process. This not only builds beneficial rapport and relationships among community members, but also helps them feel more compelled to work with these groups to identify the root causes of crime, participate in data analysis, and share perspectives crucial to the process.
Further, reaching out to diverse stakeholder groups in the field, including legal professionals, specialized groups, and local governments, is paramount to addressing any ethical or structural issues that might otherwise be missed.
B. Risk of Indifference
Inaction in the face of the inherent biases embedded in historical crime data poses a significant risk to the legitimacy and efficacy of predictive policing: the risk of indifference. When law enforcement and developers are indifferent to the fact that past policing practices, which often resulted in disproportionate surveillance and arrests of individuals in marginalized communities, feed the current algorithms, the predictive models simply amplify existing disparities. This indifference creates a self-perpetuating feedback loop: biased data leads to biased predictions, which generate more data that reinforces the flawed assumptions. Further, indifference to how these predictions are generated, or a lack of transparency about how the algorithms and data analysis operate, can quickly erode the trust essential to predictive policing. If residents are unable to understand why their neighborhood is consistently designated a “hotspot,” they may view the technology not as a safety tool but as a mechanism for targeted surveillance, leading to non-cooperation, hostility, and the failure of the entire crime prevention strategy. This has already happened with software like PredPol, which was found to lead police to patrol already over-policed communities even more heavily.
C. Nonpartisan Reasoning
Regardless of one’s political affiliation, the goal of creating safer communities and ensuring equitable outcomes is non-negotiable. To best achieve this shared goal, it is ethically essential that crime prevention strategies be nonpartisan, evidence-based, and community-centered. The reality of a crisis call is that afflicted community members, especially the most vulnerable groups, need help, while law enforcement is left to deal with the aftermath of preventable harm. This perpetual state of reaction is inefficient and highlights a systemic failure to address the root conditions that produce crime. Thus, predictive policing, when implemented with all stakeholders in mind, is paramount to addressing issues before they escalate and harm communities. This proactive effort can be the key to safeguarding community well-being.
IV. Policy Options
In considering policy options, predictive policing must balance numerous competing priorities. Its primary focus is using data to make predictions about crime, yet these technological tools must not violate privacy or civil liberties. Thus, the quality of the data and any potential biases involved in its collection or analysis are extremely important to recognize. As the National Institute of Justice notes, “predictive policing is not meant to replace tried-and-true police techniques. It builds on the essential elements of all policing strategies for the greater good” (NIJ Journal).
Current analytical tools and techniques used for predictive policing certainly have room for growth. They consist mainly of hot spot analysis, geospatial prediction, data mining, and social network analysis. This section takes a deeper look at each technique and examines how each can support policing that is both effective and ethical.
Hot spot analysis consists of identifying locations where crime is concentrated based on past incident data, followed by an increase in patrols and monitoring. The benefit is that it allows for efficient use of police resources: officers can reduce response times, which may also lead to more arrests. However, hot spot analysis can also reinforce biases toward specific neighborhoods, which may result in a loss of trust within those communities.
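To make the counting step behind hot spot analysis concrete, the sketch below groups hypothetical incident coordinates into grid cells and ranks the cells with the most past incidents. The coordinates, cell size, and number of cells reported are illustrative assumptions, not values from any department’s system.

```python
from collections import Counter

# Hypothetical past incidents as (x, y) coordinates in kilometers within a
# city's bounding box; a real analysis would draw on a department's records.
incidents = [(1.2, 3.4), (1.3, 3.5), (1.1, 3.6), (4.8, 0.9), (4.9, 1.0), (1.2, 3.3)]

CELL_SIZE_KM = 0.5  # side length of each grid cell (assumed)

def cell_of(point):
    """Map a coordinate to the grid cell that contains it."""
    x, y = point
    return (int(x // CELL_SIZE_KM), int(y // CELL_SIZE_KM))

# Count incidents per cell and surface the highest-count cells as "hot spots."
counts = Counter(cell_of(p) for p in incidents)
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} incidents")
```

Even a toy example like this makes the bias concern visible: the cells that rank highest are simply the cells where the most incidents were recorded, so any skew in past recording carries straight into the patrol recommendations.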
Geospatial prediction functions similarly to hot spot analysis. However, rather than focusing on specific locations where crime is concentrated, geospatial prediction focuses on predicting crime patterns across entire cities. This technique can identify crime hot spots and can be overlaid with other datasets to analyze underlying causes of crime. Still, the ability of geospatial prediction to function properly depends largely on the quality of the data the system is fed. As such, heavy consideration must be given to the collection of the data that geospatial algorithms use, to ensure both accuracy and impartiality.
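As a rough illustration of how a citywide risk surface differs from simple hot spot counting, the sketch below smooths hypothetical incident locations with a Gaussian kernel and evaluates the result over an entire grid. The bandwidth, grid resolution, city extent, and coordinates are assumptions made for the example.

```python
import math

# Hypothetical incident coordinates (in km) standing in for real records.
incidents = [(1.2, 3.4), (1.3, 3.5), (4.8, 0.9)]
BANDWIDTH_KM = 0.8   # smoothing radius: larger values spread risk further (assumed)
GRID_KM = 0.5        # resolution of the output risk surface (assumed)
CITY_EXTENT_KM = 6   # square city, 6 km on a side, for simplicity (assumed)

def kernel_density(x, y):
    """Gaussian-kernel density of past incidents at the point (x, y)."""
    return sum(
        math.exp(-((x - ix) ** 2 + (y - iy) ** 2) / (2 * BANDWIDTH_KM ** 2))
        for ix, iy in incidents
    )

# Evaluate the smoothed surface across the whole city, not just known hot spots;
# other layers (lighting, vacancy, income data) could be overlaid on the same grid.
steps = int(CITY_EXTENT_KM / GRID_KM)
surface = {}
for i in range(steps):
    for j in range(steps):
        x, y = (i + 0.5) * GRID_KM, (j + 0.5) * GRID_KM
        surface[(i, j)] = kernel_density(x, y)

peak = max(surface, key=surface.get)
print(f"highest predicted-risk cell: {peak} (score {surface[peak]:.2f})")
```

Because every score is derived from the recorded incidents, the quality concern noted above applies directly: gaps or over-reporting in the input data reshape the entire surface.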
Data mining uses algorithms to find patterns in large data sets that can help uncover obscure links between people and crimes. This technique can help proactively monitor people and build data for the future, but it comes with a high risk of privacy invasion and of biases affecting the data. Additionally, the use of such algorithms always raises accountability and transparency issues that can infringe upon civil rights.
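The sketch below shows one elementary form of this kind of pattern search: counting how often pairs of people co-occur across hypothetical incident records and keeping only the pairs that clear a support threshold. The record contents and the threshold are invented for illustration; production systems use far richer data and more sophisticated algorithms.

```python
from collections import Counter
from itertools import combinations

# Hypothetical incident records: each maps an incident ID to the set of people
# linked to it. These are placeholder values, not real records.
incidents = {
    "case_001": {"A", "B"},
    "case_002": {"A", "B", "C"},
    "case_003": {"B", "C"},
    "case_004": {"A", "D"},
}

# Count how often each pair of people appears together across incidents,
# a minimal form of frequent-pattern (association) mining.
pair_counts = Counter()
for people in incidents.values():
    for pair in combinations(sorted(people), 2):
        pair_counts[pair] += 1

MIN_SUPPORT = 2  # only report pairs seen in at least this many incidents (assumed)
frequent_pairs = {p: n for p, n in pair_counts.items() if n >= MIN_SUPPORT}
print(frequent_pairs)  # e.g., {('A', 'B'): 2, ('B', 'C'): 2}
```

The privacy concern raised above is easy to see even here: the output is a list of associations between named individuals, generated without any of them being suspected of a specific offense.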
Social network analysis uses statistical tools to analyze how individual actors are affected by those around them and how those relationships shape real-world behavior. The analysis of human networks offers insight into how connections between individuals and groups can transmit or foster criminal activity. Data used for social network analysis can come from a variety of places, such as crime records, surveys, emails, or interviews. As with geospatial prediction, the quality of the data used for social network analysis is highly important.
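A minimal sketch of the technique, assuming a hypothetical set of co-occurrence records and using the networkx library, is shown below; degree centrality is only one of many measures analysts might compute.

```python
import networkx as nx  # widely used Python library for network analysis

# Hypothetical edges: each pair represents two individuals who appear together
# in the same record (e.g., a shared incident report). Placeholder data only.
co_occurrences = [
    ("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"),
]

graph = nx.Graph()
graph.add_edges_from(co_occurrences)

# Degree centrality: the share of other nodes each person is connected to.
# Analysts use scores like this to flag individuals whose position in a network
# may transmit or concentrate risk; the score itself says nothing about guilt.
centrality = nx.degree_centrality(graph)
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```

As with the other techniques, the scores are only as meaningful as the records behind the edges; missing or over-collected links change who appears central.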
A few elements are essential to keeping predictive policing ethical when employing these techniques. First, members of the community should be involved in the process. Police departments must provide transparency to communities, especially to ensure that predictive policing does not violate civil rights and that the elements of a just arrest (such as probable cause) remain a requirement. To ensure transparency, law enforcement agencies should be required to disclose all use of predictive policing tools to the general public, including data sources, impact assessments, and methodologies. Officials should also hold extensive consultations with independent individuals and vulnerable communities when discussing the implementation of predictive policing techniques.
Additionally, human oversight of algorithms and any other systems used must be maintained. It would be highly irresponsible to leave predictive policing solely to the discretion of algorithms and machines. Technology often requires human involvement to recognize biases, especially if it has been trained on a dataset containing such biases. To minimize human biases in turn, algorithms used for predictive policing should not be built only by the police departments looking to use them; they should be created with the input of independent experts, state officials, and representatives of the communities that may be affected. To ensure the long-term efficacy and impartiality of these algorithms and of the departments using them, regular auditing should take place. Departments should continuously monitor the performance of their predictive policing systems and adjust for any issues or biases that arise. To ensure that departments take the necessary precautions when engaging with this technology, legislation could be enacted to establish a legal framework mandating these steps.
V. Conclusion
There is a growing emphasis on utilizing predictive policing to address local safety concerns. Predictive policing uses data and analytical tools to identify areas where crime is more likely to occur, which allows law enforcement to allocate resources more efficiently. Community involvement through meetings, feedback sessions, and public communication helps ensure that residents understand how these tools guide policing strategies. At the same time, collaboration within law enforcement and with community members to share information strengthens trust and improves overall effectiveness. Together, these joint efforts support a more coordinated approach to preventing crime and promoting neighborhood safety.

