The public media are replete with examples of algorithms, mathematics-based formulas that pervade our everyday lives, and predictive algorithms have now spread to the criminal justice arena. “Predictive policing” describes using data about past crimes to predict where future crimes will occur, who might commit them, and who might be victimized by them. This stems from the view of many criminologists that a relatively small pool of offenders is responsible for a disproportionately high percentage of crimes. Much like weather and earthquake forecasting, or epidemiologists’ predictions about the upcoming flu season, these criminologists and their computer science counterparts believe that historical data churned through computerized models can indicate where crime will occur, by whom, and against whom.
Where has predictive policing been used?
Chicago utilizes a “Strategic Subject List” of those most likely to be involved in future shootings, either as a shooter or victim, following which officers or social workers visit those on the high-risk list. Los Angeles uses “Operation LASER,” which assigns points to potential offenders, seeking to predict their chances of recidivism, along with PredPol, which predicts where property crimes will occur and allows patrol officers to target specific locales for extra surveillance. Here in Pittsburgh, a team at Carnegie Mellon University has produced a predictive software tool called CrimeScan, a method of assessing areas for targeted policing.
While many tout the benefits of utilizing predictive technology to help reduce crime, others express concerns that predictive policing can exacerbate racial profiling and reinforce police strategies that target low-income or African American communities. Opponents of predictive policing also point out:
- These algorithms could encourage singling out suspects for crimes they have not yet committed;
- Individuals who are targeted by these computerized programs have no meaningful opportunity to challenge their inclusion on lists of suspects deemed most likely to re-offend;
- If the data used in predictive policing software is derived from past arrest and conviction information, racially-biased policing is simply perpetuated; and
- The proprietary nature of the software used in predictive policing reduces transparency about how particular names or locales become included on targeted lists, a secretiveness exacerbated by the use of machine learning to refine these lists.
Predictive policing software relies on algorithms that compile large amounts of data and analyze it for correlations. However, these algorithms are carefully guarded by the for-profit companies that want to protect their intellectual property. Because these algorithms operate as “black boxes,” opaque to oversight, many suggest implementing “Algorithmic Impact Assessments” (AIAs). AIAs could permit the public to better understand the impact of these systems on their communities, injecting a greater level of accountability, and would make it easier for outside researchers to analyze predictive policing software for potential problems and biases.
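None of the commercial tools described above publish their algorithms, but the core idea of place-based prediction, correlating historical incident locations with future risk, can be illustrated with a toy sketch. Everything below (the grid cells, the sample data, the simple count-and-rank rule) is invented for illustration and does not reflect how any actual product works:

```python
from collections import Counter

def rank_hotspots(incidents, top_n=3):
    """Rank map grid cells by historical incident count.

    incidents: list of (x, y) grid-cell coordinates of past crimes.
    Returns the top_n cells, most incidents first.
    """
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

# Hypothetical historical data: each tuple is a grid cell where a past
# property crime was recorded.
history = [(2, 3), (2, 3), (2, 3), (5, 1), (5, 1), (0, 0)]

print(rank_hotspots(history, top_n=2))  # -> [(2, 3), (5, 1)]
```

Even this trivial sketch makes the opponents’ bias concern concrete: if the historical data reflects where police made arrests rather than where crime actually occurred, the “hotspots” the model flags simply reproduce past enforcement patterns. An AIA-style audit would examine exactly this kind of input data and ranking logic, which proprietary secrecy currently hides.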
In some cities, activists and groups have brought legal challenges to the use of predictive software. For example, in New York City, plaintiffs have brought an action under the Freedom of Information Law, seeking records related to predictive policing products or services used by the city’s police. The plaintiffs want to know: How did the city choose this particular software? How was it tested? How is it used?
If you are charged with a crime, your criminal defense attorney will help ascertain whether predictive policing techniques were used in the investigation, and whether they pass constitutional and legal muster. Contact Wyland Law Group for a free consultation at 412-710-0013.