By Sidney Perkowitz
In an age of anxiety, the words sound so reassuring: predictive policing. The first half promises an awareness of events that have not yet occurred. The second half clarifies that the future in question will be one of safety and security. Together, they perfectly match the current obsession with big data and the mathematical prediction of human actions. They also address the current obsession with crime in the Western world – especially in the United States, where this year’s presidential campaign has whipsawed between calls for law and order and cries that black lives matter. A system that effectively anticipated future crime could allow an elusive reconciliation, protecting the innocent while making sure that only the truly guilty are targeted.
It is no surprise, then, that versions of predictive policing have been adopted (or soon will be) in Atlanta, New York, Philadelphia, Seattle and dozens of other US cities. These programs are finally putting the enticing promises to a real-world test. Based on statistical analysis of crime data and mathematical modelling of criminal activity, predictive policing is intended to forecast where and when crimes will happen. The seemingly unassailable goal is to use resources to fight crime and serve communities most effectively. Police departments and city administrations have welcomed this approach, believing it can substantially cut crime. William Bratton, who in September stepped down as commissioner of New York City’s police department – the nation’s biggest – calls it the future of policing.
But even if predictive policing cuts crime as claimed, which is open to question, it raises grave concerns about its impact on civil rights and minorities – especially after the fatal police shooting in 2014 of Michael Brown, an unarmed 18-year-old black man, in Ferguson, a suburb of St Louis in Missouri. Subsequent fatal encounters, including the deaths of several unarmed black citizens through police actions and a brutal ambush of police in Dallas, spotlight the troubled relations between police and minority communities. So does a recent Department of Justice report that found widespread racism in the operations of the Baltimore police department. Predictive policing bears on these issues because it promises police new ways to seek and scrutinise criminal suspects without unfairly singling out minority communities. Yet rather than allaying public concerns, it might end up increasing tensions between police and those communities.
The American Civil Liberties Union (ACLU) has issued multiple warnings that predictive policing could encourage racial profiling, and could finger individuals or groups selected by the authorities as crime-prone, or even criminal, without any crime having been committed. Equally troubling, the approach is motivated by the reductive dream of cleanly solving social problems with computers. Like any technology, predictive policing is subject to the ‘technological imperative’, the drive to carry a technology to its ultimate limits without considering its human costs. Our society is supposed to be based on fair and equal justice for all but, to many critics, predictive policing relies on a contrary vision of targeted justice, meted out according to where and how citizens happen to live as determined by computer algorithm.
Picture: Ciar (Own work) [Public domain], via Wikimedia Commons