Values and Consequences in Predictive Machine Evaluation: A Sociology of Predictive Policing
Abstract
Predictive policing is a research field whose principal aim is to develop machines for predicting crimes, drawing on machine learning algorithms and the growing availability of diverse data sources. This paper examines the algorithm of PredPol, the best-known startup in predictive policing. The mathematicians behind it took their inspiration from an algorithm created by a French seismologist, a professor of earth sciences at the University of Savoie. Because the source code of the PredPol platform is protected as a trade secret, the author contacted the seismologist directly in an attempt to understand the company's predictions. Applying the same method of calculation to the same data, the seismologist arrived at a different, more cautious interpretation of the algorithm's capacity to predict crime. How were these predictive analyses formed on the two sides of the Atlantic? How do predictive algorithms come to exist differently in these different contexts? How and why can predictive machines foretell a crime yet to be committed in a California laboratory, and yet no longer work in another laboratory in Chambéry? In answering these questions, I found that machine learning researchers hold a moral vision of their own activity, one that can be understood by analyzing the values and material consequences involved in the evaluation tests used to create the predictions.