Towards “Automated Justice”

02/05/2018

Our world runs on the application of big data, algorithms, and artificial intelligence (AI) in many areas of our lives; social networks suggest whom to befriend, algorithms trade our stocks, and even romance is no longer a statistics-free zone. Big data coupled with algorithms has become a central theme of intelligence, security, defence, anti-terrorist, and crime policy efforts, as computers help the military find its targets and intelligence agencies justify their assessments on the basis of massive pre-emptive surveillance of public telecommunications networks, as revealed by Edward Snowden in 2013. The algorithms mining big data sets for intelligible interpretations are shaping new forms of knowledge production in the crime control domain as well. Law enforcement agencies increasingly use crime prediction software, e.g. PredPol (Santa Cruz, California), CompStat (New York), Precobs (Zürich, Munich), and Maprevelation (France), to allocate their resources, while criminal courts increasingly rely on sentencing prediction instruments and probation commissions on probation algorithms.

While computer scientists are concerned that growing demands on computing capacity, storage, and communications are approaching the limits set by physical laws in terms of reliability and economics, social scientists have not yet fully grasped the social and ethical implications of existing computing paradigms and of big data and automated decision-making systems. The question is not only how far the automation of governance will (and should) develop with powerful new computing alternatives, such as quantum, bio-inspired, and crowd computing, but also what socially destructive consequences of big data and automated decision-making are already at work in contemporary surveillance-based capitalist societies. We can already witness discrimination against the less affluent and less powerful parts of the population in several domains, e.g. in insurance, education, and employment, as well as in the crime and security domain. Furthermore, we can witness distortions of democratic processes, such as general elections. Similar to the mass-scale emotional contagion of Facebook users in a now infamous experiment, millions of individuals have been subjected to a kind of “political contagion” for various political ends based on social network data, as revealed by the Cambridge Analytica whistle-blowers in 2018.

The crime and security domain

Automated decision-making processes infringe particularly strongly upon fundamental liberties in policing and criminal justice settings, as these actors have been granted a monopoly over the use of physical force. Over the centuries of the post-Enlightenment era, very nuanced legal concepts and procedures have been designed to regulate the use of this force – we can trace this all the way back to Beccaria’s seminal treatise On Crimes and Punishments from 1764. What is at stake with automated decision-making processes based on AI is the very foundation of the legal procedures and concepts designed to regulate the use of physical force. Today, law enforcement agencies no longer operate only in the paradigm of the ex-post facto punishment system based on manifested criminal acts. They instead employ ex ante preventive measures based on assumed psychological states. The notion of a “sleeping terrorist” in post-9/11 German anti-terrorist legislation is a prominent example of such “algorithmic” identification of a “target”: the law criminalised remote preparatory criminal activities (and was subsequently struck down by the Federal Constitutional Court).

When law enforcement and intelligence agencies stockpile data – or outsource the harvesting of data to internet or telecom behemoths in exchange for a lenient regulatory regime – for profiling purposes or to predict the place and time of a future locus delicti commissi, the boundaries between the concepts of a “suspect”, an “accused”, and a “convict” begin to dissolve. When predictive policing software uses crime statistics on all types of crime, petty and nuisance crime gains more attention than “white-collar crime”. Low-level crime might help PredPol predict mapping coordinates for serious crimes, but this comes at the cost of “a nasty feedback loop”: police attention is disproportionately devoted to minorities, which as a result are “over-policed”. With an increased focus on specific groups, the police discover more crime, and in turn more data is generated, amplifying the apparent need for policing. Financial crime and other white-collar crime committed by the affluent financial elite then stays disproportionately under the radar relative to the scale of the damage it causes.
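The dynamics of such a loop can be made concrete with a deliberately crude simulation: two districts with identical underlying offending, where recorded crime follows patrol presence and next year’s patrols follow recorded crime. Everything in the sketch below – districts, rates, and the allocation rule – is invented for illustration and does not model PredPol or any deployed system.

```python
# Toy illustration of the "nasty feedback loop": recorded crime is driven by
# where officers look, and patrol allocation is driven by recorded crime.
# All numbers and rules here are invented for the example.

patrols = {"district_a": 11, "district_b": 10}   # a small initial imbalance
INCIDENTS_PER_PATROL = 5   # low-level offences are found wherever officers patrol

for year in range(1, 11):
    # Both districts have the same underlying offending in this toy model;
    # recorded crime depends only on how many officers are present.
    recorded = {d: INCIDENTS_PER_PATROL * n for d, n in patrols.items()}

    # The "predictive" step: next year's extra patrol unit goes to the district
    # with the most recorded crime -- which is simply the most patrolled one.
    hotspot = max(recorded, key=recorded.get)
    patrols[hotspot] += 1

    print(f"year {year:2d}: recorded={recorded}  patrols={patrols}")
```

Running the sketch shows the initial one-unit difference in patrols growing year after year, even though the two districts never differ in actual offending: the model keeps “confirming” its own allocation.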

In criminal justice settings, criminal courts and probation commissions are also increasingly using AI decision-making tools to predict the future crimes of those awaiting trial or parole, or to predict unwillingness to cooperate with the authorities in bail and parole procedures. For instance, the Arnold Foundation algorithm, which is being used in 21 jurisdictions in the USA, draws on 1.5 million criminal cases to predict defendants’ behaviour. Similarly, a study of 1.36 million pre-trial detention cases conducted by Stanford University scholars suggests that a computer can predict whether a suspect will flee or re-offend better than a human judge can.
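In outline, many pre-trial risk instruments are weighted checklists of criminal-history factors mapped to a score band rather than opaque machine learning. The sketch below is a hypothetical caricature of that shape; the factors, weights, and bands are invented and do not reproduce the Arnold Foundation instrument or any other deployed tool.

```python
# A caricature of a points-based pre-trial risk instrument. Factors, weights,
# and the banding threshold are invented for illustration only.

def risk_score(age_at_arrest: int, prior_convictions: int,
               prior_failures_to_appear: int, pending_charge: bool) -> int:
    """Sum a few weighted criminal-history factors into a single score."""
    score = 0
    score += 2 if age_at_arrest < 23 else 0
    score += min(prior_convictions, 3)             # capped contribution
    score += 2 * min(prior_failures_to_appear, 2)
    score += 1 if pending_charge else 0
    return score

# The decision-maker typically sees only the banded output, not the arithmetic.
score = risk_score(age_at_arrest=21, prior_convictions=2,
                   prior_failures_to_appear=1, pending_charge=True)
band = "high" if score >= 6 else "moderate" if score >= 3 else "low"
print(score, band)   # -> 7 high
```

The point of the caricature is that the arithmetic itself is trivial; the contested questions are which factors enter the score, how they were weighted, and on whose historical data.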

Data sets and algorithms are human artefacts, and automated decision systems may thus severely magnify human errors and flaws. Similar to “flash crashes” in algorithmically enhanced finance and high-frequency trading, in which 99% of a share’s value can be wiped out in a matter of minutes, automated decision systems can disproportionately send certain groups to prison. Relying too heavily on automated calculations of risk can set in motion a vicious circle of bad decisions and exacerbate and magnify existing social problems.

For instance, in a detailed assessment of the COMPAS recidivism algorithm, ProPublica showed that the system is biased against black individuals. Several scholars have warned that “automated governance” or “algorithmic governmentality” can amount to “social sorting on steroids”. This already encroaches on fundamental liberties, such as the right to non-discrimination, and it will inevitably reproduce inequality as long as data is collected from societies divided along economic, racial, gender, and ethnic lines. In criminal justice settings, AI is undermining the equality of arms in judicial proceedings, as well as the right to a fair trial, i.e. the right to a natural judge and the right to an independent and impartial tribunal. In balancing fairness and efficacy, the constitutional framing of liberal criminal procedure has always inclined towards fairness: it is better to let ten criminals escape prison than to sentence one innocent person to jail. This has long been one of the main distinctions between authoritarian and democratic political systems.
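The core of ProPublica’s finding was a disparity in error rates: defendants from one group who did not go on to re-offend were far more likely to have been labelled high risk than non-re-offending defendants from another. A check of that kind reduces to comparing false positive rates across groups, as in the sketch below; the counts are synthetic placeholders, not the ProPublica data.

```python
# Minimal sketch of an error-rate disparity check of the kind ProPublica ran
# on COMPAS. The counts below are synthetic placeholders for illustration.
from collections import Counter

# (group, predicted_high_risk, reoffended) -> number of defendants
outcomes = Counter({
    ("group_1", True,  False): 45,   # flagged high risk, did not re-offend
    ("group_1", True,  True):  55,
    ("group_1", False, False): 80,
    ("group_1", False, True):  20,
    ("group_2", True,  False): 20,
    ("group_2", True,  True):  55,
    ("group_2", False, False): 105,
    ("group_2", False, True):  20,
})

for group in ("group_1", "group_2"):
    false_pos = outcomes[(group, True, False)]   # labelled high risk, no re-offence
    true_neg = outcomes[(group, False, False)]   # labelled low risk, no re-offence
    fpr = false_pos / (false_pos + true_neg)
    print(f"{group}: false positive rate = {fpr:.2f}")
```

With these placeholder counts the two groups show false positive rates of 0.36 and 0.16 respectively: a tool can look accurate overall while its errors fall much more heavily on one group.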

From democracy to “algocracy”?

The aforementioned Facebook experiment on the massive-scale emotional contagion of its users clearly demonstrated how powerful the tools for inducing moods and sentiments have become. The recent Cambridge Analytica case showed that equally powerful tools exist for inducing “political contagion” in the public at large. “Automated justice” is thus only part of a wider trend towards “automated governance” that can distort democratic processes. What is at stake with the advent of automated decision-making tools based on AI is the rule of law, which is slowly being supplanted by the “rule of the algorithm”, as well as democracy itself, which may be replaced by “algocracy”.

References

  1. Ambasna-Jones, M. (2015, August 3). The smart home and a data underclass. The Guardian.
     
  2. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks. ProPublica.
     
  3. Barocas, S., & Selbst, A. D. (2015). Big Data’s Disparate Impact (SSRN Scholarly Paper No. ID 2477899). Rochester, NY: Social Science Research Network.
     
  4. Cohen, J. E., Hoofnagle, C. J., McGeveran, W., Ohm, P., Reidenberg, J. R., Richards, N. M., … Willis, L. E. (2015). Information Privacy Law Scholars’ Brief in Spokeo, Inc. v. Robins (SSRN Scholarly Paper No. ID 2656482). Rochester, NY: Social Science Research Network.
     
  5. Dewan, S. (2015, June 26). Judges Replacing Conjecture With Formula for Bail. The New York Times.
     
  6. Ekowo, M., & Palmer, I. (2016). The Promise and Peril of Predictive Analytics in Higher Education. New America.
     
  7. Kerr, I., & Earle, J. (2013). Prediction, Preemption, Presumption: How Big Data Threatens Big Picture Privacy. Stanford Law Review Online, 66(65), 65–72.
     
  8. Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017). Human Decisions and Machine Predictions (No. w23180). Cambridge, MA: National Bureau of Economic Research.
     
  9. Koumoutsakos, P. (2017, December 8). After Digital? Emerging Computing Paradigms. Workshop, Collegium Helveticum, Zürich.
     
  10. Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790.
     
  11. Lewis, P., & Hilder, P. (2018, March 23). Leaked: Cambridge Analytica’s blueprint for Trump victory. The Guardian.
     
  12. Lyon, D. (2017, April 6). Big Data Vulnerabilities: Social Sorting on Steroids? Lecture at the Vulnerability Conference, University of Copenhagen.
     
  13. Meek, A. (2015, September 14). Data could be the real draw of the internet of things – but for whom? The Guardian.
     
  14. Morozov, E. (2013). To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems that Don’t Exist. London: Allen Lane.
     
  15. O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
     
  16. Rouvroy, A., & Berns, T. (2013). Gouvernementalité algorithmique et perspectives d’émancipation. Réseaux, 177(1), 163–196.
     
  17. Selingo, J. (2017, November 4). How Colleges Use Big Data to Target the Students They Want. The Atlantic.
     
  18. Webb, A. (2013, January 28). Amy Webb: Why data is the secret to successful dating. Visualised. The Guardian.