Machine Learning & Artificial Intelligence Are Corrupting the US Justice System

Brandon Wang

Photo by Ali Shah Lakhani on Unsplash

The Emergence of Machine Learning in the United States Justice System

Technology has become an essential part of our everyday lives, so much so that many of us could not function without it. Machine learning and artificial intelligence now touch almost every facet of our lives. There is more data in the world around us than ever before, and these technologies let us turn that raw information into something far more useful. Despite the advantages, these computer algorithms are still experiencing growing pains. Not every calculation is perfect, and in some cases the errors lead to disastrous results. The use of complex computer algorithms in the United States justice system has become a deeply controversial issue in recent years.

Society’s decisions and actions are being influenced by machine algorithms at an alarming rate. Leveraging new technologies, companies and organizations have immediate access to more data than ever before, and they are investing more manpower and resources in algorithms meant to provide insight and answers to problems with far-reaching social consequences. In particular, several computer programs that predict the probability of convicted criminals committing future crimes have been accused of being grossly biased against certain minorities.

How Predictive Algorithms Become Racist

Taking these fears of bias and discrimination into account, then-United States Attorney General Eric Holder warned that computer programs like these might be inadvertently introducing racial bias into courtrooms. No further preventive action was taken, and in 2016 a ProPublica investigation revealed that the algorithm judges rely on to predict convicted criminals' likelihood of reoffending was exacerbating racial bias. These predictive risk assessment scores are unfortunately becoming a routine and fundamental driver of decisions in courtrooms throughout the United States. A change in a score can mean the difference between bond amounts, or between detention and a defendant's freedom. In several states, these scores factor directly into a judge's criminal sentence.

The investigation targeted a criminal profiling algorithm called COMPAS, developed by Northpointe. A disconnect between the predictive scores of eighteen-year-old Brisha Borden and forty-one-year-old Vernon Prater and the reality of their actions two years later shone a light on an inherent, data-driven racial bias. In 2014 both were arrested for similar petty theft charges, yet COMPAS assigned a high risk rating to Borden, who is black, and a much lower score to Prater, who is white. This was striking given that Prater had a far more serious criminal history, whereas Borden's record consisted only of juvenile misdemeanors. Two years later, Borden had been living crime-free while Prater had been convicted of breaking into a warehouse and stealing thousands of dollars' worth of electronics and was sentenced to eight years in prison.

Beyond the specific cases of Brisha Borden and Vernon Prater, ProPublica analyzed more than seven thousand other people assessed by the same algorithm and compared their risk scores with whether they actually reoffended over the following two years. The analysis showed that the COMPAS scoring system was dangerously unreliable, accurately predicting recidivism in only twenty percent of cases. Delving deeper, the investigation found statistically significant evidence of racial bias: the algorithm predicted inappropriately high rates of future crime for black defendants, while white defendants were mislabeled as low risk for recidivism at a much higher rate than their black counterparts. One might try to explain these results by pointing to defendants' prior criminal records or the severity of their crimes, yet when ProPublica controlled for these factors, the racial disparities remained statistically significant.
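To make the kind of disparity ProPublica described concrete, here is a minimal sketch of how error rates can be compared across groups. The data below is entirely made up for illustration; it is not ProPublica's dataset, and the function and variable names are my own, not theirs.

```python
# Illustrative only: toy data, NOT ProPublica's actual dataset.
# Each record: (group, predicted_high_risk, reoffended_within_two_years)
records = [
    ("black", True,  False), ("black", True,  True),  ("black", True,  False),
    ("black", False, True),  ("white", False, True),  ("white", False, False),
    ("white", True,  True),  ("white", False, True),
]

def error_rates(records, group):
    """False positive rate (labeled high risk but did not reoffend) and
    false negative rate (labeled low risk but did reoffend) for one group."""
    rows = [r for r in records if r[0] == group]
    fp  = sum(1 for _, high, reoff in rows if high and not reoff)
    neg = sum(1 for _, _, reoff in rows if not reoff)   # people who did not reoffend
    fn  = sum(1 for _, high, reoff in rows if not high and reoff)
    pos = sum(1 for _, _, reoff in rows if reoff)       # people who did reoffend
    return fp / neg if neg else 0.0, fn / pos if pos else 0.0

for g in ("black", "white"):
    fpr, fnr = error_rates(records, g)
    print(f"{g}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```

In this toy data the black group ends up with the higher false positive rate and the white group with the higher false negative rate, which is the shape of the disparity ProPublica reported; the point of the sketch is only that the two error rates must be measured per group, not overall, for the bias to become visible.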

Northpointe pushed back, defending COMPAS and disputing ProPublica's interpretation of the data. Yet despite accusing ProPublica of improper due diligence, Northpointe will not disclose the rules its proprietary algorithm follows or the factors it weighs when delivering a final risk score for a defendant or convicted criminal. What is known is that the COMPAS model draws on one hundred and thirty-seven questions to arrive at its scores, and race is never directly asked about in any of them.
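How can a questionnaire that never mentions race still produce racially skewed scores? One common explanation is proxy variables: inputs that correlate with race because of historical patterns, such as prior arrests or housing stability. The sketch below is a deliberately simplified, hypothetical scoring function written to illustrate that mechanism; it is not Northpointe's actual model, whose inputs and weights remain undisclosed.

```python
# Hypothetical illustration of proxy bias; this is NOT the COMPAS model,
# whose actual questions and weights are proprietary and undisclosed.

def risk_score(answers):
    """Toy weighted sum over questionnaire-style answers. Race is never an input."""
    return (
        2.0 * answers["prior_arrests"]        # arrest counts reflect how heavily a group is policed
        + 1.5 * answers["unstable_housing"]   # correlates with poverty, which correlates with race
        + 1.0 * answers["unemployed"]
    )

# Two defendants with similar behavior but different exposure to policing:
defendant_a = {"prior_arrests": 4, "unstable_housing": 1, "unemployed": 1}  # heavily policed neighborhood
defendant_b = {"prior_arrests": 1, "unstable_housing": 0, "unemployed": 1}  # lightly policed neighborhood

print(risk_score(defendant_a))  # 10.5 -> likely flagged "high risk"
print(risk_score(defendant_b))  # 3.0  -> likely flagged "low risk"
```

Because arrest counts themselves reflect where and whom police choose to stop, a score built on them can encode racial disparities even though race never appears anywhere in the inputs.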

Machine Learning and Artificial Intelligence in the Future of the Criminal Justice System

Even now, Northpointe's predictive models remain among the most widely used recidivism analysis tools in and out of courtrooms across the country. Despite the evidence of racial bias, computer models are still championed as a highly efficient tool for the criminal justice system, in large part because the United States has a strikingly higher incarceration rate than any other country. An accurate, automated system for assessing defendants could save the United States considerable money and improve the justice system overall. Yet the dangers may be too severe to overlook: any flaw in the system could ruin a life with an overly harsh sentence or endanger the public by letting a dangerous individual roam free.

This technology is still in its early stages and, as such, needs to be implemented carefully. The use cases for these predictive models must be rigorously scrutinized, because if these powerful tools are widely adopted without proper oversight, they may do more harm than good. Still, despite these worries, technological advances have broadly improved our quality of life and will continue to do so.

Works Consulted

Darcel. “Why We Need to Reform New York's Criminal Justice Reforms.” The New York Times, The New York Times, 25 Feb. 2020, www.nytimes.com/2020/02/25/opinion/new-york-bail-reform.html. 

Hammond, Kristian. “5 Unexpected Sources of Bias in Artificial Intelligence.” TechCrunch, TechCrunch, 10 Dec. 2016, techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/. 

Angwin, Julia, and Jeff Larson. “Machine Bias.” ProPublica, 23 May 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Hudson, Laura. “Technology Is Biased Too. How Do We Fix It?” FiveThirtyEight, FiveThirtyEight, 20 July 2017, fivethirtyeight.com/features/technology-is-biased-too-how-do-we-fix-it/.

Rieland, Randy. “Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased?” Smithsonian.com, Smithsonian Institution, 5 Mar. 2018, www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/. 
