The Terrifying, Buggy Future of An AI-Assisted Justice System Is Here
The idea of a criminal justice system that uses artificial intelligence (AI) to reduce crime is not new, but it has traditionally been the stuff of science fiction, in books and films like The Minority Report. Over the past several years, however, the creep of artificial intelligence and the spread of predictive algorithms have brought both the technology and the ethics of using it to the forefront of the criminal justice system.
A “glitch” denies prisoners in Arizona their freedom.
In February, a whistleblower reported that a glitch in a software system was keeping hundreds of prisoners behind bars longer than they should have been held. The program, called ACIS, was designed to help manage the state’s prison population. The Arizona Department of Corrections contracted the IT company Business & Decision North America to build and maintain the software, and had paid the company more than $24 million as of 2019.
The source of the glitch, in a truly inhumane twist of irony, is a law that was designed to help prisoners get out of prison earlier. In June 2019, Arizona lawmakers passed a law that established an earned-release system for people convicted of certain drug-related crimes. The system allows eligible prisoners to earn release credits by following certain rules and participating in designated programs, cutting the time they serve to as little as 70% of their sentence.
Problems began because ACIS could not account for the new law. Prison staff had to enter the relevant data manually, and if they entered it incorrectly, it was sometimes impossible to correct. Arizona DOC spokesperson Bill Lamoreaux said that at least 733 prisoners are eligible for the sentence reduction program but not yet enrolled.
It’s not just prison sentences that are affected by ACIS bugs, either. A report stated that glitches in ACIS software could affect “inmate health care, head counts, inmate property, commissary and financial accounts, religious affiliation, security classification, and gang affiliations.”
Facial Recognition on the Rise
Another area in which technology is playing an increasing role in law enforcement is facial recognition software. Usage of the facial-recognition app Clearview spiked in January: according to the company’s CEO, searches rose 26%, starting Jan. 7, one day after the Capitol riots in Washington, D.C. Police departments such as Miami’s are using Clearview to match photos circulated by the FBI against local people, then sending any matches they find back to the agency.
Clearview remains a target of activist groups around the country. Some AI-based facial recognition software in the criminal justice system uses government images such as driver’s license and passport photos. However, Clearview’s system pulls images from private sources such as social media accounts.
Some locales are fighting back.
The company quickly faced legal resistance. Back in 2008, Illinois lawmakers enacted the Biometric Information Privacy Act (BIPA), which protects residents of the state from unlawful collection and storage of biometric data and allows them to sue violators. That’s exactly what one Illinois resident did in 2020, filing a lawsuit against the company. As a result, Clearview ended all private and public-sector contracts in the state that same year.
Earlier this year, the Hamburg data protection authority, a German regulator that enforces EU data protection law, ruled the company’s biometric database illegal under the GDPR. Challenges came from the private sector, too. Attorneys from LinkedIn, YouTube, Twitter, and Facebook all sent cease-and-desist letters to the company in early 2020.
After a New York Times exposé in 2020, the company said it would stop doing business with private companies. However, it still works closely with law enforcement: as of February 2021, Clearview had contracts with more than 2,400 law enforcement agencies in the United States.
Algorithm-based pre-trial risk assessment tools are already having a significant impact on bail and detention decisions.
Perhaps the most frightening use of AI in the criminal justice system, however, is in pre-trial risk assessment tools. These “tools” are software systems that estimate whether defendants are a flight risk or likely to commit another crime before their trials. Courts across the country use them to help determine bail. The US Courts website says the following about the tool it uses:
“The federal Pretrial Risk Assessment (PTRA) is a scientifically based instrument developed by the Administrative Office of the U.S. Courts (AO) and used by United States probation and pretrial services officers to assist in determining a defendant’s risk of failure to appear, new criminal arrests, or technical violations that may lead to revocation while in the pretrial services system.”
How pre-trial risk assessment tools are supposed to work and how they actually work are two different stories.
Assessment tools make predictions based on historical data. They look for patterns in that data that correlate, either positively or negatively, with “success.” In this case, “success” means that someone shows up for their court appearance and is not arrested during the pretrial phase. Using those patterns, the system assigns each individual a score based on what it determines their risk to be, then sorts people into groups ranging from low risk to high risk.
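To make that mechanics concrete, here is a minimal, hypothetical sketch of the scoring-and-bucketing approach described above. The feature names, weights, and cut-off points are invented purely for illustration; they are not the actual model behind the PTRA or any other specific tool.

```python
# A hypothetical sketch of how a generic pre-trial risk tool scores and
# buckets defendants. The feature names, weights, and cut-offs below are
# invented for illustration; they are NOT the actual PTRA model.

def risk_score(defendant: dict) -> float:
    """Weighted sum of historical factors that correlate with "success"."""
    weights = {
        "prior_arrests": 0.6,              # positively correlated with failure
        "prior_failures_to_appear": 1.2,   # positively correlated with failure
        "years_at_current_address": -0.3,  # "stability" factors lower the score
        "employed": -0.8,
    }
    return sum(w * defendant.get(feature, 0) for feature, w in weights.items())

def risk_bucket(score: float) -> str:
    """Map a raw score onto the low/medium/high groups described above."""
    if score < 1.0:
        return "low"
    if score < 3.0:
        return "medium"
    return "high"

defendant = {"prior_arrests": 2, "prior_failures_to_appear": 1,
             "years_at_current_address": 4, "employed": 1}
print(risk_bucket(risk_score(defendant)))  # prints "low" with these invented numbers
```

The point of the sketch is that the output is only as good as the historical data and the chosen weights: the tool never evaluates the individual, only how people with similar records behaved in the past.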
But studies show that these systems tend to over-predict. In Kentucky, the software flagged some people as likely to engage in “new violent criminal activity” (NVCA). However, a study showed that only 8.6% to 11% of people with NVCA flags were arrested for a violent crime within six months of being released. In other words, roughly nine out of ten people the system predicted would be arrested for a violent crime were not.
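As a quick sanity check on that arithmetic, the snippet below applies the study’s reported re-arrest rates to a flagged population of 1,000 people; that population size is an arbitrary number chosen only for illustration.

```python
# Sanity check on the Kentucky NVCA figures quoted above.
# The re-arrest rates (8.6% and 11%) come from the study cited in the text;
# the flagged-population size of 1,000 is an arbitrary illustrative number.
flagged = 1_000
for rearrest_rate in (0.086, 0.11):
    not_rearrested = flagged * (1 - rearrest_rate)
    print(f"At a {rearrest_rate:.1%} re-arrest rate, {not_rearrested:.0f} of "
          f"{flagged} flagged people were not re-arrested for a violent crime "
          f"({1 - rearrest_rate:.0%}).")
# Roughly 890 to 914 of every 1,000 flagged people, i.e. about 90%.
```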
As of 2020, fewer than 10% of jurisdictions in the US were using pre-trial risk assessment tools. However, that number is on the rise.
Adopting AI Into The Justice System Before We Understand It
There are two main issues with how the United States is adopting AI into the justice system. The first is that law enforcement agencies are doing it before we truly understand how it works or even how it should work. It’s part of the same worrying pattern that has led to police departments becoming increasingly militarized: adopting crime-fighting technology first and asking questions later. Arizona spent more than $24 million on a system that fails at its most basic task. Law enforcement agencies in Illinois undoubtedly used Clearview in clear violation of state law for years. In both cases, the human cost has been tremendous.
Which leads to the second issue. The current influx of AI and other technology in the criminal justice system is an attempt to create a computational solution to very human problems. People are not software. Nor are they points of data. They are complex individuals living in complex environments, making complex decisions. In an effort to create more efficient systems, justice officials are removing humanity from the system itself. If the criminal justice system in this country is, as it claims to be, focused on rehabilitating humans, it must first treat them as such.