
Using AI for criminal justice is innovative, but still has flaws

Mike Mackenzie | Flickr

Criminal justice systems, especially in the United States, are fundamentally about human judges and prosecutors making human decisions to punish human offenders. All of that humanity can cause a myriad of problems. From bureaucratic incompetence to the biases that pervade the courts, the criminal justice system has repeatedly failed to deliver justice to victims.

One way police departments are trying to eliminate racial and gender bias is through the use of artificial intelligence.

Artificial intelligence is commonly used when attempting to “predict” something. Whether it is determining who will default on a loan or which basketball team is most likely to score, AI is increasingly being used in almost every facet of society.

In policing, predictive algorithms are now being used to determine which neighborhoods are most vulnerable to crime and which criminals are most at risk of becoming repeat offenders.

Once the initial unease subsides about something other than human judgment determining the likelihood that an individual will commit a crime, the theory behind predictive-policing algorithms is logically sound. Simply put, these algorithms run a set of criteria through a formula to generate an output, which in this case informs decisions in the litigation system.

Intrinsically, algorithmic processing means taking input data and running it through a series of weighted criteria to determine an output. The inputs are data about a person, including race, gender, where they are from and whether they are a repeat offender; however, it is still unclear how much weight each input carries. The outputs can include risk levels for whether a criminal will become a repeat offender, or recommendations for probation terms.
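
To make that concrete, here is a minimal sketch in Python of what such a scoring step might look like. It is purely illustrative: the feature names, weights and thresholds below are invented, since deployed systems do not disclose theirs.

```python
# Hypothetical illustration only: deployed predictive-policing tools do not
# publish their features, weights or thresholds, so everything here is invented.

def recidivism_risk(prior_offenses: int, age: int, neighborhood_rate: float) -> str:
    """Toy weighted score that maps a few inputs to a risk label."""
    # Made-up weights; in a real system these would be learned from
    # historical data and kept proprietary.
    score = 0.5 * prior_offenses + 0.3 * neighborhood_rate - 0.02 * age
    if score >= 2.0:
        return "high"
    if score >= 1.0:
        return "medium"
    return "low"

print(recidivism_risk(prior_offenses=3, age=24, neighborhood_rate=1.5))  # "medium"
```

Whatever the real formula looks like, the structure is the same: inputs in, weighted score out, and everything hinges on where the weights come from.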

By using AI and existing data, underfunded police departments can save money by deploying officers to the neighborhoods that most need protection, reducing their overall burden. However, there are problems within every component of the systems in use across the country.

The inputs to predictive algorithms are inherently flawed, which damages the entire decision-making process. Simply put, if the data is biased, the output it generates will also be biased. One of the most common criticisms of policing in America is the prejudice that leads African Americans and Latinos to be unfairly profiled and deemed more likely to commit crime.

Systemic racism is seen everywhere, from traffic stops to longer sentences for criminal convictions. Researchers from Stanford’s Computational Journalism Lab and the School of Engineering concluded that black and Latino drivers are ticketed, searched and arrested more often than white drivers.

For example, when pulled over for speeding, black drivers are 20% more likely and Latino drivers 30% more likely than white drivers to be ticketed. Black and Latino drivers are also about twice as likely as white drivers to be searched.

This prejudice is especially prevalent in poor urban communities. Some argue this is due to the antiquated “broken windows” approach to urban policing, which dictates immediately curbing small crimes in an attempt to dissuade people from committing bigger ones. However, the belief systems of individual police officers can be biased, and those biases seep into the data being collected.

This type of corrupted data, or “dirty data,” allows the racial and economic predispositions present in the policing structure to be exacerbated.

A common misconception surrounding artificial intelligence as a whole is that the mere usage of AI is enough to dispel any bias and transform any result into an objective truth. 

Bias is harder to dispel than policymakers and governments realize. We can conclude that if prejudice exists in the data, then prejudice exists in the whole system.
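
A toy calculation shows the mechanism. Assume, with fabricated numbers, two neighborhoods with the same true offense rate, where one is patrolled twice as heavily; the recorded data alone will make the heavily patrolled neighborhood look twice as dangerous.

```python
# Fabricated numbers to illustrate "dirty data in, dirty predictions out".
# Two neighborhoods have the SAME true offense rate, but A is patrolled
# twice as heavily, so twice as many offenses there get recorded.
true_offense_rate = 0.05
patrol_intensity = {"A": 2.0, "B": 1.0}

recorded_rate = {
    hood: true_offense_rate * intensity
    for hood, intensity in patrol_intensity.items()
}
print(recorded_rate)  # {'A': 0.1, 'B': 0.05} -- A now *looks* twice as risky

# A naive model trained on the recorded rates would send even more patrols
# to A, inflating its recorded rate further: a self-reinforcing feedback loop.
```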

While biased data is an issue in predictive policing, another issue is the lack of transparency in the determination of the outputs. 

The factors used to determine sentences and probation hearings are not public knowledge, and the weights those factors carry in the algorithm are likewise unknown to the general populace. As a result, only those inside the litigation system really know how these AI tools are being used.

According to Richard Berk, a professor of criminology and statistics at the University of Pennsylvania and a developer of predictive-policing algorithms in Philadelphia, “All machine-learning algorithms are black boxes, but the human brain is also a black box.”

Berk continues, “If a judge decides they are going to put you away for 20 years, that is a black box.”

Since no one really knows what goes on inside a human brain, the idea behind these AI algorithms is to increase transparency. However, if the complexity of the algorithms is simply hidden behind the façade of the complexity of human thought, then the AI trades one black box for another and could be considered redundant and useless.
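
If transparency really is the goal, one remedy, at least in principle, is to favor models whose internals can be read directly. The sketch below, again using invented weights, shows how a simple linear score can be decomposed into per-feature contributions, something neither a judge’s intuition nor an opaque model offers.

```python
# Invented weights, for illustration only: a linear score can at least be
# decomposed so the subject sees *why* it produced a given risk value.
weights = {"prior_offenses": 0.5, "neighborhood_rate": 0.3, "age": -0.02}
person = {"prior_offenses": 3, "neighborhood_rate": 1.5, "age": 24}

contributions = {f: weights[f] * person[f] for f in weights}
for feature, value in contributions.items():
    print(f"{feature:18s} contributes {value:+.2f}")
print("total risk score:", round(sum(contributions.values()), 2))
```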

Finally, the outputs are flawed because they do not take human judgment into account. Of course, one cannot argue with a computer, but sometimes there are arguments that need to be made.

If a convict has shown genuine remorse for his actions and has not reoffended, will the computer take that into account even if he comes from a “high-risk” environment? Another concern is how much power the software is given.

Will predictive-policing algorithms become the definitive authority on criminal sentencing, or will human interaction still be at the crux of the criminal litigation system? 

While predictive-policing algorithms sound like something ripped from a sci-fi film (they have been; see Minority Report, starring Tom Cruise), they leave far more questions than answers. Artificial intelligence has seeped seamlessly into society, affecting nearly every aspect of it. However, the question of whether AI is necessary must be asked when it is adopted in a polarizing sector like criminal justice.

Technology is all about increasing convenience for the user. However, will convenience become dangerous when algorithms determine policing and prosecution methods? Only time will tell.
