The Troubling Judicial Trend to Use Algorithms to Predict Future Dangerousness

Nobody can predict the future. It is nevertheless possible to predict outcomes for large groups of people with reasonable accuracy. As a group, nonsmokers live longer than smokers, although at the individual level, some smokers live long lives. As a group, drivers who collect speeding tickets are more likely to crash their cars than sedate drivers, but many heavy-footed drivers are never in an accident.

Those examples demonstrate that predictions of group outcomes tell us very little about individual outcomes. While smokers (as a group) die ten years earlier than nonsmokers (as a group), that statistic cannot predict when any particular smoker will die. The statistical analysis of groups can help statisticians identify risk factors — smoking elevates your risk of premature death — but risk factors cannot predict individual outcomes. Having an elevated risk of premature death does not mean that you will die prematurely.
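
To make the group-versus-individual point concrete, here is a minimal simulation sketch in Python. The outcome rates and group sizes are invented for illustration and are not drawn from any real study:

```python
import random

random.seed(1)

# Hypothetical illustration: members of a "high-risk" group have a 30% chance
# of a bad outcome, members of a "low-risk" group have a 10% chance.
HIGH_RISK_RATE = 0.30
LOW_RISK_RATE = 0.10
GROUP_SIZE = 10_000

high_risk = [random.random() < HIGH_RISK_RATE for _ in range(GROUP_SIZE)]
low_risk = [random.random() < LOW_RISK_RATE for _ in range(GROUP_SIZE)]

# Group-level outcomes are stable and easy to "predict"...
print(f"High-risk group outcome rate: {sum(high_risk) / GROUP_SIZE:.1%}")
print(f"Low-risk group outcome rate:  {sum(low_risk) / GROUP_SIZE:.1%}")

# ...but most members of the high-risk group never experience the bad outcome,
# so group membership alone says little about any one individual.
print(f"High-risk members with no bad outcome: "
      f"{(GROUP_SIZE - sum(high_risk)) / GROUP_SIZE:.1%}")
```

Run the simulation repeatedly and the group rates barely move, yet knowing that a person belongs to the "high-risk" group still leaves no bad outcome as that person's most likely result.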

Courts are increasingly relying on algorithms that identify risk factors based on an analysis of groups when they make individualized decisions about a defendant’s freedom. Algorithms might tell a judge that people who belong to certain groups have an elevated risk of committing future crimes, but they don’t tell judges whether a specific defendant will commit a future crime. Treating algorithmic analysis as if it can predict the future is grossly unfair to individual defendants. Unfortunately, legislatures are increasingly requiring judges to rely on algorithms rather than individualized assessments when they set bail or impose sentences.

Government and Algorithms

Governments use algorithms in a variety of ways. Data is fed into a computer that is programmed with decision-making rules, and the computer makes decisions that government officials implement. In New York City, for example, algorithms are used to make police department staffing decisions, to match students with schools, to identify Medicaid fraud, to assess teacher performance, and to determine work assignments of building inspectors.
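
As a rough sketch of what "decision-making rules" can look like, consider the hypothetical inspection-priority rule below. The data fields, thresholds, and rules are invented for illustration and are not drawn from any real city system:

```python
# A deliberately simplified, hypothetical example of algorithmic decision rules.

def prioritize_inspection(building: dict) -> bool:
    """Return True if the building should be inspected first."""
    # Rule 1: many recent complaints push a building up the queue.
    if building.get("complaints_last_year", 0) >= 5:
        return True
    # Rule 2: older buildings that have not been inspected recently get priority.
    if building.get("age_years", 0) > 50 and building.get("years_since_inspection", 0) > 3:
        return True
    return False

buildings = [
    {"id": "A", "complaints_last_year": 7, "age_years": 20, "years_since_inspection": 1},
    {"id": "B", "complaints_last_year": 1, "age_years": 80, "years_since_inspection": 5},
    {"id": "C", "complaints_last_year": 0, "age_years": 10, "years_since_inspection": 1},
]

# Officials act on the computer's output.
print([b["id"] for b in buildings if prioritize_inspection(b)])  # ['A', 'B']
```

If rules like these are kept secret, a person who is flagged has no way to learn which rule produced the decision, which is the transparency problem discussed below.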

Algorithms can be benign and useful, but they can also produce decisions that appear to be arbitrary. People who are affected by algorithms often complain that the algorithms are not transparent, making it impossible to determine whether decisions are based on biases that are built into the decision-making rules. Transparency should be a given when the government makes a decision: the government is created by and serves the governed, so the governed should have a right to know how they are being served by the government they create. But as New York City discovered when a council member proposed to make the city’s algorithms public, tech companies that develop algorithms often argue that their algorithms are proprietary information and that disclosing them would place the developer at a competitive disadvantage. Understanding the decision-making rules used by an algorithm, in other words, might be bad for business.

Frustrated council members wondered how they could exercise oversight of city decision-making when they are denied basic knowledge of how decisions are made. Decisions made by city employees without the benefit of algorithms might be based on bias, but those biases can be rooted out by questioning the employees. Nobody can question hidden source code to find out whether bias infects a computer program’s decisions.

Algorithms and Bail Decisions

Risk-assessment algorithms are replacing individualized decision-making when judges are called upon to set bail or impose sentences. Like most judicial decisions, granting or denying bail requires judges to balance competing interests. On the one hand, granting bail allows individuals who are presumed innocent of any crime to remain free until their guilt has been determined. Given that many criminal charges are dismissed or resolved without a sentence of incarceration, while many others end in “not guilty” verdicts, bail ensures that innocent defendants are not punished by languishing in jail while they await trial, and that guilty defendants are not incarcerated before trial when jail is not an appropriate punishment for their crime.

On the other hand, some defendants might skip town if they are granted bail. Others might commit new crimes while on bail. Denying pretrial release ensures that defendants will appear in court and protects society from new crimes, but it achieves those goals at a heavy cost if bail is denied routinely. Our constitutional values, including the presumption of innocence and the Eighth Amendment’s prohibition of excessive bail, demand that defendants usually remain at liberty unless and until they are convicted of a charged crime.

Judges have traditionally balanced those competing interests by considering the seriousness of the criminal charge, the strength of the evidence, the defendant’s ties to the community, and whether the defendant has ever missed a court appearance in the past. Individualized decisions are necessarily inconsistent, since different judges weigh those factors differently. Judges might also be influenced by racial or gender bias when they assess risk.

The judicial system also tends to be influenced by economic bias: even a routine $100 bail for a minor crime is easy for people with assets to post but out of reach for the impoverished. Studies demonstrate that jails are crowded with pretrial detainees who pose little risk of absconding or committing new crimes but are denied release because they lack the financial resources to post bail.

Well-meaning individuals view risk algorithms as an antidote to bias. Alaska is the most recent state to rely on algorithms in the hope of reducing jail overcrowding. When the algorithm identifies arrestees as “low risk,” judges are encouraged to release them with conditions of supervision. The algorithm gives political cover to judges who might otherwise fear criticism that they are releasing too many accused offenders into the community. Similar bail reform has substantially reduced pretrial detention in New Jersey, although it isn’t clear whether the same result could have been accomplished without basing bail decisions on algorithms.

Reducing the number of pretrial detainees saves taxpayers money and encourages respect for the presumption of innocence, but individuals who are identified as “high risk” have little opportunity to challenge the assumptions made by the algorithm, since those assumptions are hidden from the public. Some bail algorithms consider zip codes or levels of education in making risk assessments, factors that are closely correlated to race. Instead of eliminating bias, algorithms may therefore build bias into the decision to grant pretrial release.
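
A small synthetic example shows how a proxy variable can transmit bias even when race is never an input. Every number below is invented; the only point is that a feature correlated with race (here, a zip-code weight shaped by residential segregation) can produce racially skewed “high risk” labels on its own:

```python
import random

random.seed(2)

def risk_score(zip_weight: float, prior_arrests: int) -> float:
    # The score never uses race, only a zip-code weight and prior arrests.
    return 2.0 * zip_weight + 0.5 * prior_arrests

population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])  # stand-in for a protected class
    # Synthetic segregation: group B is far more likely to live in zip codes
    # that the algorithm has weighted as "high risk".
    in_high_risk_zip = random.random() < (0.7 if group == "B" else 0.2)
    zip_weight = 1.0 if in_high_risk_zip else 0.0
    prior_arrests = random.randint(0, 2)  # identical distribution for both groups
    population.append((group, risk_score(zip_weight, prior_arrests)))

for g in ("A", "B"):
    scores = [score for grp, score in population if grp == g]
    flagged = sum(score >= 2.0 for score in scores) / len(scores)
    print(f"Group {g}: {flagged:.0%} labeled high risk")
```

Because prior arrests are distributed identically in both synthetic groups, the entire disparity in “high risk” labels comes from the zip-code feature.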

Algorithms and Sentencing

Algorithms may be even more problematic when judges rely on them to determine the length of a criminal sentence. As in most states, judges in Wisconsin consider presentence reports, prepared by state employees, when they impose sentences. The reports describe the crime, provide information about the offender’s background and criminal history, include a victim impact statement, and generally discuss the nature of the punishment that might be appropriate for the crime.

In recent years, Wisconsin’s presentence reports have included an assessment of the risk that an offender will commit future crimes. Judges tend to impose longer sentences when that risk is high. Yet the risk assessment is based on an algorithm. As we have seen, risk algorithms are problematic for two reasons. First, algorithms may be based on hidden biases, including racial bias. Second, algorithms can only “predict” group behavior, which says nothing about whether a particular offender is likely to reoffend.

The first problem was considered by the Wisconsin Supreme Court in a defendant’s challenge to the sentencing algorithm. Eric Loomis was sentenced to 7 years in prison for shooting from a car without causing an injury. The judge based the sentence on the algorithm’s conclusion that Loomis presented a “high risk” of reoffending. Loomis’ attorney wanted to test the scientific validity of the algorithm’s conclusion, but the company that supplies the algorithm refused to release its proprietary source code. The Wisconsin Supreme Court concluded that Loomis had no right to see it.

A ProPublica investigation found that the particular sentencing algorithm used in Wisconsin and several other states labels black defendants as “future criminals” at almost twice the rate of white defendants. Whether that outcome reflects racial bias inherent in the algorithm will be impossible to determine if courts continue to shield the source code from analysis.

The second problem is inherent in all sentencing algorithms, but it is one that judges do not widely appreciate. If an algorithm concludes that 70% of offenders who share a defendant’s characteristics are likely to reoffend, that does not mean that the defendant has a 70% likelihood of reoffending. Seventy percent of the group’s members may go on to reoffend, but 30% will not, and the algorithm tells a judge nothing about whether a particular defendant is part of the 70% or part of the 30%.
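
The arithmetic behind that point is worth spelling out, using the hypothetical 70% group rate and an invented group of 1,000 offenders:

```python
group_size = 1_000           # hypothetical offenders who share the defendant's traits
group_reoffense_rate = 0.70  # what the algorithm reports about the group

will_reoffend = round(group_size * group_reoffense_rate)  # 700
will_not_reoffend = group_size - will_reoffend            # 300

# If every member of the group is sentenced as a future reoffender, 300 of the
# 1,000 are treated as people who will commit a new crime even though they
# never will, and the score cannot say which 300.
print(f"{will_not_reoffend} of {group_size} 'high risk' offenders never reoffend")
```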

Since risk algorithms have no predictive value at the level of individuals, it is questionable whether algorithms should play any role at all in the sentencing of defendants. The judiciary’s reliance on unreviewable algorithms is nevertheless a growing trend that deserves the attention of citizens and elected representatives who are concerned about the fairness of judicial decision-making.  
