Too Much Faith in Our Machines
By Lala Xu
11/5/2017
In 2014, New Jersey Governor Chris Christie signed the Bail Reform and Speedy Trial Act into law. Part of that law, which took effect on January 1st of this year, requires courts to use a risk assessment algorithm called the Public Safety Assessment (PSA) to determine bail for defendants.
The PSA is a tool created by the Laura and John Arnold Foundation based on nine factors that have been found to correlate with future criminal activity or failure to appear in court. The factors are: age at current arrest; whether the current offense is violent; whether there are other pending charges; prior convictions (misdemeanor, felony, violent); prior failure to appear in court (within the past two years, and older than two years); and prior incarceration. Before each defendant arrives in court, all of this information is entered into a system, which combines the nine factors, each weighted with a set number of points, into three scores: risk of failure to appear in court (FTA), New Criminal Activity (NCA), and New Violent Criminal Activity (NVCA). The higher the number, the more risk the defendant is predicted to pose. In New Jersey, the score is also accompanied by a recommended course of action, such as release without bail, release contingent on wearing an ankle monitor, or bail set at a certain amount.
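To make the mechanics concrete, here is a minimal sketch of what a point-based scale of this kind looks like. The factors mirror those listed above, but the point values, the cap, and the focus on a single FTA-style scale are all invented for illustration; the Arnold Foundation's actual weightings are different.

```python
def fta_score(age_at_arrest, pending_charge, prior_conviction,
              prior_fta_recent, prior_fta_old):
    """Toy failure-to-appear (FTA) scale: sum weighted points, cap the total.

    All weights here are hypothetical, chosen only to show the shape of
    a point-based risk scale like the PSA's.
    """
    points = 0
    points += 1 if pending_charge else 0       # other charges pending
    points += 1 if prior_conviction else 0     # any prior conviction
    points += 2 if prior_fta_recent else 0     # failed to appear, past 2 years
    points += 1 if prior_fta_old else 0        # failed to appear, older
    points += 1 if age_at_arrest < 23 else 0   # younger defendants score higher
    return min(points, 6)                      # higher = greater predicted risk

# A 21-year-old with a pending charge, a prior conviction, and a recent
# failure to appear accumulates 1 + 1 + 2 + 1 = 5 points.
print(fta_score(21, True, True, True, False))  # → 5
```

The point of a scheme like this is that the score is a deterministic function of a small set of facts, which is what allows it to be applied uniformly before each hearing.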
In the age of big data and machine learning, it is no surprise that states are adopting more data-driven processes for making decisions in the criminal justice system. At last count, at least 23 states use risk assessment algorithms in criminal justice at the state or local level, and the algorithms appear at various points in the process, from assigning bail to deciding whether to release an inmate on parole. They are meant to provide a more objective assessment of whether a defendant poses a risk of further criminal behavior. The law recognizes, however, that these tools are not foolproof: judges retain the ability to override an algorithm’s recommendation and impose a harsher or more lenient outcome. Even with these caveats, the use of risk assessment algorithms has drawn a wide range of criticisms.
The most serious charge is that the algorithms are discriminatory. In theory, an algorithm should make the decision-making process more equitable by eliminating human bias and error. However, an algorithm is only as good as the information fed into it, so, unsurprisingly, these tools replicate much of the discrimination we currently see in the criminal justice process. In May 2016, ProPublica published a study examining bias in the COMPAS risk assessment algorithm. COMPAS is similar to the PSA, although it uses a survey of 137 questions, rather than nine factors, to generate a score between 1 and 10, with 10 being the highest risk. The study found that among defendants with the same criminal history and recidivism record, black defendants were 77% more likely to be assigned a high risk score for committing a violent crime and 44% more likely to be assigned a high risk score for committing any crime.
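It is worth pausing on what "77% more likely" means. ProPublica's analysis used logistic regression to control for criminal history and other factors; the sketch below shows the simpler underlying idea of a relative rate between two matched groups. The cohort counts are entirely made up and chosen only so the arithmetic lands on 77%.

```python
def relative_rate(high_a, total_a, high_b, total_b):
    """How much more often group A is labelled high risk than group B.

    Returns the excess relative rate: 0.77 means "77% more likely".
    Inputs are hypothetical counts, not figures from the ProPublica study.
    """
    rate_a = high_a / total_a
    rate_b = high_b / total_b
    return rate_a / rate_b - 1.0

# Made-up matched cohorts: 354 of 1,000 defendants in one group labelled
# high risk, versus 200 of 1,000 in the other.
print(f"{relative_rate(354, 1000, 200, 1000):.0%}")  # → 77%
```

Framing the finding this way also shows why matching on criminal history matters: without comparable groups, a raw rate ratio would conflate the algorithm's bias with differences in the underlying records.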
Another major criticism is aimed at the lack of transparency of these algorithms. A paper published in April by two law professors, Robert Brauneis and Ellen Goodman, documented the “black box” problem these tools present. Of the 42 Freedom of Information Act requests the authors filed, spanning six algorithms including the PSA, only one county disclosed the predictive algorithm it uses and how it was developed. Although the PSA’s formula is public, its development process is not, and that lack of transparency underlies some of the legal challenges the tool now faces. It also exacerbates concerns about racial and gender discrimination, because a “black box” makes it difficult to isolate where any bias is coming from.
Beyond these two main criticisms, others have been leveled at aspects of specific algorithms, such as the two addressed in lawsuits now being brought against the PSA in New Jersey. The first, filed by former US Solicitor General Paul Clement, challenges the legality of the algorithm’s recommendations: Clement argues that bail is a right guaranteed by the Eighth Amendment, so detaining a defendant without the option of bail can be considered unconstitutional. The other, filed by the mother of a victim killed by a defendant who had been released before trial on the basis of his risk score, criticizes the PSA for failing to consider gun ownership as a factor.
Risk assessment algorithms have great potential as a useful and innovative tool in criminal justice; however, they still need much work before they can be considered a reliable decision-making mechanism.