nettime's avid reader on Tue, 24 May 2016 08:53:40 +0200 (CEST)



<nettime> US: Software to predict future criminals is biased against blacks.


https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

by Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner,
ProPublica. May 23, 2016

<....>

In 2014, then U.S. Attorney General Eric Holder warned that the risk
scores might be injecting bias into the courts. He called for the U.S.
Sentencing Commission to study their use. “Although these measures
were crafted with the best of intentions, I am concerned that they
inadvertently undermine our efforts to ensure individualized and equal
justice,” he said, adding, “they may exacerbate unwarranted and
unjust disparities that are already far too common in our criminal
justice system and in our society.”

The sentencing commission did not, however, launch a study of risk
scores. So ProPublica did, as part of a larger examination of the
powerful, largely hidden effect of algorithms in American life.

We obtained the risk scores assigned to more than 7,000 people
arrested in Broward County, Florida, in 2013 and 2014 and checked to
see how many were charged with new crimes over the next two years, the
same benchmark used by the creators of the algorithm.

The score proved remarkably unreliable in forecasting violent crime:
Only 20 percent of the people predicted to commit violent crimes
actually went on to do so.

When a full range of crimes was taken into account — including
misdemeanors such as driving with an expired license — the algorithm
was somewhat more accurate than a coin flip. Of those deemed likely to
re-offend, 61 percent were arrested for a subsequent crime within two
years.
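
A minimal sketch of this kind of check, not ProPublica's published
code: it assumes a table of risk scores joined to two-year follow-up
outcomes, and the column names (score_text, two_year_recid and their
violent-crime counterparts) are hypothetical stand-ins.

import pandas as pd

def share_of_flagged_who_reoffended(df, score_col="score_text",
                                    outcome_col="two_year_recid"):
    """Of defendants scored medium or high risk, what fraction were
    charged with a new crime within the two-year follow-up window?"""
    flagged = df[df[score_col].isin(["Medium", "High"])]
    return flagged[outcome_col].mean()

# Hypothetical usage, given a CSV of scores joined to outcomes:
# df = pd.read_csv("broward_scores.csv")
# share_of_flagged_who_reoffended(df)   # article reports 61% for any crime
# share_of_flagged_who_reoffended(df, "v_score_text",
#                                 "two_year_violent_recid")  # 20% for violent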

We also turned up significant racial disparities, just as Holder
feared. In forecasting who would re-offend, the algorithm made
mistakes with black and white defendants at roughly the same rate but
in very different ways.

-- The formula was particularly likely to falsely flag black
defendants as future criminals, wrongly labeling them this way at
almost twice the rate of white defendants.

-- White defendants were mislabeled as low risk more often than black
defendants. (A sketch of how both error rates can be computed follows
this list.)
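
A minimal sketch of that comparison, assuming the same hypothetical
columns as above plus a race column: for each group it computes the
false positive rate (people who did not re-offend but were flagged
medium/high) and the false negative rate (people who did re-offend but
were labeled low risk).

import pandas as pd

def error_rates_by_race(df, score_col="score_text",
                        outcome_col="two_year_recid"):
    """Per racial group: how often non-recidivists were flagged as
    medium/high risk (false positives) and how often recidivists were
    labeled low risk (false negatives)."""
    work = df.assign(
        flagged=df[score_col].isin(["Medium", "High"]),
        recid=df[outcome_col] == 1,
    )
    rows = {}
    for race, grp in work.groupby("race"):
        rows[race] = {
            "false_positive_rate": grp.loc[~grp["recid"], "flagged"].mean(),
            "false_negative_rate": (~grp.loc[grp["recid"], "flagged"]).mean(),
        }
    return pd.DataFrame(rows).T

# Hypothetical usage:
# df = pd.read_csv("broward_scores.csv")
# print(error_rates_by_race(df))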

Could this disparity be explained by defendants’ prior crimes or
the type of crimes they were arrested for? No. We ran a statistical
test that isolated the effect of race from criminal history and
recidivism, as well as from defendants’ age and gender. Black
defendants were still 77 percent more likely to be pegged as at higher
risk of committing a future violent crime and 45 percent more likely
to be predicted to commit a future crime of any kind. (Read our
analysis.)
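
The excerpt does not name the test; one standard way to isolate race
from criminal history, recidivism, age and gender is a logistic
regression, sketched below. The variable names and category labels are
assumptions for illustration, not necessarily ProPublica's actual
fields.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def race_effect_on_score(df):
    """Regress a medium/high score flag on race while controlling for
    sex, age, prior record and actual recidivism; return odds ratios."""
    data = df.assign(
        high_risk=df["score_text"].isin(["Medium", "High"]).astype(int)
    )
    model = smf.logit(
        "high_risk ~ C(race, Treatment('Caucasian')) + C(sex) + age"
        " + priors_count + two_year_recid",
        data=data,
    ).fit()
    # exp(coefficient) is the odds ratio, holding the other factors fixed
    return np.exp(model.params)

# Hypothetical usage:
# df = pd.read_csv("broward_scores.csv")
# print(race_effect_on_score(df))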

The algorithm used to create the Florida risk scores is a product of a
for-profit company, Northpointe. The company disputes our analysis.

In a letter, it criticized ProPublica’s methodology and defended the
accuracy of its test: “Northpointe does not agree that the results
of your analysis, or the claims being made based upon that analysis,
are correct or that they accurately reflect the outcomes from the
application of the model.”

Northpointe’s software is among the most widely used assessment
tools in the country. The company does not publicly disclose the
calculations used to arrive at defendants’ risk scores, so it is
not possible for either defendants or the public to see what might be
driving the disparity. (On Sunday, Northpointe gave ProPublica the
basics of its future-crime formula — which includes factors such as
education levels, and whether a defendant has a job. It did not share
the specific calculations, which it said are proprietary.)

Northpointe’s core product is a set of scores derived from 137
questions that are either answered by defendants or pulled from
criminal records. Race is not one of the questions. The survey asks
defendants such things as: “Was one of your parents ever sent to
jail or prison?” “How many of your friends/acquaintances are
taking drugs illegally?” and “How often did you get in fights
while at school?” The questionnaire also asks people to agree or
disagree with statements such as “A hungry person has a right to
steal” and “If people make me angry or lose my temper, I can be
dangerous.”


<....>

Northpointe was founded in 1989 by Tim Brennan, then a professor of
statistics at the University of Colorado, and Dave Wells, who was
running a corrections program in Traverse City, Michigan.

Wells had built a prisoner classification system for his jail. “It
was a beautiful piece of work,” Brennan said in an interview
conducted before ProPublica had completed its analysis. Brennan
and Wells shared a love for what Brennan called “quantitative
taxonomy” — the measurement of personality traits such as
intelligence, extroversion and introversion. The two decided to build
a risk assessment score for the corrections industry.

Brennan wanted to improve on a leading risk assessment score, the LSI,
or Level of Service Inventory, which had been developed in Canada.
“I found a fair amount of weakness in the LSI,” Brennan said. He
wanted a tool that addressed the major theories about the causes of
crime.

Brennan and Wells named their product the Correctional Offender
Management Profiling for Alternative Sanctions, or COMPAS. It assesses
not just risk but also nearly two dozen so-called “criminogenic
needs” that relate to the major theories of criminality, including
“criminal personality,” “social isolation,” “substance
abuse” and “residence/stability.” Defendants are ranked low,
medium or high risk in each category.


<....>

In 2011, Brennan and Wells sold Northpointe to Toronto-based
conglomerate Constellation Software for an undisclosed sum.

<...>

#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject: