SF DA’s Office Using AI to Reduce Implicit Bias in Prosecution

By: Kate Bulycheva

In June, the San Francisco District Attorney’s Office partnered with Stanford’s Computational Policy Lab to use artificial intelligence to reduce implicit bias in prosecution. The bias-mitigation tool redacts references to race from police reports to make charging decisions more transparent and equitable.

In 2016, 41 percent of those arrested in California were Latino, 36 percent were white and 16 percent were African-American. African-Americans and Latinos represented only six percent and 39 percent of the state population, respectively, according to a study conducted by the Public Policy Institute of California. 

The same racial disparities can be found in San Francisco. African-Americans accounted for 41 percent of people arrested between 2008 and 2014, while making up only six percent of the city’s population, according to a recent study by UC Berkeley and the University of Pennsylvania.

“The tool is set to look for racial information, a suspect’s race and name, in addition to data that might also serve as a proxy: physical description, like skin, hair and eye color, neighborhood data, and names of witnesses and victims, to avoid any inference of one’s race,” said Alex Chohlas-Wood, deputy director of the Stanford Computational Policy Lab.

“In addition, police officers’ names will be redacted, as an inference can be drawn based on where they are stationed or what area they patrol. To locate data, [the] bias-mitigation tool uses a combination of computer and statistical techniques, including Natural Language Processing techniques,” said Chohlas-Wood. “After removing the racial information, the tool will instead add a generic token to the description of the incident, so not ‘Alex Chohlas-Wood,’ but ‘Person 1,’ making it easier for intake attorneys to track what actions took place.”
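To make the redaction step concrete, the following is a minimal sketch of what name-to-token substitution can look like in code. It is not the Stanford tool, whose implementation the article does not detail: the function name redact_names, the spaCy dependency and the sample sentence are illustrative assumptions, and a production system would also have to catch race, physical descriptors, neighborhood details and officers’ names, as Chohlas-Wood describes.

```python
# Illustrative sketch only, not the Stanford/SFDA tool. It mimics one step
# described above: detect person names in a report narrative with an
# off-the-shelf NLP model (spaCy, assumed installed along with its small
# English model) and replace each name with a generic token such as "Person 1".
import spacy

nlp = spacy.load("en_core_web_sm")

def redact_names(narrative: str) -> str:
    """Replace each distinct detected person name with a numbered generic token."""
    doc = nlp(narrative)
    tokens: dict[str, str] = {}  # maps each distinct name to its generic token
    for ent in doc.ents:
        if ent.label_ == "PERSON" and ent.text not in tokens:
            tokens[ent.text] = f"Person {len(tokens) + 1}"
    redacted = narrative
    # Replace longer names first so a bare surname never clobbers a full name.
    for name in sorted(tokens, key=len, reverse=True):
        redacted = redacted.replace(name, tokens[name])
    return redacted

print(redact_names("Officer Jane Smith detained Alex Chohlas-Wood near 24th Street."))
# If the model tags both names as PERSON, this prints something like:
# "Officer Person 1 detained Person 2 near 24th Street."
```

Keeping one numbered token per distinct name, rather than a blanket “[REDACTED],” mirrors the design choice Chohlas-Wood describes: intake attorneys can still track which person took which action without seeing anything that signals race.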

After a blind review of the redacted report, prosecutors make a preliminary charging decision. At the end of the traditional charging window, the complete police report is released, including the unredacted narrative, body camera footage and photos, to assist in the final charging decision. If prosecutors add or drop charges after reviewing the full report, they must explain the changes, and the SFDA will use those changes to refine the tool. “The expectation, however, is that [a] change in decision was the result of having more evidence [and not implicit bias],” Chohlas-Wood said.

“Even though there is a whole lot of grey area with actuarial tools, you can audit them, [whereas] you can’t audit a judge’s gut,” said David Ball, a professor of criminal law at Santa Clara University School of Law. “My hope is that people will acknowledge the fact that such actuarial tools are reflecting the structural racism that we have in our society, instead of sometimes blaming [them] for it. Even if we take away the actuarial tools, we’ll still have a society where a middle-aged white male is less likely to be arrested than a poor person or a person of color. Actuarial tools just tell us that; they don’t create that. If you start with the system where endogenous decisions [are] made about policing and sentencing, and where poverty and race are correlated, they will reflect back the racism we have in society. These tools themselves are just holding a mirror to society, and it’s unfair to blame them.”

Notably, some other prosecutors have criticized the use of artificial intelligence in charging, saying that city crime maps cannot be ignored and that racial information should be readily available. Professor Ball agreed that there is valuable information in knowing the location of an incident, saying “it’s more of a question of what’s the trade-off between racial disparity and accuracy.” Chohlas-Wood, however, noted that “the tool won’t replace the decision process itself, and all collected evidence will be available for review upon making final charging decisions.”

The San Francisco District Attorney’s Office fully implemented the bias-mitigation tool in its general felony intake unit in July, though “the use can certainly be expanded to other case types in the future,” said Chohlas-Wood.

Santa Clara County will not be implementing Stanford’s bias-mitigation tool, said Santa Clara County District Attorney Jeff Rosen. While he acknowledged that explicit and implicit biases exist, he said that ultimately “artificial intelligence tools will not be the solution that is being searched for.”

Rosen said he does not want his office using blind tools in the decision-making process; he would rather have his prosecutors review all the data when making charging decisions.

Yolo County, however, is interested in the “blind justice” tool, Chohlas-Wood said. While the tool was initially created for San Francisco County, it can be easily tweaked to be more universal as its use spreads, making it easier for other prosecutors in California and beyond to adopt it.

(Editor's Note: This article was originally published in the December 2019 issue [Volume 50, Issue 2] of The Advocate.)
