NYPD Built Bias Safeguards Into Pattern-Spotting AI System

Law enforcement’s use of artificial-intelligence-powered facial recognition has drawn criticism from civil libertarians who say these systems can perpetuate bias. So when a New York Police Department analytics team set out to develop an AI system to spot crime patterns, it took steps to make it fair and transparent.

The NYPD has been using the pattern-recognition system, called Patternizr, since December 2016. Its use became widely known this year, when two of the officials in charge of the project, Evan Levine and Alex Chohlas-Wood, wrote a paper detailing their work and the efforts they took to make the system objective. Mr. Levine is the assistant commissioner of data analytics in the NYPD’s Office of Crime Control Strategies. Mr. Chohlas-Wood, a former director of analytics at the NYPD, is now deputy director of the Stanford Computational Policy Laboratory, which looks at how technology can be used in criminal justice, education and other areas.

Patternizr uses machine learning to analyze the NYPD’s database of burglary, robbery and grand-larceny complaints to find crimes likely to have been committed by the same person or group. It compares new complaints against 10 years of burglary and robbery records and three years of grand-larceny records, which cover thefts of high-value items.

The system looks for similarities, including the date and time the crimes occurred and how they were committed, combines those similarities into a single score, and uses the scores to rank potential matches.
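
The developers describe the model only at that level of detail, but the scoring step can be sketched in a few lines of Python. The complaint fields, the weighted-sum combiner and the time-decay constant below are assumptions made for illustration, not details of the NYPD’s production system.

```python
from dataclasses import dataclass
from datetime import datetime
from math import exp

# Hypothetical, simplified complaint record; the fields are illustrative,
# not the NYPD's actual schema.
@dataclass
class Complaint:
    complaint_id: str
    crime_type: str          # "burglary", "robbery" or "grand_larceny"
    occurred_at: datetime
    method: str              # coded description of how the crime was committed

def similarity_features(new: Complaint, old: Complaint) -> dict:
    """Turn a pair of complaints into per-attribute similarity values in [0, 1]."""
    hours_apart = abs((new.occurred_at - old.occurred_at).total_seconds()) / 3600.0
    return {
        "same_crime_type": 1.0 if new.crime_type == old.crime_type else 0.0,
        "same_method": 1.0 if new.method == old.method else 0.0,
        # Closer in time scores nearer 1.0; the one-month decay is arbitrary.
        "time_proximity": exp(-hours_apart / (24 * 30)),
    }

def pattern_score(features: dict, weights: dict) -> float:
    """Combine the per-attribute similarities into a single score."""
    return sum(weights[name] * value for name, value in features.items())

def rank_candidates(new: Complaint, history: list, weights: dict, top_k: int = 10):
    """Score a new complaint against historical ones and return the best matches."""
    scored = [(pattern_score(similarity_features(new, old), weights), old)
              for old in history]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)[:top_k]
```

In practice the combining function would be learned from complaints that analysts had already grouped into patterns, rather than set by hand.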

Every week, the NYPD runs about 600 complaints through the system, which returns some 100 possible patterns, according to statistics the department released this year. Finding patterns helps the police tie together evidence from different crimes, making it easier to identify and arrest lawbreakers.

The use of AI in law enforcement has been controversial. San Francisco and nearby Oakland, for instance, have banned police use of facial recognition.

One problem is that artificial intelligence and other computer systems can be biased. Facial-recognition systems are trained on millions of faces, but if the images fed into the system don’t have much diversity, the system will have trouble picking out faces with unfamiliar skin colors. Opponents of law enforcement’s use of the technology say that a poorly trained facial-recognition system could lead police investigators to disproportionately target certain people simply because of the way they look.

Given the attention to potential bias in policing applications, Patternizr’s developers set out to incorporate as many safeguards as possible. “Fairness and transparency is something we started thinking about from day one,” Mr. Levine said.

Many organizations, when building AI systems, are so focused on the technology’s potential benefit that they don’t think about possible bias, said Gartner Inc. analyst Darin Stewart, who has worked with companies and organizations that use machine-learning systems similar to Patternizr. Considering bias, he said, “that’s half the battle.”

Among the measures the NYPD team took to weed out bias from Patternizr was to exclude information about race and gender. When it comes to details about reported crimes, the team included only nonsensitive attributes, such as the number of people involved, their height, their weight, and the type of force used. Because the system is trying to identify patterns, not individuals, suspect description is a relatively unimportant factor, the developers said.
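
A simple way to enforce that kind of exclusion is to strip or reject sensitive fields before a record ever reaches the model. The sketch below is a hypothetical guard in Python; the field names are invented for illustration and are not the NYPD’s schema.

```python
# Illustrative guard only; field names are hypothetical.
SENSITIVE_FIELDS = {"race", "gender"}
ALLOWED_FIELDS = {"num_suspects", "height_cm", "weight_kg", "force_type"}

def to_model_features(raw_record: dict) -> dict:
    """Keep only nonsensitive attributes, and fail loudly if a sensitive one appears."""
    leaked = SENSITIVE_FIELDS & raw_record.keys()
    if leaked:
        raise ValueError(f"Sensitive fields must never reach the model: {sorted(leaked)}")
    return {k: v for k, v in raw_record.items() if k in ALLOWED_FIELDS}

# Example: a record carrying only nonsensitive attributes passes through unchanged.
record = {"num_suspects": 2, "height_cm": 180, "weight_kg": 75, "force_type": "none"}
features = to_model_features(record)
```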

“In terms of race, it’s essentially operating blindly,” Mr. Chohlas-Wood said.

The team was also careful when it came to location, which has the potential to be a proxy for race. The developers hid specific locations from the model; instead, the system uses only the distance between crimes, an important feature in linking two incidents as part of a pattern. For instance, a burglar is likely to hit a number of victims in the same area.
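
In code, that design choice amounts to never exposing coordinates to the model, only a pairwise distance computed from them. The sketch below uses the haversine formula; the field names and the choice of formula are assumptions for illustration.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def location_feature(crime_a: dict, crime_b: dict) -> dict:
    """Expose only the distance between two incidents, never their raw coordinates."""
    return {
        "distance_km": haversine_km(crime_a["lat"], crime_a["lon"],
                                    crime_b["lat"], crime_b["lon"])
    }
```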

The developers also decided that the system wouldn’t be the main decision maker. Teams that include civilian analysts and uniformed officers review each of the system’s findings, deciding whether the information the system has put together is, indeed, a pattern.

“We wanted to make sure the algorithm didn’t make any decisions on its own,” Mr. Levine said. “For us it’s very important, because when we do determine something is a pattern, we do take operational actions based on that.”

This gets back to the idea of accountability, Mr. Chohlas-Wood said. The algorithm is there to help people search and discover, “but ultimately, the responsibility for identifying and creating patterns still lies with humans,” he said.

“Patternizr makes it much simpler,” he said. “Patternizr allows you to search for things outside of [your] precincts….[C]riminals, they don’t really follow precinct boundaries.”

Before rolling out the system, the NYPD tested Patternizr on randomly selected crime patterns and found no evidence that the algorithm flagged crimes with suspects of any particular race at a rate higher than expected by random chance.
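
The article does not describe the exact statistical test, but a check along those lines could compare the racial breakdown of suspects in flagged matches against the breakdown in the underlying complaint data. The sketch below uses a chi-square goodness-of-fit test from SciPy; the function and the choice of test are assumptions for illustration, not the NYPD’s published procedure.

```python
from collections import Counter
from scipy.stats import chisquare

def flag_rate_check(flagged_races: list, baseline_races: list, alpha: float = 0.05) -> dict:
    """Compare the racial breakdown of suspects in flagged matches with the
    breakdown in the complaint database; a small p-value would suggest some
    group is flagged more often than chance alone would predict."""
    # Flagged complaints are drawn from the same database, so every category
    # seen among flagged suspects should also appear in the baseline.
    assert set(flagged_races) <= set(baseline_races)

    categories = sorted(set(baseline_races))
    observed = Counter(flagged_races)
    baseline = Counter(baseline_races)
    n_flagged, n_baseline = len(flagged_races), len(baseline_races)

    f_obs = [observed.get(c, 0) for c in categories]
    f_exp = [n_flagged * baseline[c] / n_baseline for c in categories]

    stat, p_value = chisquare(f_obs, f_exp)
    return {"statistic": stat, "p_value": p_value, "disparity_detected": p_value < alpha}
```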

In their effort to be transparent, Mr. Chohlas-Wood and Mr. Levine submitted their paper on how they built Patternizr and the steps they took to root out bias to the INFORMS Journal on Applied Analytics, which published it in February.

Critics, however, say the NYPD didn’t go far enough.

The New York Civil Liberties Union is concerned that because Patternizr works off historical records, the data might reflect the past bias of police officers and of society as a whole, according to the NYCLU’s lead policy counsel, Michael Sisitzky.

“It’s good that the NYPD did its own study and released a report into how they were dealing with and trying to account for bias in the use of the tool. But the report was done internally. And the use of Patternizr was only revealed years after its development and implementation,” Mr. Sisitzky said.

“We’re not looking to prevent the NYPD from using tools that work. But it’s important that there be independent oversight and accountability to ensure the tools are being used for purposes that the NYPD is saying they are being used for,” he said.

To ensure fairness, the NYCLU says the NYPD should allow independent researchers to audit systems before they are tested on New Yorkers.

NYPD spokeswoman Devora Kaye said Patternizr was designed with fairness as a central priority, adding that the department has participated in the New York City Automated Decision Systems Task Force, which is tasked with recommending a process for reviewing algorithms used by the city.

Mr. Stewart of Gartner said the NYPD, by publishing its findings, took the right steps to be as transparent as possible.

Financial companies, health-care providers and other organizations that have a material impact on people’s lives should follow the NYPD’s example in explaining how their AI makes decisions, he said.

“I give talks on this for Gartner,” he said. “And one of the foundational principles that I hammer on in that talk is the need to document and publish the assumptions, the measure of fairness, and how the algorithm is working - which sound like exactly what the NYPD is doing.”