07/15/2022
From the Washington Post news section:
The never-ending quest to predict crime using AI
The practice has a long history of skewing police toward communities of color. But that hasn’t stopped researchers from building crime-predicting tools.
By Pranshu Verma
July 15, 2022 at 7:00 a.m. EDT… A group of University of Chicago scientists unveiled an algorithm last month, boasting in a news release of its ability to predict crime with “90% accuracy.”
The algorithm identifies locations in major cities that it calculates have a high likelihood of crimes, like homicides and burglaries, occurring in the next week. The software can also evaluate how policing varies across neighborhoods in eight major cities in the United States, including Chicago, Los Angeles and Philadelphia.
But using artificial intelligence to direct law enforcement rings alarm bells for many social justice scholars and criminologists, who cite a long history of such technology unfairly suggesting increased policing of Black and Latino people.
… Police have long used any tool available to predict crime. Before technological advances, cops would huddle in conference rooms and put pins of crime incidents on a map, hoping the clusters would help them figure out where they should look next. …
Predictive policing tools are built by feeding data — such as crime reports, arrest records and license plate images — to an algorithm, which is trained to look for patterns to predict where and when a certain type of crime will occur in the future.
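For readers wondering what "feeding data to an algorithm" actually looks like, here is a minimal sketch of that generic pipeline in Python: synthetic records, hypothetical features, and a stock classifier, not any vendor's actual system.

```python
# Minimal sketch of the generic pipeline described above: train a classifier
# on historical (location, time) records and score cells for near-future risk.
# All data is synthetic and all features are hypothetical stand-ins; this is
# not the code of any actual predictive-policing product.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Fake historical records: grid-cell id, hour of day, day of week,
# incident count over the prior four weeks.
n = 5000
X = np.column_stack([
    rng.integers(0, 100, n),   # grid-cell id
    rng.integers(0, 24, n),    # hour of day
    rng.integers(0, 7, n),     # day of week
    rng.poisson(2.0, n),       # recent incident count
])
# Synthetic label: did an incident occur in the cell the following week?
# (Correlated with recent counts purely so the toy model has some signal.)
y = (X[:, 3] + rng.normal(0, 1, n) > 3).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score one hypothetical cell/time for the coming week.
print(model.predict_proba([[42, 22, 5, 4]])[0, 1])
```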
But algorithms are only as good as the data they are fed, which is a problem particularly for people in the United States, said Vincent Southerland, the co-faculty director of New York University’s Center on Race, Inequality and the Law.
Historically, police data in the United States has been biased, according to Southerland. Cops are more likely to arrest or charge someone with a crime in low-income neighborhoods dominated by people of color, a reality that doesn’t necessarily reflect where crime is happening, but where cops are spending their time.
For example, you can just shoot somebody in Beverly Hills and leave the dead body out on the curb for the garbage truck to take away and the Beverly Hills PD will almost never hassle you about it due to your white privilege. Whereas there’s no litter or graffiti in South-Central L.A. due to the LAPD locking up People of Color for even the pettiest offenses.
That means most data sets of criminal activity overrepresent people of color and low-income neighborhoods. Feeding that data into an algorithm leads it to suggest more criminal activity is in those areas, creating a feedback loop that is racially and socioeconomically biased, Southerland added.
“You have data that is infected by, or tainted by, some bias — and that bias is going to appear on the other side of the analysis,” he said. “You get out of it, what you put into it.”
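His "you get out of it what you put into it" point can be made concrete with a toy simulation. In the sketch below, every number is invented: two areas have identical true crime rates, but one starts with more recorded incidents. Because patrols follow the records and new records accrue where the patrols go, the initial skew never washes out.

```python
# Toy simulation of the feedback loop (all numbers invented): two areas with
# IDENTICAL true crime rates, but area 0 starts with more recorded incidents.
# Patrols are allocated in proportion to the records, and new records accrue
# where patrols go, so the initial skew persists indefinitely.
import numpy as np

true_rate = np.array([10.0, 10.0])   # same underlying crime in both areas
recorded = np.array([30.0, 10.0])    # historical over-recording in area 0
rng = np.random.default_rng(1)

for _ in range(20):
    patrol_share = recorded / recorded.sum()          # predictions drive patrols
    observed = rng.poisson(true_rate * patrol_share)  # crime is recorded where police look
    recorded = recorded + observed

# Area 0 still holds roughly 75% of the records despite equal true rates.
print(recorded / recorded.sum())
```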
In the real world, predictive policing software has caused significant problems.
In 2019, the Los Angeles Police Department suspended its crime prediction program, LASER, which used historical crime data to predict crime hotspots and Palantir software to assign people criminal risk scores, after an internal audit showed it led police to unfairly subject Black and Latino people to more surveillance.
In Chicago, the police used predictive policing software from the Illinois Institute of Technology to create a list of people most likely to be involved in a violent crime. A study from RAND and a subsequent investigation from the Chicago Sun-Times showed that the list included every single person arrested or fingerprinted in Chicago since 2013. The program was scrapped in 2020.
… Chattopadhyay said his team’s software was made knowing the troubled past of algorithms.
In making the algorithm, Chattopadhyay’s team segmented major cities into 1,000-square-foot city blocks and used city crime data from the last three to five years to train it. The algorithm spits out whether there is a high or low risk of crime happening in a segment at a certain time, up to one week into the future.
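As a rough illustration of that spatial step, here is a sketch that interprets the tiles as roughly 1,000 feet on a side (the "1,000-square-foot" wording is ambiguous) and uses a crude flat projection; the paper's actual tiling scheme may well differ.

```python
# Back-of-the-envelope spatial discretization: bucket incident coordinates
# into cells roughly 1,000 feet on a side, using a flat local approximation.
# Illustration only; the study's actual tiling may differ.
import math

CELL_FT = 1000.0
FT_PER_DEG_LAT = 364000.0  # rough feet per degree of latitude

def cell_index(lat, lon, lat0, lon0):
    """Map (lat, lon) to an integer (row, col) cell ~1,000 ft on a side."""
    ft_per_deg_lon = FT_PER_DEG_LAT * math.cos(math.radians(lat0))
    row = int((lat - lat0) * FT_PER_DEG_LAT // CELL_FT)
    col = int((lon - lon0) * ft_per_deg_lon // CELL_FT)
    return row, col

# Two nearby points in Chicago land in the same ~1,000 ft cell.
print(cell_index(41.8805, -87.6300, lat0=41.6, lon0=-87.9))
print(cell_index(41.8810, -87.6290, lat0=41.6, lon0=-87.9))
```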
To limit bias, the team omitted crime data such as marijuana arrests, traffic stops or low-level petty crimes, because research shows Black and Latino people are more often targeted for those types of offenses. Instead, they gave the algorithm data on homicides, assaults and batteries, along with property crimes like burglaries and motor vehicle thefts.
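Conceptually, that bias-limiting step is just an allow-list on event categories, something like the following sketch (category names are illustrative, not the study's exact taxonomy).

```python
# The bias-limiting filter described above, as an allow-list on event
# categories. Names are illustrative; marijuana arrests, traffic stops,
# and petty offenses are simply absent from the allow-list.
INCLUDED = {"HOMICIDE", "ASSAULT", "BATTERY", "BURGLARY", "MOTOR VEHICLE THEFT"}

def keep(record):
    """Retain only the violent and property crime categories used for training."""
    return record["primary_type"] in INCLUDED

events = [
    {"primary_type": "BURGLARY"},
    {"primary_type": "NARCOTICS"},  # excluded: bias-prone per the article
    {"primary_type": "HOMICIDE"},
]
print([e["primary_type"] for e in filter(keep, events)])  # ['BURGLARY', 'HOMICIDE']
```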
But the main point of the study, he said, was to use the algorithm to interrogate how police are biased. His team compared arrest data from neighborhoods of varying socioeconomic levels. They found crime that happened in wealthier areas led to more arrests, whereas in poorer neighborhoods, crime didn’t always have the same effect, showing a discrepancy in enforcement.
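The comparison he describes boils down to arrests per recorded crime across neighborhoods. With made-up numbers, just to show the shape of the computation:

```python
# Illustrative enforcement-gap computation: arrests per recorded crime,
# compared across neighborhoods. All numbers are invented; only the shape
# of the calculation is the point.
crimes  = {"wealthier tract": 200, "poorer tract": 200}
arrests = {"wealthier tract": 90,  "poorer tract": 45}

for area in crimes:
    rate = arrests[area] / crimes[area]
    print(f"{area}: {rate:.0%} of recorded crimes led to an arrest")
```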
Chattopadhyay said these results help provide evidence to people who complain that law enforcement ignores poorer neighborhoods when there’s a spike in violent or property crime. “This allows you to quantify that,” he said. “To show the evidence.”
Uh…doesn’t this finding that the police don’t arrest people in poor neighborhoods flatly contradict the previous premise that they arrest too many people in poor neighborhoods?
… Criminal justice scholars, policing experts and technologists note that even if an algorithm is accurate, it can still be used by law enforcement to target people of color and those living in poorer neighborhoods for unjustified surveillance and monitoring.
Once again, uh…
Andrew Papachristos, a sociology professor at Northwestern University, said that when law enforcement uses algorithms to map and analyze crime, it often subjects people of color and low-income communities to more policing. When criticized for over-policing certain neighborhoods, departments often use data to justify the tactics, he said.
Can you people get your story straight?