
Can AI Really Predict Crime a Week in Advance? That’s the Claim.

University of Chicago data scientists claim 90% accuracy for their algorithm using past data — but it’s hard to evaluate

The University of Chicago recently announced to great fanfare that,

Data and social scientists from the University of Chicago have developed a new algorithm that forecasts crime by learning patterns in time and geographic locations from public data on violent and property crimes. The model can predict future crimes one week in advance with about 90% accuracy.

University of Chicago Medical Center, “Algorithm Predicts Crime a Week in Advance, but Reveals Bias in Police Response” at Newswise (June 28, 2022)

Many thought immediately of the 2002 movie Minority Report, in which three psychics (“precogs”) visualize murders before they occur, thereby allowing the special PreCrime police to arrest would-be assailants before the crimes are committed. Have these University of Chicago researchers made this fiction a reality?

No. Their model is much more prosaic. What the model predicts, using historical data on where and when crimes have occurred, is where and when crimes are likely to occur. The model doesn’t predict that Jessie will assault Jodie at 10 pm on April 1 at 123 Waverly Place. Instead, it “predicts” hot spots, relatively large geographic areas where there are likely to be a relatively large number of, say, street crimes.
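
A crude version of hot-spot forecasting is easy to sketch. The few lines of Python below are my own toy illustration with made-up data, not the Chicago team’s algorithm; they simply count past crimes by grid cell and flag the busiest cells as next week’s likely hot spots.

from collections import Counter

# Hypothetical data: one record per reported crime, as (week, grid cell).
past_crimes = [
    (1, "cell_A"), (1, "cell_A"), (1, "cell_C"),
    (2, "cell_A"), (2, "cell_B"), (2, "cell_A"),
    (3, "cell_A"), (3, "cell_C"), (3, "cell_A"),
]

def predict_hot_spots(crimes, top_k=2):
    # Flag the cells with the most past crimes as next week's predicted hot spots.
    counts = Counter(cell for _, cell in crimes)
    return [cell for cell, _ in counts.most_common(top_k)]

print(predict_hot_spots(past_crimes))  # ['cell_A', 'cell_C']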

That doesn’t seem particularly difficult, but the authors report that their model performed really well in the National Institute of Justice Real-Time Crime Forecasting Challenge. Entrants were tasked with predicting hot-spot crime areas in Portland.

The challenge was realistic in that the competing teams were given historical data for the period March 1, 2012, through July 31, 2016, that they could use to develop and calibrate their models. Additional data were released over the next six months for model testing. During the final week of this six-month testing period, between February 22, 2017, and February 28, 2017, the teams could submit their official hot-spot forecasts for the next week, two weeks, month, two months, or three months beginning on March 1, 2017. This was, as advertised, a real-time forecasting challenge in that the 62 entrants competing for $1.2 million in prize money had to make predictions of things that had not yet occurred.

Too often, people “predict” things that have happened in the past — which is often easy and usually useless because, as the Danish proverb warns,

It is difficult to make predictions, especially about the future.

The University of Chicago team reported that it did well in the Portland challenge, but it made its predictions five years after the real-time contest ended! We have no way of knowing how often the model was tweaked to predict the past better, and it is surely not fair to compare such backtests with real-time forecasts.
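
To see why after-the-fact “predictions” are suspect, consider a toy sketch, an illustration of the general pitfall rather than a description of anything the team did: generate many candidate models, keep whichever one best fits the known past, and then score it on weeks it has never seen.

import random

random.seed(0)
weeks = 20
past = [random.randint(0, 1) for _ in range(weeks)]    # known outcomes
future = [random.randint(0, 1) for _ in range(weeks)]  # genuinely unseen outcomes

def accuracy(guesses, outcomes):
    return sum(g == o for g, o in zip(guesses, outcomes)) / len(outcomes)

# "Tweak" by trying 1,000 fixed patterns of weekly guesses and keeping
# whichever one happens to fit the past best.
candidates = [[random.randint(0, 1) for _ in range(weeks)] for _ in range(1000)]
best = max(candidates, key=lambda c: accuracy(c, past))

print(f"backtest accuracy:  {accuracy(best, past):.2f}")   # well above 0.5
print(f"real-time accuracy: {accuracy(best, future):.2f}") # back near 0.5

The selected model looks good only because it was chosen after the outcomes were known; on genuinely new data it is back to flipping coins.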

On the other hand, we should be thankful that the Chicago model doesn’t claim to predict specific individual crimes, like Jessie assaulting Jodie. Too many people might believe the algorithm and want Jessie arrested. We are frighteningly close to that nightmare scenario.

Algorithmic criminology is now widely used to set bail for people who are arrested, determine prison sentences for people who are convicted, and decide on parole for people who are in prison. Richard Berk is a professor of criminology and statistics at the University of Pennsylvania. One of his specialties is algorithmic criminology: “forecasts of criminal behavior and/or victimization using statistical/machine learning procedures.” He wrote that “The approach is ‘black box’, for which no apologies are made,” and gave an alarming example: “If I could use sun spots or shoe size or the size of the wristband on their wrist, I would. If I give the algorithm enough predictors to get it started, it finds things that you wouldn’t anticipate.” The things we don’t anticipate are mostly things that don’t make sense but happen to be coincidentally correlated.
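
Berk’s point about unanticipated predictors is easy to illustrate. In the toy sketch below, which uses purely random, made-up data, hundreds of meaningless predictors (stand-ins for sunspots or shoe sizes) are scored against a random outcome, and the best of them agrees with the outcome far more often than chance would seem to allow.

import random

random.seed(1)
people = 50
outcome = [random.randint(0, 1) for _ in range(people)]  # hypothetical outcome, e.g., re-arrested or not

best_name, best_agreement = None, 0.0
for i in range(500):
    # Each "predictor" is pure noise, unrelated to the outcome.
    predictor = [random.randint(0, 1) for _ in range(people)]
    agreement = sum(p == o for p, o in zip(predictor, outcome)) / people
    if agreement > best_agreement:
        best_name, best_agreement = f"predictor_{i}", agreement

print(best_name, f"matches the outcome {best_agreement:.0%} of the time")

A black-box algorithm fed enough such predictors will happily latch onto the lucky one.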

It is unsettling that Berk and other intelligent, well-meaning people think that bail, sentencing, and parole decisions should be based on what may well be statistical coincidences. In addition, some predictors may be proxies for gender, race, sexual orientation, and other factors that should not be considered. People should not be given onerous bail or unreasonable sentences, or be denied parole, because of their gender, race, or sexual orientation; that is, because they belong to certain groups. What should matter are the specific facts of a particular case.

If decisions about releasing people from jail are based on AI algorithms, it is just a short step to putting people in jail based on statistical algorithms. In 2016, two Chinese researchers reported that they could apply their computer algorithm to scanned facial photos and predict with 89.5 percent accuracy whether a person is a criminal. They reported that their algorithm identified “some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose–mouth angle.” Such algorithms are not only easily misled by statistical coincidences, they are inherently discriminatory. Indeed, it is hard to imagine something more racially discriminatory than facial recognition software.

Yet, one blogger wrote,

What if they just placed the people that look like criminals into an internment camp? What harm would that do? They would just have to stay there until they went through an extensive rehabilitation program. Even if some went that were innocent; how could this adversely affect them in the long run?

As I have written elsewhere, the real danger today is not that computers are smarter than us, but that we think that computers are smarter than us and consequently trust them to make decisions they should not be trusted to make.


You may also wish to read: The AI illusion – state-of-the-art chatbots aren’t what they seem. GPT-3 is very much like a performance by a good magician. You can thank human labelers, not any intelligence on GPT-3’s part, for improvements in its answers. (Gary Smith)


Gary N. Smith

Senior Fellow, Walter Bradley Center for Natural and Artificial Intelligence
Gary N. Smith is the Fletcher Jones Professor of Economics at Pomona College. His widely cited research on financial markets, statistical reasoning, and artificial intelligence often focuses on stock market anomalies, statistical fallacies, and the misuse of data. He is the author of dozens of research articles and 16 books, most recently The Power of Modern Value Investing: Beyond Indexing, Algos, and Alpha, co-authored with Margaret Smith (Palgrave Macmillan, 2023).
