The possibility of an AI judge is raised in a recent article on new developments in artificial intelligence (AI) in court systems, which include Los Angeles’ Gina the Avatar for traffic ticket resolution and a proposed Jury Chat Bot.
Some experts think AI might be fairer than human judgment:
It may not be particularly hard to build an AI-based system that delivers better results than humans, panelists at the conference noted. There's plenty of evidence of all kinds of human bias built into justice systems. In 2011, for instance, a study of an Israeli parole board showed that the board delivered harsher decisions in the hour before lunch and the hour before the end of the day. – Stephanie Condon, "AI in the court: Are robot judges next?" at ZDNet
But others warn of AI's limitations. Many AI decisions are not "explainable": the computer system churns through 10,000 cases and arrives at a mathematical solution. Humans do not think that way and may not regard the decision as fair, no matter what it is.
In any event, one Superior Court judge warns that many cases don’t come down to information alone:
"In my experience in judging, especially with a self-represented litigant, most of the time people don't even know what to tell you," she said. If an automated system builds its decision based on the information it receives, she continued, "how are you going to train it to look for other stuff? For me that's a very subjective, in-the-moment thing."
For instance, Chang said, "if they're fidgeting, I'll start asking them questions, and it will come to a wholly different result." – Stephanie Condon, "AI in the court: Are robot judges next?" at ZDNet
She cited immigration cases in which the unsuccessful litigant was murdered immediately after deportation to a home country. Some such risks may be hard to quantify, especially if few wish to know about or accept responsibility for the outcomes.
On the other hand, we may be prone to inflating the difference AI will make. Gonzaga University law professor (emeritus) David DeWolf doesn't see AI in the courtroom as a threat to justice. He told Mind Matters News,
It’s hard to be too critical of AI in the courtroom because the current state of the U.S. legal system is so flawed. Resolving disputes through a trial is the very last resort, like going to war when diplomacy fails. It is never your first option.
Taking criminal sentencing as an example, there are multiple axes upon which the “right” sentence should be built. Retribution, deterrence, incapacitation, and rehabilitation are all relevant considerations. The desire to individualize a sentence to optimize these factors has to be limited to avoid arbitrary subjective judgments by the judge (or commission) imposing the sentence.
The late economist Kenneth Boulding pointed out that there were three ways of organizing human behavior: coercion, exchange, and gift. Armies and the legal system operate on the basis of coercion. Markets operate on the basis of exchange, while families, friends and churches operate on the basis of gift. All societies incorporate all three systems, but the less they rely on coercion, and the more they benefit from gift, the healthier they are.
I’m less worried about the use of AI in the legal system than I am about the increasing dependence upon law – a form of coercion – to regulate human behavior.
If Dr. DeWolf proves correct, the principal concern should perhaps be that AI can do nothing to address fundamental problems with the way a system works. Those problems derive from human choices in the face of incentives, constructive or perverse.
We might also ask, what exactly has AI changed in various professions today? In disciplines that require years of study, like law, AI is not taking jobs so much as creating them. Just a few examples:
- Can AI prove that Shakespeare had ghostwriters? Its verdicts, of varying reliability, will likely give scholars more food for thought than ever.
- Does AI challenge Biblical archeology? Far from it, AI can sometimes decipher texts burnt to a crisp in temple fires, enabling us to document much earlier dates for the first manuscripts of sacred scriptures.
- Can AI help us decipher lost languages? That depends mainly on the reasons we haven’t yet deciphered them. But hundreds of thousands of ancient documents lie untranslated today even if we can in principle decipher them because it is tedious work for the few specialists in the languages. Again, if AI does the tedious work, scholars will have more to write about.
- Will AI end astrophysics as we know it? By automating the many hours spent scanning the sky, astrophysics should provide far more data for scientists to mull over, so the effect should be the opposite.
In general, in fields where human judgment is required, the huge increase in information that AI methods offer should result in more opportunities to exercise it.
Where has AI failed? In one spectacular example, a hospital tried to automate and streamline the process of telling a man that he was dying. "Never again!" the administration vowed after a huge outcry. But that was a failure in judgment on their part; an impersonal approach to dying should never have been considered in the first place.
- Robot-proofing your career, Peter Thiel's way
- Students, don't let smart machines disrupt your future. Three ways you can avoid life in Mom's basement and the job pouring coffee.
- Creative freedom, not robots, is the future of work. In an information economy, there will be a place where the human person is at the very center.
- Maybe the robot will do you a favor and snatch your job. The historical pattern is that drudgery gets automated, not creativity.