
AI-Lawyers Will Have Fools for Clients

When tech prophets promise everything—except accuracy.

With OpenAI’s financial position increasingly precarious, and Sam Altman’s reputation similarly so, others have stepped into the breach with implausibly optimistic predictions about the capabilities of large language models (LLMs) such as OpenAI’s GPT, Google’s Gemini, and Anthropic’s Claude.

In a recent Financial Times interview, Mustafa Suleyman, CEO of Microsoft AI, said that AI is on the cusp of “professional-grade AGI” that will deliver “human-level performance on most, if not all professional tasks” and, as a consequence, most of the work currently done by accountants, lawyers and other professionals “will be fully automated by an AI within the next 12 to 18 months.” (I will assess his prediction 18 months from now. Anyone want to wager on its accuracy?)

Suleyman’s unrealistic blather is deeply reminiscent of AI guru Geoffrey Hinton’s 2016 assertion: “We should stop training radiologists now; it’s just completely obvious within five years deep learning is going to do better than radiologists.”

Nearly ten years later, the demand for human radiologists is stronger than ever. Of course, Hinton also made the circular claim that the statistical ability of LLMs to autocomplete sentences is proof that they understand the text they generate:

People say, “It’s just glorified autocomplete.” . . . Now, let’s analyze that. Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what’s being said. That’s the only way. So by training something to be really good at predicting the next word, you’re actually forcing it to understand.

To the contrary, there is ample evidence that: (a) people are too quick to assume that computer programs that write are also thinking; and (b) LLMs do not understand the text they input and output in any meaningful sense of the word “understand.” (See here and here for examples of LLM-generated text that is lucid and dumb.)
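The gap between statistical prediction and understanding is easy to demonstrate with a toy model. The Python sketch below (my own illustration, vastly simpler than any LLM) builds a word-level Markov chain that autocompletes text purely from co-occurrence counts; it represents no meaning at all.

```python
import random
from collections import Counter, defaultdict

# A toy word-level "autocomplete": it predicts the next word purely from
# co-occurrence counts in its training text, with no notion of meaning.
def build_model(text):
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def autocomplete(model, word, length=8):
    out = [word]
    for _ in range(length):
        if word not in model:
            break
        # Sample the next word in proportion to how often it followed
        # the current word in the training text -- pure statistics.
        followers = model[word]
        word = random.choices(list(followers), weights=followers.values())[0]
        out.append(word)
    return " ".join(out)

corpus = ("the court granted the motion and the court denied the appeal "
          "and the motion to dismiss was granted by the court")
print(autocomplete(build_model(corpus), "the"))
# e.g., "the court granted by the court denied the appeal"
```

The output can sound lawyerly, yet the program obviously understands nothing. An LLM is enormously more sophisticated, but the toy shows that skill at next-word prediction does not, by itself, require understanding; Hinton’s argument assumes the very thing it purports to prove.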

I have written before about why AI is unlikely to replace lawyers and will use that example here, but similar arguments apply to most professions. Some work is rote and might be done more efficiently, if less accurately, by LLMs. For example, computers might be used to search for legal precedents, though they might be imperfect judges of relevance and, in the case of LLMs, might hallucinate (a word I generally avoid because it suggests thinking is involved). A website listing cases “where the court or tribunal has explicitly found (or implied) that a party relied on [AI-generated] hallucinated content or material” is currently approaching 1,000 cases. These are, of course, only the cases that have been caught. Lazy lawyers have no doubt filed hallucinated content orders of magnitude more often. That is an indictment not only of LLMs but also of lawyers who believe, as does Hinton, that glibness equals understanding.
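Catching a fabricated citation is, in principle, mechanical: extract the citation strings from a filing and check each one against an authoritative reporter database. Here is a minimal sketch of that idea; the regex is simplified, and known_citations is a hypothetical stand-in for a real citation index, not any actual service.

```python
import re

# Hypothetical stand-in for an authoritative citation index; in practice
# this would be a query against a real reporter database.
known_citations = {
    "347 U.S. 483",   # Brown v. Board of Education (1954)
    "410 U.S. 113",   # Roe v. Wade (1973)
}

# Simplified "volume reporter page" pattern, e.g., "347 U.S. 483".
CITE_RE = re.compile(r"\b(\d{1,4})\s+(U\.S\.|F\.2d|F\.3d|F\. Supp\.)\s+(\d{1,4})\b")

def flag_unverified(brief_text):
    """Return citations in the text that are absent from the index."""
    found = (" ".join(m.groups()) for m in CITE_RE.finditer(brief_text))
    return [cite for cite in found if cite not in known_citations]

brief = "As held in 347 U.S. 483, and again in 999 F.3d 123, the rule is settled."
print(flag_unverified(brief))  # ['999 F.3d 123'] -- worth checking by hand
```

That a check this simple would catch many of the filings in that database says less about the difficulty of the problem than about the diligence of the lawyers who skipped it.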

The real work of good lawyers is not searching for precedents but using critical thinking skills to formulate and articulate persuasive arguments and negotiate acceptable agreements. In a recent article in the Yale Alumni Magazine, Clay Shirky, NYU’s vice provost for AI and Technology in Education, wrote about the confusion between an LLM passing the LSAT and being a good lawyer:

I asked one of my lawyer colleagues about the LSAT story; he replied, “I cannot convey to you how little of my day is like answering a question on the LSAT.”

A lawyer friend of mine similarly emailed me that law school was not at all about learning the answers to LSAT questions: “One of my Yale Law professors, Charles Reich, told us that his job — the job of all law professors — is to teach us how to think like a lawyer.”

I have written previously about a hypothetical example of a lawyer advising a client in a criminal case who has been offered a pre-trial deal:

As an experienced trial lawyer, you consider the evidence, the composition of the jury, the competency of the prosecutor, whether your client will testify and how that might go, and other relevant information. You tell your client the possible outcomes and your assessment of the likelihood of these various possibilities. How could an LLM — no matter the amount of pre-training and post-training — offer equally well-informed and trustworthy advice? An LLM has no way of understanding the relevant information and no means of coming up with subjective probabilities.

If your client rejects the plea bargain, then you need to decide on the most useful evidence, the most promising way of presenting the case, and the lines of cross-examination that are most likely to be successful. Again, this is based on your knowledge of the details of this particular case and your past experience handling other relevant cases.

Prosecutors make very similar assessments in deciding whether to file charges, in offering a plea bargain, and in negotiating a deal. In civil cases, attorneys for both sides make recommendations based on analogous considerations. No amount of hyperscaling on larger and larger databases and no amount of post-training, even by experienced lawyers with considerable domain-specific experience, can possibly prepare an LLM to use the detailed specifics of a particular case to come up with reliable subjective probabilities, to negotiate a good deal, or to prepare and present compelling arguments to a judge or jury.
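To make the point about subjective probabilities concrete, consider a stylized plea-bargain calculation (my own hypothetical numbers, not drawn from any real case). The arithmetic is trivial; what matters is that the key input, the probability of conviction, is a judgment about this particular case, jury, and prosecutor.

```python
# Hypothetical plea-bargain arithmetic. All numbers are invented for
# illustration; in practice they are a lawyer's subjective judgments.
plea_sentence = 2.0     # years, if the client accepts the deal
p_conviction = 0.30     # subjective probability of conviction at trial
trial_sentence = 8.0    # likely sentence if convicted at trial

expected_trial = p_conviction * trial_sentence   # 0.30 * 8.0 = 2.4 years
print(f"plea: {plea_sentence} years; expected trial: {expected_trial} years")
# With these numbers the plea looks narrowly better; drop p_conviction to
# 0.20 and going to trial wins (1.6 expected years). Everything hinges on
# a probability that no amount of pre-training can supply.
```

Shift the conviction probability by a tenth and the recommendation flips, which is precisely why the number must come from case-specific judgment rather than from patterns in training text.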

In every subfield of law, in any situation involving substantive disagreements, LLMs will be stymied by their inability to assess the unique details and produce compelling subjective probabilities and strategies. If an AI-lawyer replaces a human lawyer, it will have a fool for a client.


Gary N. Smith

Senior Fellow, Walter Bradley Center for Natural and Artificial Intelligence
Gary N. Smith is the emeritus Fletcher Jones Professor of Economics at Pomona College. His research on stock market anomalies, statistical fallacies, the misuse of data, and the limitations of AI has been widely cited. He is the author of more than 100 research papers and 20 books, most recently Standard Deviations: The Truth About Flawed Statistics, AI and Big Data (Duckworth, 2024).
