Image Credit: NicoElNino - Adobe Stock

Intelligence Demands More than Pressing a Lever to Obtain Water

I continue to be astonished by how willing people are to assume that LLMs are intelligent because they give glib, confident answers

An AI researcher recently called my attention to an article in Springer Nature that argued that large language models (LLMs) have intelligence comparable to human intelligence if we take a behaviorist approach to human intelligence:

Image Credit: Africa Studio - Adobe Stock

During the training phase, the connections of the artificial neural networks are modulated via mechanisms that take up the principles of associative learning from the behaviorist psychology of the first half of the twentieth century. Associative learning (or conditioning) creates functional links between stimuli, responses, and consequences; for example, when a rat presses a lever and obtains water, the association between the lever and the pressing response is reinforced; similarly, when a LLM in the training phase produces the expected word in response to a text passage, it receives a “reward” signal, which reinforces the association between this textual context and the production of this word….

AI does not copy human intelligence, cognition, or whatever you want to call it. What is being copied are the processes from which it emerges…. [T]he more AI progresses, the more the reducibility of all behavioral phenomena to simple processes becomes evident and the harder it is to maintain the idea of our intelligence as something special, irreducible, or metaphysical.

I responded:

Thank you for sharing. I am not well-informed about the science of intelligence but consider this example:

I need to borrow $1,200 to buy a refrigerator. Should I choose a 1-year loan with a 12% interest rate or a 20-year loan with a 1% interest rate?

I don’t know where my intelligence comes from, but I understand the logic of present values, which implies that the 1% loan is cheaper for all relevant discount rates, and I also know from experience that a 20-year loan with a 1% interest rate is irresistible.

Not so with LLMs, even after training by humans.

I then showed him the bad answers given by Claude’s Sonnet 4.5 and OpenAI’s GPT-5. Sonnet 4.5 gave approximately correct monthly payments, but GPT-5 missed badly. Both compared the total payments for these loans and completely ignored the time value of money. (A dollar paid 20 years from now is far less burdensome than a dollar paid one year from now.)
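To make that parenthetical concrete, here is the standard discounting arithmetic. The 4% annual discount rate is my illustrative assumption, not a figure from the exchange; any positive discount rate tells the same story:

\[
% the 4% annual discount rate d is an illustrative assumption
PV = \frac{\$1}{(1+d)^{t}}, \qquad \frac{\$1}{(1.04)^{1}} \approx \$0.96, \qquad \frac{\$1}{(1.04)^{20}} \approx \$0.46
\]

In today’s dollars, a dollar due in 20 years costs less than half as much as a dollar due next year.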

I wrote:

How do you explain the differences in our answers?

If humans use rewards for good trial-and-error answers to train an LLM to give a good answer, isn’t that fundamentally different from the way I learned how to analyze such problems?

I can apply my present-value intelligence to all sorts of questions, but an LLM would have to be trained for each question individually because it wouldn’t “know” how to apply the correct answers for one scenario to other scenarios.

My correspondent’s response:

Before anything else, I suspect LLMs struggle with your question because it lacks explicit details (e.g., payment structure, compounding), forcing them to make assumptions, which could be a source of confusion. Could you try a clearer, more detailed prompt?

I continue to be astonished by how willing people are to assume that LLMs are intelligent because they give glib, confident answers. This correspondent’s default assumption is evidently that LLMs are intelligent. So, if they give wrong answers, it must be the question’s fault.

Image Credit: jirsak - Adobe Stock

I dutifully restated the question to ensure that there were no misunderstandings:

I need to borrow $1,200 to buy a refrigerator. Which loan should I choose?

Loan 1: a 1-year loan amortized over 12 months with 12 equal monthly payments and a 12% APR.

Loan 2: a 20-year loan amortized over 240 months with 240 equal monthly payments and a 1% APR.

This time, both LLMs gave approximately correct mathematical calculations, but they continued to ignore the time value of money and to give clearly bad advice.

Sonnet 4.5:

The only advantage of Loan 2 is the tiny monthly payment ($5.50), but you’d be paying for a refrigerator for 20 years – long after it’s broken down and been replaced! Most refrigerators only last 10-15 years

The key question: Can you afford $106.62/month? If yes, definitely go with Loan 1. If that payment would strain your budget, you might consider:

  • A less expensive refrigerator
  • Saving up a bit more for a larger down payment
  • Looking for 0% financing deals that appliance stores sometimes offer

The low APR on Loan 2 is deceptive – over such a long term, you actually pay more despite the lower rate

GPT-5:

Comparison

  • Loan 1 costs you $79.44 in interest, finished in 1 year.
  • Loan 2 costs you $124.80 in interest, but the monthly payment is tiny and stretched out over 20 years.

  • Best financial choice: Loan 1 – lower total interest, done quickly.
  • Best cash-flow choice: Loan 2 – much smaller monthly burden, but costs more overall and drags on for 20 years.

Amazon book cover for The AI Delusion by Gary Smith

I laughed out loud when I read “drags on for 20 years.” A better choice of words would be that the payments are “postponed for 20 years.”
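Since neither chatbot did the present-value arithmetic, here is a minimal sketch of it in Python. This is my own check, not either model’s output, and the 4% annual discount rate is an illustrative assumption; any realistic discount rate produces the same ranking:

```python
# Illustrative check of the two loans under standard monthly amortization.
# Not the author's code or either model's output; the 4% annual discount
# rate used for the present-value comparison is an assumption.

def monthly_payment(principal, annual_rate, months):
    """Level payment on an amortized loan: P * r(1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate / 12
    return principal * r * (1 + r) ** months / ((1 + r) ** months - 1)

def present_value(payment, annual_discount_rate, months):
    """Present value of a level monthly annuity at the given discount rate."""
    d = annual_discount_rate / 12
    return payment * (1 - (1 + d) ** -months) / d

loans = [("Loan 1 (12% APR, 12 months)", 0.12, 12),
         ("Loan 2 (1% APR, 240 months)", 0.01, 240)]

for label, rate, months in loans:
    pay = monthly_payment(1200, rate, months)
    total = pay * months
    pv = present_value(pay, 0.04, months)  # 4% discount rate: an assumption
    print(f"{label}: payment ${pay:.2f}, total ${total:.2f}, PV ${pv:.2f}")

# Approximate output:
# Loan 1 (12% APR, 12 months): payment $106.62, total $1279.42, PV $1252.13
# Loan 2 (1% APR, 240 months): payment $5.52, total $1324.50, PV $910.72
```

At a 4% discount rate, Loan 2’s 240 payments are worth about $911 in today’s dollars, less than the $1,200 borrowed, while Loan 1’s payments are worth about $1,252. That is why the 1% loan is the obvious choice, whatever a raw total-interest comparison says.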

I keep coming back to the elevator pitch for my book, The AI Delusion (2018): “the real danger today is not that computers are smarter than us, but that we think computers are smarter than us and consequently trust them to make important decisions they should not be trusted to make.”

The fact that ostriches and LLMs can be trained to learn some behaviors does not mean that they will ever be as intelligent as humans in any meaningful sense of the word. The fact that Shohei Ohtani and I both learned to pitch and hit baseballs by practicing pitching and hitting does not mean that we are equally adept at either.

In the loan example, human trainers can nudge LLMs to give good answers for specific loans but this training won’t prepare them for subtle variations in the loans, let alone the universe of financial questions that require present value logic. Nor can LLMs handle subjective uncertainties about specific financial scenarios.

I remain skeptical that LLMs will ever be able to generate consistently reliable and useful advice when they do not know how words relate to the real world.


Gary N. Smith

Senior Fellow, Walter Bradley Center for Natural and Artificial Intelligence
Gary N. Smith is the emeritus Fletcher Jones Professor of Economics at Pomona College. His research on stock market anomalies, statistical fallacies, the misuse of data, and the limitations of AI has been widely cited. He is the author of more than 100 research papers and 20 books, most recently, Standard Deviations: The truth about flawed statistics, AI and big data, Duckworth, 2024.