Mind Matters Natural and Artificial Intelligence News and Analysis
Image Credit: tippapatt - Adobe Stock

AIs are irrational, but not the way humans are


According to a research team at University College London, chatbots are inhumanly irrational:

AIs have also been shown to routinely fabricate information, respond inconsistently and even to get simple maths sums wrong.

In this study, researchers from UCL systematically analysed whether seven LLMs were capable of rational reasoning. A common definition of a rational agent (human or artificial), which the authors adopted, is that it reasons according to the rules of logic and probability. An irrational agent is one that does not reason according to these rules.

“AIs are irrational, but not in the same way that humans are,” ScienceDaily, June 5, 2024.

The researchers gave the Large Language Models (chatbots) a battery of 12 common cognitive psychology tests used on humans. So what did they find?

The models exhibited irrationality in many of their answers, such as providing varying responses when asked the same question 10 times. They were prone to making simple mistakes, including basic addition errors and mistaking consonants for vowels, which led them to provide incorrect answers.

Not in the same way

Humans often fail these tests too, but not for the same reasons. Also:

Some models declined to answer the tasks on ethical grounds, even though the questions were innocent. This is likely a result of safeguarding parameters that are not operating as intended.

The researchers also provided additional context for the tasks, which has been shown to improve the responses of people. However, the LLMs tested didn’t show any consistent improvement.

From the open access paper at Royal Society Open Science: “We find that, like humans, LLMs display irrationality in these tasks. However, the way this irrationality is displayed does not reflect that shown by humans. When incorrect answers are given by LLMs to these tasks, they are often incorrect in ways that differ from human-like biases. On top of this, the LLMs reveal an additional layer of irrationality in the significant inconsistency of the responses.”
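The inconsistency the authors describe is straightforward to probe: ask a model the same question repeatedly and tally how many distinct answers come back. Here is a minimal sketch of that check; the `ask_model` function is a hypothetical stand-in for any chatbot API (a canned stub here so the sketch runs), and the prompt is just an illustrative reasoning-task question:

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real chatbot API call.
    This stub returns a fixed answer; a real model, as the
    UCL study found, may answer differently on each call."""
    return "B"

def consistency_check(prompt: str, trials: int = 10) -> Counter:
    """Ask the same question `trials` times and tally the answers.
    A perfectly consistent responder yields a single-entry Counter;
    the study reports varying responses across 10 repetitions."""
    return Counter(ask_model(prompt) for _ in range(trials))

tally = consistency_check("Which cards must you turn over: A, K, 4, or 7?")
print(tally)  # for the canned stub above: Counter({'B': 10})
```

With a real model behind `ask_model`, a spread of answers in the tally would reproduce the kind of response inconsistency the paper reports.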


The researchers appear hopeful that better technology will solve these problems, but it is not clear how. Suppose the problem is “We need a bigger computer.” That is a technical problem. But what if the problem is “We need a conscious computer”? We don’t even know what human consciousness is. The best we could hope for is to get lucky, and if we did, it would be dumb luck, not science.

