Researchers: Chatbots Generally Flunk Simple Logic Test
From Futurism we learn that "when asked to show their work, the AIs would detail bizarre and erroneous lines of 'thinking' that made no sense." A group of AI researchers headed by Marianna Nezhurina of the Juelich Supercomputing Center in Germany found that a comparatively simple logic question stumps chatbots (large language models, or LLMs).

As tech writer Maggie Harrison Dupré reports at Futurism,
The paper, which has yet to be peer-reviewed, refers to the AI-stumping prompt as the “Alice in Wonderland” — or AIW — problem. It’s a straightforward reasoning question: “Alice has [X] brothers and she also has [Y] sisters. How many sisters does Alice’s brother have?” (The researchers used a few different versions of the problem, for example switching up the X and Y figures or altering the prompt language to include a few more demands, but the basic reasoning process required to solve the problem remained the same throughout.)
Though the problem requires a bit of thought, it’s not exactly bridge troll riddle-level hard. (The answer, naturally, is however many sisters Alice has, plus Alice herself. So if Alice had three brothers and one sister, each brother would have two sisters.)
Maggie Harrison Dupré, “This Simple Logic Question Stumps Even the Most Advanced AI,” Futurism, June 9, 2024
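To make the arithmetic explicit, here is a minimal sketch (in Python, not taken from the paper) of the reasoning the models are asked to perform: each of Alice's brothers has all of Alice's sisters plus Alice herself, so the brother count never enters into the answer.

def sisters_of_alices_brother(num_brothers: int, num_sisters: int) -> int:
    # Alice has num_brothers brothers and num_sisters sisters.
    # Each brother's sisters are Alice's sisters plus Alice herself.
    return num_sisters + 1

# The example from the Futurism article: three brothers, one sister.
assert sisters_of_alices_brother(3, 1) == 2
print(sisters_of_alices_brother(3, 1))  # prints 2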
But OpenAI’s GPT-3, GPT-4, and GPT-4o models, Anthropic’s Claude 3 Opus, and Google’s Gemini didn’t really get it. Neither did a number of lesser-known chatbots: “And the issues went beyond baseline inaccuracies: when asked to show their work, the AIs would detail bizarre and erroneous lines of ‘thinking’ that made no sense — and even stranger, when told their work was inaccurate, models repeatedly became indignant and doubled down on their incorrect answers.”
The researchers describe it this way in their abstract:
We demonstrate here a dramatic breakdown of function and reasoning capabilities of state-of-the-art models trained at the largest available scales which claim strong function, using a simple, short, conventional common sense problem formulated in concise natural language, easily solvable by humans. The breakdown is dramatic, as models also express strong overconfidence in their wrong solutions, while providing often non-sensical “reasoning”-like explanations akin to confabulations to justify and backup the validity of their clearly failed responses, making them sound plausible. Various standard interventions in an attempt to get the right solution, like various type of enhanced prompting, or urging the models to reconsider the wrong solutions again by multi step re-evaluation, fail. We take these initial observations to the scientific and technological community to stimulate urgent re-assessment of the claimed capabilities of current generation of LLMs. Such re-assessment also requires common action to create standardized benchmarks that would allow proper detection of such basic reasoning deficits that obviously manage to remain undiscovered by current state-of-the-art evaluation procedures and benchmarks.
Marianna Nezhurina et al., “Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models,” arXiv, June 4, 2024
The paper was posted as a preprint at arXiv, which means it has not yet been peer-reviewed.
Dupré’s rendition of the chatbots’ responses makes for hilarious reading.
From what we have seen, we certainly don’t want chatbots joining committees to address real-world problems.
You may also wish to read: Why chatbots (LLMs) flunk Grade 9 math tests. Lack of true understanding is the Achilles heel of Large Language Models (LLMs). Have a look at the excruciating results. Chatbots don’t understand, in any meaningful sense, what words mean and therefore do not know how the given numbers should be used.