

Gary Smith


Large Language Models: Inconsistent Because They’re Unintelligent
Here’s what happened when I tested popular LLMs on student exercise questions I have been using for over fifty years
For a Sounder Approach to Investment: Deep-Six the 60/40 Rule
Sometimes, good decisions require more than blindly accepting advice, whether from humans or computers!
Intelligence Demands More than Pressing a Lever to Obtain Water
I continue to be astonished by how willing people are to assume that LLMs are intelligent because they give glib, confident answers
Glued to Our Phones: Our Troubling Addiction to Social Media
It's disconcerting to see people out for a nature walk — completely absorbed in the traffic on their phones
A Realistic Direction for Artificial General Intelligence Today
Based on GPT5's performance to date, it would make a superb substitute for a mansplainer I know of — call him Brock
GPT 5.0 Doesn’t Understand But Is Eager to Please
Over a number of tries, it couldn’t get the labels on an illustration right because it does not understand what the words mean and how they relate to the image
What Kind of a “PhD-level Expert” Is ChatGPT 5.0? I Tested It.
The responses to my three prompts made clear that GPT 5.0, far from being the expert that CEO Sam Altman claims, can’t address the meanings of words or concepts
ChatGPT-5 Tries Out “Rotated” Tic-Tac-Toe. You Be the Judge…
They say that dogs tend to resemble their owners. ChatGPT very much resembles Sam Altman — always confident, often wrong
AI Is a Long Way From Replacing Software Coders
Despite C-suite claims, LLMs are not likely to take coders' jobs any time soon because they do not understand what words mean and how words relate to the physical world
Sci Foo Unconference: Horseshoe Crabs, Alchemy and (Of Course) AI
It’s called an “unconference” because attendees do not present papers in pre-organized sessions; they propose topics at the venue and those with the most interest are selected
The Job Market is Telling Us Something About AI and Jobs…
But it’s not telling us the same things as the AI hypesters are telling us
No, Large Reasoning Models Do Not Reason
Large reasoning models continue the Large Language Model detour away from artificial general intelligence
LLMs Are Bad at Good Things, Good at Bad Things
LLMs may well become smarter than humans in the near future but not because these chatbots are becoming more intelligent
Yes, Large Language Models May Soon be Smarter than Humans…
But not for the reason you think
LLMs Still Cannot be Trusted for Financial Advice
The limitations of Large Language Models (chatbots) are illustrated by their struggles with financial advice
Large Language Models: A Lack-of-Progress Report
They will not be as powerful as either hoped or feared
Machine Learning Algos Often Fail: They Focus on Data, Ignore Theory
Without a theory, a pattern is just a pattern
Yes, the AI Stock Bubble Is a Bubble
It's unfolding the way a financial bubble typically does