

Gary Smith


The Government-Debt Tipping Point Is Nonsense
There are serious problems with the economics paper by Reinhart and Rogoff, whose recommendations were widely followed
How can someone who fails be faulted when everyone else is failing?
Market turbulence can cause endowment fund managers to travel with the herd, sacrificing returns in order to reduce annual volatility
AI Is Still a Delusion
Following instructions and performing fast, tireless, error-free calculations is not intelligence in any meaningful sense of the word
LLMs Can’t Be Trusted for Financial Advice
The LLM responses demonstrated that they do not have the common sense needed to recognize when their answers are obviously wrong
A Man, A Boat, and a Goat — and a Chatbot!
Forty-five years ago, Douglas Hofstadter noted a key problem with AI: It can’t do the astonishing things our brains do, as chatbots reveal when asked to solve puzzles
Large Language Models Are Often Wrong, Never in Doubt
LLMs are statistically driven text generators, nothing more
Universities Should Prioritize Critical Thinking Over Large Language Models
It is doubtful that LLMs can generate novel ways to boost productivity
Elon Musk: AI will be smarter than a human in 2025: Why he’s wrong
The superficial glibness of LLMs is a wonderful example of the adage that a little knowledge is a dangerous thing
The AI Hype Machine Just Rolls On, Living on Exhaust
Even chatbot enthusiasts are starting to admit that scaling up LLMs will not create genuine artificial intelligence
The Flea Market of the Internet: Breaking the Addiction
When, after a bad experience, I called Amazon the “Walmart of the Internet,” a friend pointed out that Amazon is, in fact, much worse than Walmart
Sora: Life Is Not a Multiple-Choice Test
With Sora, as with other generative AI developments, some are quick to proclaim that artificial general intelligence has arrived. Not so fast.
Retracted Paper Is a Compelling Case for Reform
The credibility of science is being undermined by misuse of the tools created by scientists. Here's an example from an economics paper I was asked to comment on
Why Chatbots (LLMs) Flunk Routine Grade 9 Math Tests
Lack of true understanding is the Achilles heel of Large Language Models (LLMs). Have a look at the excruciating results
Internet Pollution — If You Tell a Lie Long Enough…
Large Language Models (chatbots) can generate falsehoods faster than humans can correct them. For example, they might say that the Soviets sent bears into space...
Computers Still Do Not “Understand”
Don't be seduced into attributing human traits to computers.
When it Comes to New Technologies Like AI, Tempers Run Hot
So far, the most tangible LLM successes have been in generating political disinformation and phishing scams.
Let’s Dispose of Exploding Pie Charts
Pie charts are seldom a good idea. Here's why.
Large Language Models are Still Smoke and Mirrors
Incapable of understanding, LLMs are good at giving bloated answers.
I recently received an email invitation from Google to try Gemini Pro in Bard. There was an accompanying video demonstration of Bard’s powers, which I didn’t bother watching because of reports that a Gemini promotional video released a few days earlier had been faked. After TED organizer Chris Anderson watched the video, he tweeted, “I can’t stop thinking about the implications of this demo. Surely it’s not crazy to think that sometime next year, a fledgling Gemini 2.0 could attend a board meeting, read the briefing docs, look at the slides, listen to everyone’s words, and make intelligent contributions to the issues debated? Now tell me. Wouldn’t that count as AGI?” Legendary software engineer Grady Booch replied, “That demo …
