Mind Matters Natural and Artificial Intelligence News and Analysis

Gary Smith

How can someone who fails be faulted when everyone else is failing?

Market turbulence can cause endowment fund managers to travel with the herd, sacrificing returns in order to reduce annual volatility
The current, widely favored 60/40 strategy has little or nothing to recommend it beyond the fact that it is what everyone else is doing. Read More ›
AI Is Still a Delusion

Following instructions and performing fast, tireless, error-free calculations is not intelligence in any meaningful sense of the word
Not knowing what words mean, neither OpenAI’s ChatGPT 3.5 nor Microsoft’s Copilot nor Google’s Gemini can do a simple logic test. Read More ›
LLMs Can’t Be Trusted for Financial Advice

The LLM responses demonstrated that they do not have the common sense needed to recognize when their answers are obviously wrong
It takes an experienced financial planner to distinguish between good and bad advice, so clients may as well skip the LLMs and go to the knowledgeable human. Read More ›
A Man, A Boat, and a Goat — and a Chatbot!

Forty-five years ago, Douglas Hofstadter noted a key problem with AI: It can’t do the astonishing things our brains do, as chatbots reveal when asked to solve puzzles
Not understanding what words mean or how they relate to the real world, chatbots have no way of determining whether their responses are sensible, let alone true. Read More ›
Large Language Models Are Often Wrong, Never in Doubt

LLMs are statistically driven text generators, nothing more
As an economist, professor, and writer, I am reportedly in imminent danger of being replaced by large language models (LLMs). I’m not worried. Read More ›
Universities Should Prioritize Critical Thinking Over Large Language Models

It is doubtful that LLMs can generate novel ways to boost productivity
No matter how many words they train on and no matter how many human trainers correct their mistakes, LLMs will continue to lack critical thinking skills. Read More ›
Elon Musk Says AI Will Be Smarter Than a Human in 2025. Here’s Why He’s Wrong

The superficial glibness of LLMs is a wonderful example of the adage that a little knowledge is a dangerous thing
Based on extensive training on untold amounts of text, LLMs are able to repackage superficially compelling answers that they literally do not understand. Read More ›
The AI Hype Machine Just Rolls On, Living on Exhaust

Even chatbot enthusiasts are starting to admit that scaling up LLMs will not create genuine artificial intelligence
Decades of geniuses trying to build computers that are as intelligent as they are have shown how truly remarkable our brains are—and how little we understand. Read More ›
The Flea Market of the Internet: Breaking the Addiction

When, after a bad experience, I called Amazon the “Walmart of the Internet,” a friend pointed out that Amazon is, in fact, much worse than Walmart
Internet-based businesses tend to follow a life cycle in which quality deteriorates over time. Writer Cory Doctorow calls the process “enshittification.” Read More ›
Sora: Life Is Not a Multiple-Choice Test

With Sora, as with other generative AI developments, some are quick to proclaim that artificial general intelligence has arrived. Not so fast.
The hallucinations are symptomatic of generative AI models’ core problem: they can’t identify output problems because they know nothing about the real world. Read More ›
Retracted Paper Is a Compelling Case for Reform

The credibility of science is being undermined by misuse of the tools created by scientists. Here's an example from an economics paper I was asked to comment on
In my book Distrust (Oxford 2023), I recommend that journals not publish data-driven research without public access to nonconfidential data and methods used. Read More ›
Why Chatbots (LLMs) Flunk Routine Grade 9 Math Tests

Lack of true understanding is the Achilles heel of Large Language Models (LLMs). Have a look at the excruciating results
Chatbots don’t understand, in any meaningful sense, what words mean and therefore do not know how the given numbers should be used. Read More ›
Internet Pollution — If You Tell a Lie Long Enough…

Large Language Models (chatbots) can generate falsehoods faster than humans can correct them. For example, they might say that the Soviets sent bears into space...
Later, Copilot and other LLMs will be trained to say no bears have been sent into space but many thousands of other misstatements will fly under their radar. Read More ›
Computers Still Do Not “Understand”

Don't be seduced into attributing human traits to computers.
Imagine people making decisions that are influenced by an LLM that does not understand the meaning of any of the words it inputs and outputs. Read More ›

When It Comes to New Technologies Like AI, Tempers Run Hot

So far, the most tangible LLM successes have been in generating political disinformation and phishing scams.
LLMs often remind us of clueless students who answer essay questions by writing everything they think is relevant, hoping the right answer is in there somewhere. Read More ›
Let’s Dispose of Exploding Pie Charts

Pie charts are seldom a good idea. Here's why.
Points can be made without sensationalized graphs that undermine the credibility of the argument. Let’s dispose of exploding pie charts. Read More ›
Large Language Models Are Still Smoke and Mirrors

Incapable of understanding, LLMs are good at giving bloated answers.

I recently received an email invitation from Google to try Gemini Pro in Bard. There was an accompanying video demonstration of Bard’s powers, which I didn’t bother watching because of reports that a Gemini promotional video released a few days earlier had been faked. After TED organizer Chris Anderson watched the video, he tweeted, “I can’t stop thinking about the implications of this demo. Surely it’s not crazy to think that sometime next year, a fledgling Gemini 2.0 could attend a board meeting, read the briefing docs, look at the slides, listen to every one’s words, and make intelligent contributions to the issues debated? Now tell me. Wouldn’t that count as AGI?” Legendary software engineer Grady Booch replied, “That demo Read More ›

LLMs Are Still Faux Intelligence

Large language models are remarkable but it's a huge mistake to think they're "intelligence" in any meaningful sense of the word.
It is wishful thinking to interpret these results and other LLM performances as evidence of logical reasoning. Read More ›