Gary Smith
The AI Hype Machine Just Rolls On, Living on Exhaust
Even chatbot enthusiasts are starting to admit that scaling up LLMs will not create genuine artificial intelligence.

The Flea Market of the Internet: Breaking the Addiction
When, after a bad experience, I called Amazon the “Walmart of the Internet,” a friend pointed out that Amazon is, in fact, much worse than Walmart.

Sora: Life Is Not a Multiple-Choice Test
With Sora, as with other generative AI developments, some are quick to proclaim that artificial general intelligence has arrived. Not so fast.

Retracted Paper Is a Compelling Case for Reform
The credibility of science is being undermined by misuse of the tools created by scientists. Here's an example from an economics paper I was asked to comment on.

Why Chatbots (LLMs) Flunk Routine Grade 9 Math Tests
Lack of true understanding is the Achilles heel of Large Language Models (LLMs). Have a look at the excruciating results.

Internet Pollution — If You Tell a Lie Long Enough…
Large Language Models (chatbots) can generate falsehoods faster than humans can correct them. For example, they might say that the Soviets sent bears into space...

Computers Still Do Not “Understand”
Don't be seduced into attributing human traits to computers.

When it Comes to New Technologies Like AI, Tempers Run Hot
So far, the most tangible LLM successes have been in generating political disinformation and phishing scams.

Let’s Dispose of Exploding Pie Charts
Pie charts are seldom a good idea. Here's why.

Large Language Models are Still Smoke and Mirrors
Incapable of understanding, LLMs are good at giving bloated answers.

I recently received an email invitation from Google to try Gemini Pro in Bard. There was an accompanying video demonstration of Bard’s powers, which I didn’t bother watching because of reports that a Gemini promotional video released a few days earlier had been faked. After TED organizer Chris Anderson watched the video, he tweeted, “I can’t stop thinking about the implications of this demo. Surely it’s not crazy to think that sometime next year, a fledgling Gemini 2.0 could attend a board meeting, read the briefing docs, look at the slides, listen to every one’s words, and make intelligent contributions to the issues debated? Now tell me. Wouldn’t that count as AGI?” Legendary software engineer Grady Booch replied, “That demo Read More ›
Computers May Know “How” but They Still Don’t Know “Why”
Computers will not equal, let alone surpass, human intelligence.

LLMs Are Still Faux Intelligence
Large language models are remarkable, but it's a huge mistake to think they're "intelligence" in any meaningful sense of the word.

A Modest Proposal for the MLB
Major League Baseball got greedy and needs to reform.

The MLB Coin-Flipping Contest
What are the chances that wild-card teams will make it to the World Series and win?

Blue Zone BS: The Longevity Cluster Myth
We need to be reminded how much real science has done for us and how real science is done.

Confusing Correlation with Causation
Computers are amazing. But they can't distinguish between correlation and causation.

Artificial intelligence (AI) algorithms are terrific at discovering statistical correlations but terrible at distinguishing between correlation and causation. A computer algorithm might find a correlation between how often a person has been in an automobile accident and the words they post on Facebook, being a good software engineer and visiting certain websites, and making loan payments on time and keeping one’s phone fully charged. However, computer algorithms do not know what any of these things are and consequently have no way of determining whether these are causal relationships (and therefore useful predictors) or fleeting coincidences (that are useless predictors). If the program is a black box, then humans cannot intervene and declare that these are almost certainly irrelevant coincidences. Even if Read More ›