
Category: Large Language Models (LLMs)


A Lack of Transparency Threatens AI — and the American Economy
Even the Wall Street Journal does not appear to understand the nature of the looming problems
Kids and Chatbots: How Dissenting Views Got Me Disinvited
As a lifelong software engineer, I am an insider on this topic, and I know that the way chatbots are programmed makes them unsuitable for children
Illusions: No, Large Language Models Do Not Understand
A recent New Yorker article is mistaken about this. For one thing, LLMs have trouble distinguishing causation from mere correlation
Case Study: How Not to Sue AI for Libel

Should Christians Harness AI Chatbots As a Force For Good? Part 2
The claims being made for what these bots can do for ministry would lead us into dangerous territory
A Non-Transhumanist Vision for AI in Education
Bill Dembski describes this new approach as amounting to edification rather than enhancement
Should Christians Harness AI Chatbots As a Force For Good? Part 1
In response to claims from a Christian ministry, as a Christian software developer, I offer some serious cautions
Scaling Up is Not Going to Make Large Language Models Intelligent
ChatGPT-5 misunderstood the jab in a simple Will Rogers joke five out of six times
Large Language Models: Inconsistent Because They’re Unintelligent
Here’s what happened when I tested popular LLMs on student exercise questions I have been using for over fifty years
Will Reliance on AI Mean a Vast Drop in New Knowledge Production?
Dependence on AI assistants, for example, was found to greatly reduce discussion among peers, where new ideas are offered and evaluated
Fearing the Terminator: Does Current Tech Warrant the Doomsaying?
People will worry less if they understand why text-generation programs not only do not think but in fact cannot think
Fearing the Terminator, Missing the Obvious
In Part 1 of my review of the new AI Doom book, If Anyone Builds It, Everyone Dies, we look at how the authors first developed the underlying idea
Different Bubbles, Different Methods of Detection: What of AI?
Circular financing is one of the signs. Even the media are beginning to catch on
Verify, Then Trust: the Human Fixes That Make LLMs Work
Here are some examples of fixes that programmers and other workers have applied over time to large language model (LLM) bloopers
Intelligence Demands More than Pressing a Lever to Obtain Water
I continue to be astonished by how willing people are to assume that LLMs are intelligent because they give glib, confident answers
AI Will Keep Hallucinating, says OpenAI
AI will always need human guidance
Why OpenAI Will Collapse: LLMs Are Not Economical
If a model is not refreshed every few months, it quickly turns from a helpful assistant into a debt-generating machine with outdated knowledge
Unexpected: ChatGPT and the Return of God
Scott Ventureyra talks about ChatGPT and other large language models (chatbots) in the unlikely role of apologists for Christianity