AI: Gary Marcus on driving the cost of bullshit to zero
AI analyst Gary Marcus assesses ChatGPT at age two. Since its widely publicized introduction two years ago, hundreds of millions of people have tried it out:
But a lot of theories about how ChatGPT would be used in practice have fizzled out. Remember how ChatGPT was gonna take over web search and wipe out Google? Two years later Google’s search share hasn’t diminished all that much. High school kids are still using ChatGPT to write term papers, but a lot of Fortune 500 companies are somewhat underwhelmed by the real world results they are getting.
Marcus on AI (November 28, 2024)
Problems like model collapse and hallucination may matter more in the corporate world than they do in high school.
Marcus reminds his readers of an exchange on a podcast with Ezra Klein of the New York Times (January 6, 2023):
Ezra Klein: … And what unnerved me a bit about ChatGPT was the sense that we are going to drive the cost of bullshit to zero when we have not driven the cost of truthful or accurate or knowledge advancing information lower at all. And I’m curious how you see that concern.
Gary Marcus: It’s exactly right. These systems have no conception of truth. Sometimes they land on it and sometimes they don’t, but they’re all fundamentally bullshitting in the sense that they’re just saying stuff that other people have said and trying to maximize the probability of that. It’s just autocomplete, and autocomplete just gives you bullshit.
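Marcus’s “autocomplete” point can be illustrated with a toy sketch. The snippet below is not how any production LLM is built (real systems use neural networks trained on vast corpora, not word-pair counts); it simply shows that a procedure which always emits the statistically most probable continuation never consults the truth of what it produces. The training text, the bigram table, and the autocomplete function are all hypothetical illustrations.

```python
# Toy illustration of the "autocomplete" point (not a real LLM):
# the model only knows which words tended to follow which in its
# training text, and always emits the most probable continuation.
# Truth never enters the calculation.
from collections import Counter, defaultdict

training_text = (
    "the capital of france is paris . "
    "the capital of france is lyon . "   # a plausible-sounding falsehood
    "the capital of france is paris ."
)

# Count how often each word follows each preceding word (a bigram table).
follows = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def autocomplete(prompt, steps=5):
    """Greedily append the statistically most likely next word."""
    words = prompt.split()
    for _ in range(steps):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# The output depends only on word statistics, not on facts about France.
print(autocomplete("the capital of"))
```

Here the statistics happen to favor the true answer; skew the training text the other way and the same procedure would just as confidently complete with “lyon.” Nothing in the mechanism distinguishes the two cases, which is the sense in which Marcus calls the output bullshit.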
Marcus was criticized for that assessment on the grounds that AI today is much more sophisticated than it was a few years ago, and we are told that the industry is hard at work on the problem.
Still, he warns, “The reality is this. Two years on, on the most important question of all – factuality and reliability – we are still pretty much where we were when ChatGPT first came out: wishing and hoping. RAG, scaling, and system prompts haven’t eradicated the inherent tendency of LLMs to hallucinate.”
A baked-in problem?
As Erik J. Larson has pointed out here at Mind Matters News, the problem is inherent in the nature of the programming: hallucination can be reduced, but only by adopting new methods.
You may also wish to read: From Data to Thoughts: Why Language Models Hallucinate. The limits of today’s language models and paths to real cognition. We’ll need an architectural approach that can handle propositions—thoughts, judgments, reasoning structures—as first-order objects. (Erik J. Larson)