Is It Sinking In? Chatbots Will *Not* Soon Think Like Humans
Tech writer Gary Marcus: Even futurist Ray Kurzweil, a fount of optimism on the topic, is sounding less sure now

Psychologist and tech writer Gary Marcus scoffs at the idea that machines that think like people (artificial general intelligence or AGI) are just around the corner.
The author of *Rebooting AI* (Vintage 2019) says there are no big new developments in the offing:
It was always going to happen; the ludicrously high expectations from the last 18 ChatGPT-drenched months were never going to be met. LLMs are not AGI, and (on their own) never will be; scaling alone was never going to be enough. The only mystery was what would happen when the big players realized that the jig was up, and that scaling was not in fact “All You Need”.
Gary Marcus, “The Great AI Retrenchment has begun,” Substack, June 15, 2024
Even futurist Ray Kurzweil is postponing and revising:
For years—and as recently as April in his TED talk—Ray Kurzweil famously projected that AGI would arrive in 2029. But in an interview just published in WIRED, Kurzweil (who I believe still works at Alphabet, hence knows what is immediately afoot) let his predictions slip back, for the first time, to 2032. (He also seemingly dropped the standard for AGI from general intelligence to writing topnotch poetry).
Marcus, “Retrenchment”
Readers may recall that Kurzweil told the 2023 COSM conference that once AI reaches “general human capability” in 2029, it will have already “surpassed us in every way.” But he isn’t worried, because we humans are “not going to be left behind.” Instead, humans and AI are “going to move into the future together.”
He’d been saying such things at COSM conferences since 2019, though the COSM panel that evaluated his comments was significantly more skeptical than many tech experts.
This is from his recent interview with Wired:
How will we know when AGI is here? That’s a very good question. I mean, I guess in terms of writing, ChatGPT’s poetry is actually not bad, but it’s not up to the best human poets. I’m not sure whether we’ll achieve that by 2029. If it’s not happening by then, it’ll happen by 2032. It may take a few more years, but anything you can define will be achieved because AI keeps getting better and better.
Steven Levy, “If Ray Kurzweil Is Right (Again), You’ll Meet His Immortal Soul in the Cloud,” Wired, June 13, 2024
Update: Ray Kurzweil has since written to Gary Marcus to say he stands by 2029 after all. MMN received an e-mail from Marcus to this effect on June 22; we have not yet been able to find a link.
To make chatbots better, the programmers will need to solve a number of problems, including:
The model collapse problem (everything becomes jackrabbits):
Model collapse: AI chatbots are eating their own tails. The problem is fundamental to how they operate. Without new human input, their output starts to decay. Meanwhile, organizations that laid off writers and editors to save money are finding that they can’t just program creativity or common sense into machines.
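The decay mechanism can be illustrated with a toy simulation (a simplified sketch of the statistical effect, not of any real chatbot): fit a simple model to data, then train each new generation only on samples drawn from the previous generation's model. Without fresh human input, the fitted distribution's diversity steadily collapses.

```python
import numpy as np

# Toy "model collapse" simulation: each generation fits a Gaussian to its
# training data, then the next generation trains only on samples drawn
# from that fit. With no new human data added, the spread decays.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=50)  # generation 0: "human" data
initial_std = data.std()

for generation in range(500):
    mu, sigma = data.mean(), data.std()      # fit the model to current data
    data = rng.normal(mu, sigma, size=50)    # next generation: model output only

final_std = data.std()
print(f"initial spread: {initial_std:.3f}, after 500 generations: {final_std:.3f}")
```

After many generations the sampled distribution has far less variance than the original human data: the model converges on an ever-narrower slice of its own output, which is the statistical analogue of every generated animal becoming a jackrabbit.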
The hallucination problem (the Soviets sent bears into space):
Internet pollution — if you tell a lie long enough… LLMs can generate falsehoods faster than humans can correct them. Later, Copilot and other LLMs will be trained to say no bears have been sent into space but many thousands of other misstatements will fly under their radar. (Gary Smith)
And the innumeracy problem (I can’t count):
Marvin Minsky asks: Can GPT4 hack itself? Will AI of the future be able to count the number of objects in an image? Creativity and understanding, properly defined, lie beyond the capability of the computers of today and tomorrow. (Robert J. Marks)
These deep problems may be fundamental to what a chatbot is. We shall see.