Mind Matters Natural and Artificial Intelligence News and Analysis

Tech writer accuses AI skeptics of offering “phony comfort”


Yesterday, technology consultant Jeffrey Funk and Pomona College business professor Gary Smith published a piece here at Mind Matters News arguing that the promise of artificial superintelligence just around the corner is evaporating: improvements in large language models (chatbots) are becoming less significant over time, not more.

AI analyst Gary Marcus offered a similar view at his Substack yesterday:

What kept me going during the dark years of 2022 and 2023 when Generative AI was wildly overhyped and I was constantly ridiculed was the secure knowledge that the truth would eventually come out.

This just in, right on the heels of Satya Nadella also publicly acknowledging diminishing returns: … Sundar Pichai confirmed our November gains were getting harder to come by. [From Jon Victor, The Information, December 4, 2024]

“Which CEO will be the last to see it?,” December 5, 2024

He didn’t have long to wait before tech writer Casey Newton shot back with “The phony comforts of AI skepticism,” insisting that AI superintelligence is real and dangerous (and coming soon).

Marcus doesn’t say that AI is fake and sucks, exactly. But his arguments are extremely useful to those who believe that AI is fake and sucks, because they give it academic credentials and a sheen of empirical rigor. And that has made him worth reading for me as I attempt to come to my own understanding of AI.

Platformer News, December 5, 2024

Marcus replied this morning, saying, among other things,

Newton doesn’t seem to get the difference between an LLM improving on a specific example and failing on a general class of errors. Ernest Davis and I wrote in 2020 in Technology Review that GPT-3 was “a fluent spouter of bullshit” that had trouble with causality, physical reasoning, psychological reasoning, mathematical reasoning, and so on. All that’s still true, even if LLMs can now get the specific examples we published (and which are now presumably in the training set). Not one of those areas of weakness has been remotely robustly solved.

“Hard-forked! Casey Newton’s distorted portrait of Gary Marcus and AI skepticism,” December 6, 2024

And much more. While Marcus thinks that superintelligence will arrive eventually (just not soon), non-experts may find themselves asking whether problems that stem from an AI system having no actual mind (or real-world experience) are solvable in principle.
