Flash!: Bigger computers do not just start acting like minds
At his Substack, AI analyst Gary Marcus says that the fact is slowly beginning to sink in: “Some battles no longer need fighting. The notion on which this Substack was founded in May 2022 — that LLM scaling would not bring us to AGI — has gone mainstream.”
Scaling just means training AI models on ever more data, using ever bigger and faster computers, in the hope that intelligence will somehow emerge.

Marcus adds,
Perhaps I also no longer need to warn people that the idea that coding would disappear soon is bullshit. Anyone remember this prediction from Anthropic CEO Dario Amodei back in March? Some journalists I could name seemed to take it seriously at the time. (I didn’t, instead calling it fantasy.)
“Scaling hasn’t gotten us to AGI, or ‘superintelligence’, let alone AI we could trust. What do we do next?”, September 3, 2025
Last week, he wrote an article on this topic in the New York Times (September 3, 2025):
Fundamentally, people like Mr. Altman, the Anthropic chief executive Dario Amodei and countless other tech leaders and investors had put far too much faith into a speculative and unproven hypothesis called scaling: the idea that training A.I. models on ever more data and using ever more hardware would eventually lead to A.G.I. or even a superintelligence that surpasses humans.
However, as I warned in a 2022 essay, “Deep Learning Is Hitting a Wall,” so-called scaling laws aren’t physical laws of the universe like gravity but hypotheses based on historical trends. Large language models, which power systems like GPT-5, are nothing more than souped-up statistical regurgitation machines, so they will continue to stumble into problems around truth, hallucinations and reasoning. Scaling would not bring us to the holy grail of A.G.I.
“The Fever Dream of Imminent Superintelligence Is Finally Breaking,” September 3, 2025
What’s really significant is not that smart analysts are saying this. It’s that trendsetters appear to actually be listening.
If this trend continues, we will probably still hear a lot of nonsense, but reality-based thinking about AI will also be more easily accessible than it used to be.
