The Reverse Flynn Effect and the Decline of Intelligence
How our modern world is making us dumber, and why it doesn't have to

This is the first in a two-part series exploring the concept of intelligence — both as it applies to us and to what we call "AI." I want to break down the basics of the current IQ debate and what neuroscience tells us about general intelligence (g). Then I'll examine François Chollet's treatment of intelligence in his excellent paper On the Measure of Intelligence, which is addressed to AI scientists working to build artificial general intelligence (AGI).
The notion of intelligence underpins nearly every discussion about the past, present, and future of AI. And yet, the field often sidesteps the fundamental question: What is intelligence, really? My goal is to shed some light on that question.
Let’s turn to our intelligence — how we measure it, the debate over whether it’s changing (i.e., are we getting dumber?), and what role the brain plays in it all.
What is intelligence? From human minds to machine learning

Psychologist Linda Gottfredson drafted a widely cited definition of intelligence, signed by 52 leading experts, in a 1994 Wall Street Journal statement. The original declaration contained 25 statements, each highlighting an essential aspect of intelligence. The definition has since been distilled — here, by Iain McGilchrist in The Matter With Things (2021):
Intelligence is a very general mental capacity which, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—‘catching on,’ ‘making sense’ of things, or ‘figuring out’ what to do.
Fair enough. A definition that broad and catch-all is unlikely to be wrong, but it might be too sprawling to serve as a precise tool for analysis.
For a sharper take, we turn to Chollet, a researcher at Google, who defines intelligence in a way that directly serves AI and AGI discussions:
The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty.
Chollet’s definition is useful for human intelligence as well as AI. Skill acquisition is a fundamental concept: if a cognitive system — natural or artificial — cannot acquire new skills, it can’t be considered intelligent. Likewise, generalization is crucial. Intelligence isn’t just about solving known problems — it’s about adapting to new, unfamiliar situations where prior experience offers no immediate advantage.
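Chollet makes this precise in the paper using tools from Algorithmic Information Theory. His actual formula is considerably more involved, but a loose, informal rendering of the intuition (my sketch, not his notation) looks like this:

```latex
\text{intelligence} \;\propto\; \underset{\text{tasks in scope}}{\operatorname{avg}}
\frac{\text{skill attained} \times \text{generalization difficulty}}{\text{priors} + \text{experience}}
```

In words: a system is more intelligent the more skill it extracts from less built-in knowledge and less practice, especially on tasks that are hard to generalize to.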
Chollet’s definition aligns with other AI-inspired formulations of intelligence. Consider the 2007 summary by Shane Legg and Marcus Hutter, who analyzed roughly 70 definitions from across psychology and AI:
Intelligence measures an agent’s ability to achieve goals in a wide range of environments.
Here we see the same core ideas: skill acquisition framed as “achieving goals” and generalization embedded in “a wide range of environments.”
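In the same 2007 work, Legg and Hutter go beyond the one-line definition and propose a formal "universal intelligence" measure, which sums an agent's expected performance over all computable environments, weighting simpler environments more heavily:

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Here \pi is the agent, E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu (so simpler environments get exponentially more weight), and V_\mu^\pi is the expected value the agent achieves in that environment.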
The definitions here, whether from psychology or AI, expose an essential tension: is intelligence fundamentally about competence — mastering tasks, recognizing patterns, and applying knowledge effectively? Or is it about adaptability — the ability to efficiently learn new skills and generalize across tasks? The answer has deep implications — not just for how we understand human intelligence, but for what we expect from AI.
The reader should deduce from this: however patchy and contested the consensus on intelligence may be — whether natural or artificial — it tends to reduce to two key elements. Intelligence consists in (1) the ability to acquire skills to achieve a goal, and (2) the capacity to apply those skills in different contexts, expanding the circle of what can be learned and accomplished.
A bloodless notion of intelligence

To my mind, this is a rather bloodless notion of intelligence. It leaves little room for creativity, imagination, metaphor, or the more perplexing aspects of human cognition. But it’s a start. And when it comes to evaluating the sweeping claims of techno-futurists and AI enthusiasts, even this pared-down framework gives us some traction.
To paraphrase Immanuel Kant: skill acquisition without generalization is narrow; generalization without skill acquisition is blind. Why? Take the case of termites and flies. We might say termites are highly “skilled” at burrowing through wood, or that flies are experts at ruining picnics — but these abilities are so rigidly innate that their capacity for generalization is effectively nil. Termites don’t care for sandwiches, and flies have no interest in carving out tunnels.
Real intelligence requires both: the ability to acquire new skills and the flexibility to apply them across different domains. We need to learn new ways to act — and, crucially, to do so across contexts: in the air, by sea, in new environments (though maybe not across the universe — a discussion for another time).
This framework — skill acquisition conditioned on priors and experience — will be central when we examine AI systems like AlphaZero, Google DeepMind’s system for mastering board games like chess and Go. Likewise, the concept of generalization, familiar to data scientists as a key benchmark in machine learning, will be critical in evaluating AI’s potential to achieve general intelligence — AGI.
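Since generalization will carry so much weight in that discussion, it is worth seeing concretely how data scientists measure it. Here is a minimal sketch in Python, using scikit-learn and its bundled digits dataset (the dataset and model are purely illustrative):

```python
# Measure generalization the standard way: train on one slice of the
# data, then evaluate on a held-out slice the model has never seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))

# A large gap between the two scores signals memorization: "skill" on
# familiar data that fails to transfer to novel data.
print(f"train accuracy: {train_acc:.2f}")
print(f"test accuracy:  {test_acc:.2f}")
```

The gap between training and test accuracy is, in miniature, the gap between competence and adaptability discussed above.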
IQ tests and cognitive intelligence
In the early 20th century, Alfred Binet (1857‒1911) and Theodore Simon (1873‒1961) sought a way to distinguish children with mental disabilities from those with behavioral issues. The result was the Binet‒Simon scale, later adapted by Lewis Terman at Stanford into the Stanford‒Binet test: the familiar IQ test.
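The "quotient" in the name was originally literal: a child's mental age (the highest age level of test items they could pass) divided by their chronological age. Modern tests use a deviation IQ instead, norming each score against same-age peers:

```latex
IQ_{\text{ratio}} = \frac{\text{mental age}}{\text{chronological age}} \times 100
\qquad
IQ_{\text{deviation}} = 100 + 15z
```

where z is the test-taker's standard score within their age group, so the population mean is fixed at 100 and the standard deviation at 15.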
The psychometric tradition behind the test anticipates many of Chollet's distinctions regarding AI. Notably, it postulates a form of general intelligence, "g" (first proposed by Charles Spearman), defined as intelligence that applies to arbitrary problems solvable by humans. Following Raymond Cattell, this general intelligence is commonly split into two forms:
- Crystallized intelligence (Gc): Intelligence derived from prior knowledge and training (e.g., vocabulary and general-knowledge questions).
- Fluid intelligence (Gf): Intelligence that operates flexibly across novel and arbitrary problems, largely independent of specific knowledge or training (e.g., the pattern puzzles of Raven's Progressive Matrices).
Fluid g is the Holy Grail — the one we value most in human cognition and the one we ultimately want to engineer in machines.
IQ tests have been hotly debated and widely abused over the years. Yet they persist, largely because they reliably correlate with real-world "desirables" like future income, educational attainment, and social status. As McGilchrist puts it:
High performance in [IQ tests] is predictive of broad success in many different kinds of cognitive activity, from educational and work achievements to social well-being.
We keep IQ tests around, and we argue about them, because they say something. Not everything, but something.
The brain’s role in fluid intelligence
Unsurprisingly, McGilchrist marshals troves of neuroscientific evidence — lesion studies, clinical experiments, and cognitive research — to challenge the assumption that logical-computational thinking (typically associated with the left hemisphere) is the primary driver of fluid g. Instead, the evidence points to the right hemisphere, specifically the long white matter tracts that connect different parts of the brain’s neural network. The most significant of these is the superior longitudinal fasciculus (SLF), which includes the arcuate fasciculus.

[Image: anatomical plate of the brain's white matter tracts, File:Sobo_1909_670-671.png, public domain.]
What’s compelling here is that white matter tracts facilitate holistic, integrative connections, rather than being localized to specific brain regions. This aligns well with fluid intelligence itself, which depends on making connections in variable contexts rather than operating from predefined rules.
I won't get lost in the neuroscientific weeds, but one study is worth mentioning. A large-scale lesion-mapping study of 158 patients, examining how brain damage affects intelligence, found a striking pattern:
Fluid intelligence (Gf) was associated with a broadly distributed network of brain regions, primarily within the right hemisphere. This suggests that Gf reflects the ability to integrate verbal, spatial, motor, and executive processes via a circumscribed set of cortical connections in the right hemisphere.
This finding reinforces the idea that fluid intelligence — our ability to adapt, reason abstractly, and tackle novel challenges — depends less on domain-specific processing and more on broad, cross-network integration.
Beyond computation
McGilchrist concedes that both hemispheres working together will almost certainly outperform either working alone — even the right. But the lateralization evidence continues to fit a fascinating narrative: the "abstract manipulation" we often associate with the left hemisphere — language, logic, and computation — plays a surprisingly minor role in key cognitive assets like intelligence, imagination, and intuition.
Time and again, we find that human thought does not operate according to a computational model or metaphor. This fact, buttressed by a growing body of neuroscientific evidence, directly challenges the prevailing theory that we are merely flawed computers — and that machines will soon become better versions of us.

On the contrary. It increasingly appears that we are better versions of mere computation. The Gestalt, integrative capacities of the right hemisphere have no computational correlate — no equivalent in the linear, stepwise operations of today’s machines.
One immediate consequence of this misreading of the mind is that we have relegated ourselves to the realm of the computer. The modern world, obsessed with information processing, insists on ascribing to the left hemisphere's linear, mechanical strategies the unknown — but essential — capacities of the right. Or more precisely: the capacities of both hemispheres working together, as master and emissary.
This was McGilchrist’s thesis in his landmark work The Master and His Emissary (2009, updated 2018). He argued that the right hemisphere integrates the reductive, mechanistic strategies of the left, positioning itself as the true master, while the left serves as its emissary. But in an increasingly technocentric world, shaped first by the Industrial Revolution and now raging like wildfire in the form of ubiquitous information technology, this balance has been subverted.
We should expect such a shift — from right to left — to come with a certain dumbing down. We should expect, in other words, a world that prioritizes processing over understanding.
Let’s turn back to IQ — and to the Flynn Effect.