Image: new ideas or transformation concept with crumpled paper balls and a crane (credit: Andreas Berheide, Adobe Stock)

Stranger Things: Why Mad Scientists Are Mad

At the highest levels, creativity seems to bypass the deliberate, structured thought process altogether

We talked about serendipity in Part 1. I hesitate to begin this section with a gloss on the neuroscience of creativity because we haven’t yet even defined the term. But this is a feature, not a bug, I suppose — since the whole point is that creativity resists easy definition (it’s a bit like pornography in that respect).

The best definitions, for once, tend to come from the creative types themselves. One of their common themes is that creativity makes hitherto concealed connections, as French mathematician Henri Poincaré (1854‒1912) put it:

Creative ideas are those which reveal to us unsuspected kinship between other facts, long known, but wrongly believed to be strangers to one another.

Steve Jobs (1955‒2011) agreed:


Creativity is just connecting things.

And psychiatrist Iain McGilchrist, referring to the generative requirements that make creativity possible, adds:

For creativity to succeed, there needs to be breadth of vision; the capacity to forge distant links; flexibility rather than rigidity; a willingness to respond to a changed or changing context; as well as tolerance of ambiguity and knowledge that is, at least at the outset, inherently imprecise.

The pre-scientific literature on creativity is full of these descriptions, all circling the same insight that seems perpetually out of reach, like someone trying to recall a dream. Most of us already know what creativity is. It shouldn’t be surprising that Poincaré and Jobs are getting at the same core idea. Still, what is it?

Divergent thinking and the ah-ha moment

In the 1960s, J.P. Guilford coined the term “divergent thinking” to distinguish it from thinking that converges on a single, correct answer to a well-defined question (hence, “convergent thinking”). As McGilchrist puts it, such questions “do not require flexibility or originality.” He includes standardized multiple-choice intelligence tests as well as common school curricula in this category. The educational psychologist Arthur Cropley adds (referring to convergent thinking):

…it is thus most effective in situations where a ready-made answer exists and needs simply to be recalled from stored information, or worked out from what is already known by applying conventional and logical search, recognition and decision-making strategies.

Cropley’s explanation of convergent thinking sounds suspiciously like what AI futurists call general intelligence. Stuart Russell, who coauthored the definitive textbook introduction to AI with Google’s Peter Norvig, writes in his 2019 book, Human Compatible: Artificial Intelligence and the Problem of Control, that thinking is nothing but “achieving objectives.”

That’s a hopelessly reductive understanding of thinking or intelligence, but it has plenty of company in the AI world. Futurist Ray Kurzweil thinks thinking is “matching patterns.” The AI enthusiast’s view of thinking seems pigeonholed from the get-go into something like convergent thinking. If creativity requires divergent thinking, then engineering truly creative machines is a lost cause. McGilchrist unpacks the connection between divergent thinking and creativity:

What divergent thinking covers is not just being able to make up new ideas at random — most of which would be worthless — but perceiving connexions [sic] and shapes or forms that guide thinking by analogy: to broaden a field that has become too narrow, or to find alternative ways of visualising something that has become too familiar.

Divergent thinking ventures off the path, in other words, and this brings us to insight, or the “ah-ha” moments central to creativity.

Insights seem not to simply achieve objectives (whatever that means) or match patterns, but to entirely change the field of play. In a descriptive study based on 120 subjects discussing their “ah-ha” moments, we learn:

Virtually all insights involved a change in understanding … a surprising number of insights were triggered by inconsistencies and contradictions. The insights that were triggered by contradictions seemed to depend on the person taking the anomalous data point seriously rather than attempting to explain it away…

The neuroscientist Vilayanur Ramachandran has observed that the left hemisphere is mainly interested in dismissing anomaly rather than seeing it as a bridge to an insight or “ah-ha” moment. He also points out that the right hemisphere is the “anomaly detector,” playing “devil’s advocate.” Here we see clear evidence of the close connection between the left hemisphere’s technical view of the world and the ideas coming out of AI. Unfortunately, that technical view has little to do with what we know about “ah-ha” moments.

Neuroscience is helping to uncover the close link between creativity and the right hemisphere — again, this is centrally important because the left hemisphere’s contribution to thinking and intelligence seems to have been co-opted by AI futurists and indeed by the field of AI generally. Here I’ll point out that the neural structure of the right hemisphere seems better suited for supporting insight. Why? It has more synapses and more connections. As McGilchrist explains:

In areas of association cortex such as the anterior temporal cortex, right hemisphere pyramidal neurons have more synapses overall, and especially more synapses far from the cell body, than those of the left hemisphere. What this means is that they make not only more connexions [sic] overall, but connexions over a far larger area.

AI is adopting the wrong set of ideas. The raison d’être of work on neural networks and AI is problem-solving, pattern-matching, and achieving objectives. The left hemisphere prefers this view of intelligence and indeed of the world. The problem is that insight and creativity, the Holy Grail of intelligence, seem more and more a feature of the right hemisphere’s Gestalt view of things. That’s what makes our thinking so interesting and unique. It’s what gives us geniuses like Mozart.

Mozart’s uncertainty

Mozart, for his part, described his own experience of creativity in a possibly apocryphal but still revealing letter written toward the end of his career:

How do I write, and how do I come to flesh out what are large, general ideas? I really can’t tell you any more than this — because I myself don’t know any more about it, and can’t get any further with it.

He goes on to explain that he sees the composition rather than assembling it — language that Jobs or Poincaré would likely find unsurprising.

When I am, as it were, completely myself, entirely alone, and of good cheer… ideas flow best and most abundantly. Whence and how they come, I know not, nor can I force them.

Mozart, like Einstein after him, was describing something deeply non-verbal. Einstein, too, insisted that his ideas weren’t built out of words but out of images. The polymath Charles Sanders Peirce made similar remarks (Peirce, like Einstein, is often described as left-handed).

At the highest levels, creativity seems to bypass the deliberate, structured thought process altogether.

Scientific reductionism and the stagnation of creativity research

The modern scientific community, for its part, has tended to use less evocative, more clinical language to describe creativity — often reducing it to divergent thinking or thinking in metaphors. But what’s striking is that, despite decades of research, we seem to have made almost no real progress in defining it.

Pop culture, meanwhile, has abused the concept entirely, turning it into a series of lifestyle prescriptions — first yoga, then morning routines, then some other commodity repackaged as a gateway to creative genius. Of course, even Mozart wasn’t immune to this kind of thinking — he rambled about long walks after a good meal as a key part of his process. The difference is, Mozart didn’t confuse the practice with the mystery itself.

Cognitive science, by contrast, has either ignored creativity or attempted to sweep it under some reductive rug. Witness Arne Dietrich of the American University of Beirut, who, in a famous paper (and later book) co-authored with Riam Kanso, set out to debunk the idea of creativity altogether:

Creativity can just as well be the result of a convergent process. What would we otherwise make of Edison’s assembly-line, nearly algorithmic approach to inventing? Or Bach’s methodical way of composing hundreds of cantatas?

This is where the scientific impulse to flatten everything becomes absurd.

What’s being argued here isn’t that creativity can take methodical forms — which is obvious — but that the process itself is indistinguishable from routine problem-solving. This is wrong on its face, but it’s revealing. The need to rationalize creativity into an algorithm — to make it safe, predictable, measurable — has led some scientists to claim that it was never anything special to begin with.

This is the real danger of reductionism — not just that it fails to explain creativity, but that it actively encourages dismissal of what cannot be reduced.

Next: The limits of what we can learn from studying creativity

Here’s the first part of my essay: The slow decline of a key aspect of creativity. The mechanization of mind is changing how we think about creativity — and not in a good way. In this first of three parts, I look at the role of serendipity — the art of making happy, unexpected discoveries — and how a mechanized world diminishes it.


Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.