Software pioneer says general superhuman artificial intelligence is very unlikely

The concept, he argues, shows a lack of understanding of the nature of intelligence

The 2014 science fiction film Transcendence featured a scientist who uploaded his consciousness into an AI program. Many people talk as though things like that are just around the corner. But industry pros say it isn’t really possible. Why not?

François Chollet, author of Keras, a deep learning framework for the Python programming language, offers a list of reasons, but he starts by pointing to an underlying misconception: that a super-AI could be developed that would go on creating ever more capable super-AIs until something vastly more intelligent than a human being arises. He points out that no such process has actually happened in the universe of which we have knowledge:

An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself. This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so. Clearly, the intelligence of a single human, over a single lifetime, cannot design intelligence, or else, over billions of trials, it would have already occurred.
François Chollet, “The impossibility of intelligence explosion” at Medium

If we cannot design an intelligence, why do we think we can design a machine that can design an intelligence? Chollet goes on to make several more general points, one of which is that the effects of most human intelligence depend on a background of civilization that provides us with the tools that enable us to express it:

In particular, there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems. If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.
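The “no free lunch” theorem can be made concrete with a small, exhaustive experiment. The sketch below, in Python, compares two black-box search strategies: a fixed sweep and a simple hill-climber (both hypothetical illustrations, not anything from Chollet’s essay), over every possible fitness function on a tiny toy search space. Averaged over all problems, neither strategy finds better solutions than the other, just as the theorem predicts.

import itertools

N = 4               # tiny search space: points 0..3
VALUES = range(3)   # each point gets a fitness value in {0, 1, 2}

def sweep(f, k):
    # Strategy A: evaluate points in a fixed order, never revisiting.
    return max(f[x] for x in range(k))

def greedy(f, k):
    # Strategy B: a simple hill-climber; probe an unvisited neighbor
    # of the best point seen so far, else the lowest unvisited index.
    visited = [0]
    while len(visited) < k:
        best = max(visited, key=lambda x: f[x])
        for cand in ((best + 1) % N, (best - 1) % N):
            if cand not in visited:
                visited.append(cand)
                break
        else:
            visited.append(min(set(range(N)) - set(visited)))
    return max(f[x] for x in visited)

# Average the best fitness found after k evaluations over ALL 3**4 = 81
# possible fitness functions; the two strategies come out exactly equal.
functions = list(itertools.product(VALUES, repeat=N))
for k in range(1, N + 1):
    avg_a = sum(sweep(f, k) for f in functions) / len(functions)
    avg_b = sum(greedy(f, k) for f in functions) / len(functions)
    print(f"k={k}: sweep={avg_a:.4f}, greedy={avg_b:.4f}")

Changing the hill-climbing rule or the toy problem’s size changes nothing: any strategy that never revisits a point has the same average performance across all possible problems. A search strategy can only pull ahead on a specific class of problems, which is exactly Chollet’s point about intelligence being specialized.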

Intelligence is also subject to statistical laws that set boundaries on its scope:

This was first evidenced in Terman’s study, and later confirmed by others — for instance, an extensive 2006 metastudy by Strenze found a visible, if somewhat weak, correlation between IQ and socioeconomic success. So, a person with an IQ of 130 is statistically far more likely to succeed in navigating the problem of life than a person with an IQ of 70 — although this is never guaranteed at the individual level — but here’s the thing: this correlation breaks down after a certain point. There is no evidence that a person with an IQ of 170 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 130. In fact, many of the most impactful scientists tend to have had IQs in the 120s or 130s — Feynman reported 126, James Watson, co-discoverer of DNA, 124 — which is exactly the same range as legions of mediocre scientists. At the same time, of the roughly 50,000 humans alive today who have astounding IQs of 170 or higher, how many will solve any problem a tenth as significant as Professor Watson?
François Chollet, “The impossibility of intelligence explosion” at Medium

Chollet believes that human intelligence will continue to increase with the advance of civilization and of AI, but at a linear pace, as with other scientific achievements, and that transcendence will remain the stuff of movies.

In a classical tale, Daphne escaped an unwanted suitor by becoming a tree.

In remote antiquity, our ancestors believed that human-like intelligence could survive in, say, a fish or a tree; the concept survives in fairy tales and fantasy. One wonders whether some AI advocates hold a similar view of human intelligence: that it is a transferable thing rather than our distinctly human response to our environment.

Note: François Chollet is the author of Deep Learning with Python (2017).

See also: Machines just don’t do meaning And that, says a computer science prof, is a key reason they won’t compete with humans

Human intelligence as a halting oracle (Eric Holloway)

Meaningful information vs. artificial intelligence (Eric Holloway)

AI is indeed a threat to democracy. But not in quite the way historian Yuval Noah Harari thinks (Michael Egnor)

Should robots run for office? A tech analyst sees a threat to democracy if they don’t

Too late to prevent rule by The Algorithm? Dilbert’s creator, Scott Adams, tells Ben Shapiro why he thinks politicians soon won’t matter.

and

How AI could run the world Its killer apps, in physicist Max Tegmark’s tale, include a tsunami of “message” films

