Fluorescent sign (photo by Franki Chamaki on Unsplash)

Will artificial intelligence design artificial super-intelligence?

And then turn us all into super-geniuses, as some AI researchers hope? No, and here's why not

The Singularity is the idea that once we can create general artificial intelligence (intelligence like that of a human being) in software, it will enter a never-ending self-improvement loop, leading to unprecedented, dramatic changes to humanity. AI researcher Ray Kurzweil, who popularized this concept, bases it on the assumption that artificial intelligence (AI) can improve itself indefinitely:

We are entering a new era. I call it “the Singularity.” It’s a merger between human intelligence and machine intelligence that is going to create something bigger than itself. It’s the cutting edge of evolution on our planet. One can make a strong case that it’s actually the cutting edge of the evolution of intelligence in general, because there’s no indication that it’s occurred anywhere else. To me that is what human civilization is all about. It is part of our destiny and part of the destiny of evolution to continue to progress ever faster, and to grow the power of intelligence exponentially. To contemplate stopping that — to think human beings are fine the way they are — is a misplaced fond remembrance of what human beings used to be. What human beings are is a species that has undergone a cultural and technological evolution, and it’s the nature of evolution that it accelerates, and that its powers grow exponentially, and that’s what we’re talking about. The next stage of this will be to amplify our own intellectual powers with the results of our technology. Ray Kurzweil, “THE SINGULARITY: A Talk With Ray Kurzweil” at The Edge (March 24, 2001)

Is that scenario likely?

The idea does have some intuitive appeal. Perhaps the human mind has unlimited potential to invent. And if the mind could make itself smarter, then the rate of invention would go up. Recent advances in AI have led some to claim that we are, even now, on the verge of Kurzweil’s Singularity.

However, short-term gains do not necessarily entail long-term success. The fact that I can drive towards the UK from my home in Maryland does not mean that I can reach the UK by continuing to drive. Once I hit the Atlantic coast, my road trip is over. My car and I could both go to the UK, but only by sea or air, which is a different journey.

In the same way, a crucial component missing from Kurzweil’s prediction is whether there is, in fact, a path that goes all the way from point A to point B. The fact that we want to go from A to B does not imply that such a path exists, any more than wanting to go to the UK implies that a road runs there. If AI is to turn into the Singularity, we must identify a path from the current incremental improvements in AI (point A) to runaway self-improvement of AI (point B). To determine whether such a path exists, we need to examine what has led to current AI success.

Recent AI gains are mostly due to improvements in computational power and access to data; the basic techniques used to formulate and train AI models have remained more or less the same since the 1970s. For example, the well-publicized Deep Learning approach relies on a training algorithm known as backpropagation, which originated in control theory in the 1960s and was applied to neural networks in the 1970s. The convolutional neural network, a key component of Deep Learning networks, was invented in the 1980s. The core algorithms, in other words, have been with us for decades.
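
For the curious, here is a minimal sketch of what decades-old backpropagation does, written in Python with numpy. The two-layer network, learning rate, and toy XOR data are illustrative choices, not anything from the article:

```python
# Minimal backpropagation sketch: a tiny two-layer network learns XOR.
# All sizes and hyperparameters here are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 1.0

for step in range(10000):
    # Forward pass: compute the prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the error gradient back layer by layer
    # via the chain rule -- this is all backpropagation is.
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient one layer back
    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```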

When a new kind of processor, the graphics processing unit (GPU), went mainstream, AI performance jumped. GPUs can perform the same operation on many different pieces of data at once, so, applied to the large amounts of data now available, they can train very complicated AI models comparatively quickly. Modern Deep Learning networks sometimes possess millions or even billions of neurons, and they need massive amounts of data to develop general models. Processing power is thus the first ingredient of modern AI’s success; the second is the large quantity of data made available through the internet.
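
As a rough illustration of that “same operation on many pieces of data” idea, here is a sketch using numpy’s vectorized arithmetic as a stand-in for actual GPU execution; the array size is an arbitrary choice and the timings will vary by machine:

```python
# Data parallelism in miniature: apply one operation to a million
# values at once instead of looping over them. numpy's vectorized
# kernels stand in here for GPU execution; the speed gap illustrates
# why GPUs made training large AI models practical.
import time
import numpy as np

x = np.random.rand(1_000_000)

t0 = time.perf_counter()
y_loop = [v * 2.0 + 1.0 for v in x]   # one value at a time
t1 = time.perf_counter()
y_vec = x * 2.0 + 1.0                 # the same operation on all values at once
t2 = time.perf_counter()

print(f"loop:       {t1 - t0:.4f} s")
print(f"vectorized: {t2 - t1:.4f} s")  # typically orders of magnitude faster
```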

Given that AI has primarily benefited from increased data size and processing power, software architect Brendan Dixon concludes, contrary to Ray Kurzweil, that an AI winter is looming: “The worries of an impending winter arise because we’re approaching the limits of what massive data combined with hordes of computers can do.” Past AI booms and busts have, generally speaking, tracked processor improvements or the lack thereof.

In addition to this historical perspective, we can examine the question from a mathematical perspective and arrive at the same conclusion. There is a theorem known as the data processing inequality,* which states that if a dataset X contains information about a quantity Y, then processing X cannot increase the information it carries about Y.
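
In standard information-theoretic notation (textbook notation, not the article’s), the theorem says that if Z is computed from the data X alone, the information about Y cannot grow:

```latex
% Data processing inequality: if Z is produced by processing X alone,
% so that Y -> X -> Z forms a Markov chain, then the mutual
% information I can only stay the same or shrink.
Y \to X \to Z \quad \Longrightarrow \quad I(Y;Z) \le I(Y;X)
```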

AI is a form of processing, so AI algorithms can at best preserve the information in the dataset; they cannot increase it. The dataset is the source of the information that makes AI work. Therefore, in order to increase information, we must increase data. And even assuming we have access to unlimited data, the limit on how quickly the dataset can grow is the rate at which we can process it. The bottleneck in improving AI is processing throughput.
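
The inequality can be checked numerically. The following sketch uses a small made-up joint distribution and an arbitrary “processing” step; both are illustrative choices, not anything from the article:

```python
# Numerical check of the data processing inequality on a toy joint
# distribution over X in {0,1,2,3} and Y in {0,1}.
import numpy as np

p_xy = np.array([[0.20, 0.05],
                 [0.05, 0.20],
                 [0.15, 0.10],
                 [0.10, 0.15]])

def mutual_information(p):
    """I(A;B) in bits for a joint distribution p(a, b)."""
    pa = p.sum(axis=1, keepdims=True)
    pb = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / (pa @ pb)[mask])).sum())

# "Process" the data: collapse X into Z = min(X, 1), a lossy function.
p_zy = np.zeros((2, 2))
for x in range(4):
    p_zy[min(x, 1)] += p_xy[x]

print(f"I(X;Y) = {mutual_information(p_xy):.3f} bits")  # ~0.153
print(f"I(Z;Y) = {mutual_information(p_zy):.3f} bits")  # ~0.091, never larger
```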

May we expect a continuation of the boom-bust cycle for AI, and thus another spring after the predicted winter? If AI has advanced primarily because of processor improvements, then continued improvement depends on Moore’s law, which states that the number of transistors on an integrated circuit doubles roughly every two years. However, for transistor density to increase, the transistors themselves must shrink. Because there is a limit to how small a transistor can be, there is a limit to how long Moore’s law can hold. Further, because Moore’s law is exponential, the numbers multiply rapidly and we could hit the physical limit quite suddenly. Current indications are that Moore’s law has already slowed, or even ceased to describe the information technology (IT) industry accurately:

The chip industry has kept Moore’s prediction alive, with Intel leading the charge. And computing companies have found plenty to do with the continual supply of extra transistors. But Intel pushed back its next transistor technology, with features as small as 10 nanometers, from 2016 to late 2017. The company has also decided to increase the time between future generations (see “Intel Puts the Brakes on Moore’s Law”). And a technology roadmap for Moore’s Law maintained by an industry group, including the world’s largest chip makers, is being scrapped. Intel has suggested silicon transistors can only keep shrinking for another five years. Tom Simonite, “Moore’s Law Is Dead. Now What?” at Technology Review (2016)

This implies that insofar as AI advances have been due to Moore’s law, nature imposes a limit. We will reach a point, perhaps soon, when processor improvements cease, and thus AI improvements will also cease, or else yield ever-diminishing returns. That point is known as “peak AI”: the point at which further investment in AI improvement no longer yields a worthwhile return.
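
A back-of-the-envelope calculation shows how quickly an exponential law runs out of room. The starting feature size (10 nanometers, taken from the quoted piece) and silicon’s roughly 0.2 nm atomic spacing are rough assumptions for illustration; this gives an absolute outer bound, while practical limits, as the quote notes, arrive much sooner:

```python
# Moore's law halves transistor area roughly every two years, which
# shrinks feature length by a factor of sqrt(2). How long until
# features reach atomic scale? (Assumed figures: 10 nm start, ~0.2 nm
# silicon atomic spacing.)
feature_nm = 10.0
atom_nm = 0.2
years = 0
while feature_nm > atom_nm:
    feature_nm /= 2 ** 0.5  # halving area shrinks length by sqrt(2)
    years += 2
print(f"~{years} years until features reach atomic scale")  # ~24 years
```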

So, returning to our original question, whether there is a path to Kurzweil’s Singularity, we must conclude from our analysis that no such path exists and that unlimited self-improving AI is impossible.

* Note: An “inequality” of this kind is a theorem that holds no matter what the underlying data look like: however the dataset X is processed into a result Z, the information that Z carries about Y can at most equal, and never exceed, the information that X carries about Y.

Eric Holloway has a Ph.D. in Electrical & Computer Engineering from Baylor University. He is a Captain in the United States Air Force and has served in the US and Afghanistan. He is the co-editor of the book Naturalism and Its Alternatives in Scientific Methodologies. Dr. Holloway is an Associate Fellow of the Walter Bradley Center for Natural and Artificial Intelligence.

Also by Eric Holloway: How can we measure meaningful information?

and

Has neuroscience disproved thinking?

See also: How humans can thrive in a world of increasing automation (Bill Dembski)

