
Another AI Hype Bubble Pops

The age of improving giant AI models like ChatGPT is over

In a recent assessment of his company’s chatbot products such as ChatGPT, OpenAI CEO Sam Altman offered a surprising opinion to an audience at MIT: “I think we’re at the end of the era where [AI is] going to be these … giant [large language] models … We’ll make them better in other ways.”

This sobering comment stands in contrast to a prophecy by philosopher David Chalmers, who cautions about a dangerous future. He says today’s large language AI has a 20% chance of sentience within 10 years. Fired Google engineer Blake Lemoine goes further: he claims that Google’s LaMDA is already sentient.

Such AI hyperbole is not new.

Here is a thumbnail sketch of some AI history that sheds light on such claims. Santayana famously observed that “those who cannot remember the past are condemned to repeat it,” and this history is worth remembering.

Advances in AI come in quantum jumps driven by flashes of genius. The field does not evolve slowly, getting incrementally better and better. When Altman says “We’ll make [AI] better in other ways,” he means another flash of genius is needed.

Flashes of genius come sporadically. Once a genius innovation is introduced, researchers incrementally improve it by building on its foundation. The flash of genius is a single jump to nearly the top of a mountain. There are few flashes of genius and numerous follow-up papers. Some of the derivative papers are baby steps that raise the elevation just a little. Papers pointing out the limitations of the new AI are particularly important: on the mountain of hype, discovered AI limitations lower the elevation to a realistic level below the peak of hype.

We’ve Been Here Before

The first AI wave began in the late 1950s with the work of Frank Rosenblatt at Cornell and Bernie Widrow at Stanford. Widrow’s artificial intelligence was amazing: it could do voice recognition, predict the weather, and play blackjack at the level of a pro. This is captured in an old black-and-white, 60 Minutes-style interview available on YouTube. It’s fun to watch the genius of soft-spoken Bernie Widrow and see his inventions in action.

But there was also unwarranted hype in the early days of AI. Under the headline “New Navy Device Learns by Doing,” the New York Times published an article in 1958 that begins:

WASHINGTON, July 7 (UPI) — The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.

Does such hype sound familiar? This was over sixty years ago!

As outlined in the book Perceptrons, the first neural networks of Widrow and Rosenblatt were limited in what they could do. Their linearity stunted their application. Many of these limitations were later overcome by the flash of genius of error backpropagation, which made it possible to train neural networks containing many layers of neurons and thereby let neural networks go nonlinear. The error backpropagation algorithm was first published in Paul Werbos’s PhD dissertation at Harvard. It was a hit and is commonly used today: go to Google Scholar and search “backpropagation,” and you get over 800,000 publication hits.
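
For readers who want to see the linearity limitation concretely, here is a minimal sketch, assuming NumPy and a toy XOR dataset (the layer sizes, learning rate, and variable names are illustrative assumptions, not taken from Werbos’s dissertation or the Perceptrons book). XOR is not linearly separable, so a single linear unit cannot fit it, but a tiny multi-layer network trained with error backpropagation can.

```python
# Minimal sketch: a small two-layer network learns XOR via error backpropagation.
# All sizes and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two inputs -> 4 hidden sigmoid units -> 1 sigmoid output
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

lr = 1.0
for _ in range(10_000):
    # Forward pass through both layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the squared-error gradient back through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out.ravel(), 2))  # typically close to [0, 1, 1, 0]
```

A single-layer linear model trained on the same four points would stall near 0.5 for every input, which is exactly the kind of limitation the Perceptrons book documented.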

But even the new layered neural networks were constrained by something called the “curse of dimensionality”: the dimensionality and complexity of the training data had to be kept small. Deep convolutional neural networks overcame this limitation. This was another flash of genius. One early application of this deep learning was radiology diagnosis. As usual, the new AI innovation was accompanied by hype. Geoffrey Hinton, a pompous neural network researcher, claimed half a decade ago, “We should stop training radiologists now. It’s just completely obvious that within five years, deep learning is going to do better than radiologists.” But, like all new AI, the limitations of deep convolutional neural networks had not yet been vetted. According to STAT, “the number of radiologists who have been replaced by AI is approximately zero. (In fact, there is a worldwide shortage of them.) … Radiology has proven harder to automate than Hinton … imagined.”
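
To make the dimensionality point above concrete, here is a back-of-the-envelope sketch; the image size and layer widths are illustrative assumptions, not figures from any particular network. A fully connected layer needs a separate weight for every input value and every unit, while a convolutional layer reuses a small set of weights across the whole image, which is one reason deep convolutional networks could handle inputs that overwhelmed earlier layered networks.

```python
# Back-of-the-envelope comparison (illustrative sizes): weights needed to map a
# modest color image to a layer of features, fully connected versus convolutional.
height, width, channels = 224, 224, 3          # assumed image size
inputs = height * width * channels             # 150,528 input values

hidden_units = 1_000
fully_connected_weights = inputs * hidden_units            # ~150 million weights

conv_filters, kernel = 64, 3
conv_weights = conv_filters * kernel * kernel * channels   # 1,728 shared weights

print(f"fully connected layer: {fully_connected_weights:,} weights")
print(f"3x3 convolution, 64 filters: {conv_weights:,} weights")
```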

Everything Is in the Future

There have been other flashes of genius in AI. One is the GAN (generative adversarial network), introduced by Ian Goodfellow and his colleagues in 2014. Among other uses, GANs are used to generate deepfakes. Reinforcement learning, used to train AI to beat world champions at chess and Go, has developed alongside increasing computer power.

AI is hyped using delayed scrutiny. We are told mind-blowing AI is not happening now, but will occur sometime in the future. Note the pattern in the examples considered thus far:

  • David Chalmers prophesies that today’s AI has a chance of sentience “in 10 years.”
  • The 1958 NY Times article says the Navy expects its AI “will be able to” do incredible things someday.
  • Geoffrey Hinton said AI will replace radiologists “within five years.”

Everything’s in the future.  

After prophecies about new AI are made, the limitations of that AI are invariably discovered, revealing the hype to be foolish. Geoffrey Hinton’s announcement of obsolete radiologists was wrong. The N.Y. Times was wrong about the future of the Navy’s AI. And now OpenAI CEO Sam Altman’s assessment suggests that philosopher David Chalmers is wrong about his forecast of sentience arising from the advancement of current technology.

A Danish proverb applies here. “Making forecasts is dangerous, especially if it’s about the future.”

AI hype is like a financial bubble. People are fooled into believing that, even though every financial bubble in the past has popped, this time it’s different. AI has had, and will continue to have, a monumental impact on society. But when weighing these impacts, we should not trust prophesying snake-oil salesmen who seek notoriety and fortune by making premature, hyperbolic claims.

