Image: Inferno social media hype concept art poster (Adobe Stock)

AI Pessimists vs True Believers: A Deepening Divide

There is a growing divide about AI, even among AI experts, but many of them like polarizing content, just as consumers of political information do

Sequoia’s David Cahn has said many times that $600 billion in annual AI software revenues is needed to justify the huge investments in generative AI infrastructure, yet OpenAI (about $4 billion), Microsoft, Palantir, and all the others combined are generating only about $17 billion in annualized revenues right now. Goldman Sachs’ Jim Covello, Citadel’s Ken Griffin, and long-time bull Ed Yardeni say similar things, each arguing that there is a huge generative AI bubble.

Gary Smith and I also emphasize revenues in our analyses. We use this metric to demonstrate that today’s AI bubble is much bigger than the Internet bubble. Although both bubbles involved huge outlays on infrastructure, internet users were willing to pay about $1.5 trillion (in 2024 dollars) in 2000, in the form of subscriptions to internet services, purchases of personal computers, and outlays on e-commerce. That is far more than the $17 billion PitchBook estimates GenAI users are paying in 2024. Yet the Internet bubble still popped.

The most recent earnings reports from Google, Microsoft, Meta, Apple, and Amazon continued to ignore AI software revenues. Instead, they emphasized the usual rise in cloud revenues. For example, Google claims AI is writing a quarter of its code, but experts say this is mostly autocomplete. This is despite a growing recognition that the age of experimenting with AI is over; “now they have to show the returns.”

MIT’s Daron Acemoglu recently said that vast amounts of money are being wasted on AI, while Cambridge University computer scientist Neil Lawrence has criticized the hype from UK politicians and Silicon Valley. New York University’s Gary Marcus has been a regular critic of AI for years and details these criticisms in his just-published book Taming Silicon Valley.

The True Believers

On the other side are investment analysts who believe that big productivity improvements and profits are right around the corner. A survey of 130 Bank of America equity research analysts found that “enterprise AI implementations are moving from pilots to production, which could boost S&P operating margins by 200 basis points (bps) over the next five years, equivalent to approximately $55 billion in cost savings, annually.” Some of this optimism comes from OpenAI, which forecasts $100 billion in revenues in 2029, its first year of profitability.

These types of optimistic analyses are a major reason why the market capitalizations of the Magnificent Seven (Nvidia, Google, Microsoft, Apple, Amazon, Tesla, and Meta) have increased by almost $10 trillion since January 2023, and may go higher. After all, institutional investors such as pension funds pay a lot of attention to research analysts.

Nobel Prizes, SpaceX and Optimistic Forecasts

The Nobel Prizes awarded for physics and chemistry, the successful landing of SpaceX’s Super Heavy booster rocket with the help of “metal chopsticks,” and Elon Musk’s recent “Reveal” event also contribute to this optimism. Although the Nobel committee didn’t mention ChatGPT in its press release, many journalists conflated the prizes with generative AI by asking the recipients what they thought about these services. The Wall Street Journal used SpaceX’s success to hype AI in an article entitled “SpaceX and Our Continuing Age of Discovery.”

The Journal also recently asked 11 experts to give their projections for 2030, and nine of them were very optimistic. New York University’s Amy Webb said we will each use “advanced AI agents designed to replicate and emulate our unique decision-making processes.” Gartner’s Erick Brethenoux said AI agents “will protect us from receiving email, phone calls, texts and instant messages that aren’t of much use to us, along with automatically responding to them.” (Eric Schmidt, former CEO of Google, wasn’t one of the 11 experts, but he has also talked optimistically about agents, even dangerous ones, as has the Financial Times recently.)

Looking back from an imagined 2030 for the WSJ, Stanford’s Erik Brynjolfsson writes that “AI systems reached unprecedented levels of capability, reshaping industries and jobs alike. Over half the Fortune 500 vanished.” Wharton’s Ethan Mollick has said, “We must prepare for a future where AI’s long-term effects surpass our current imaginations of what it can do — even as its short-term influence may fall short of the most ambitious predictions.”

Claiming that short-term trends don’t matter is a common refrain among true believers, probably because the current short-term trends aren’t very positive, even though they should be consistent with any such long-term forecast.

Short-Term Trends

I have already mentioned the slow growth in AI software revenues, but there are other issues as well. The frequency of hallucinations has been a big topic for years. The prestigious journal Nature recently published an article titled “Larger and more instructable language models become less reliable,” and IEEE Spectrum basically agrees. The article’s abstract begins: “The prevailing methods to make large language models more powerful and amenable have been based on continuous scaling up (that is, increasing their size, data volume and computational resources) and bespoke shaping up (including post-filtering, fine tuning or use of human feedback). However, larger and more instructable large language models may have become less reliable.”

A second paper challenges another pillar of AI hype: the purported increase in formal reasoning capability in AI models. The paper is authored by six Apple AI researchers, researchers who have every incentive to claim that they found evidence of reasoning. But they didn’t. Instead, their paper says: “current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data.”

A third recent paper, again from Nature, highlights an equally vexing problem: model collapse. This term describes the increasing homogenization of models as computer-generated text replaces human-generated data on the internet, a process that eliminates fresh data for generative AI to be trained on. The paper concludes: “We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs).”
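The mechanism is easy to demonstrate outside of LLMs. Below is a minimal Python sketch (a hypothetical toy setup of my own, not the paper’s actual experiments) in which each generation fits a simple one-dimensional Gaussian to samples drawn from the previous generation’s fitted model. Estimation error compounds across generations, the fitted spread tends to drift toward zero, and the tails of the original distribution disappear, which is precisely the effect the paper describes.

```python
# Toy illustration of "model collapse": each generation is trained only on
# the previous generation's synthetic output. (Hypothetical sketch; the paper
# uses LLMs, variational autoencoders, and Gaussian mixture models.)
import numpy as np

rng = np.random.default_rng(42)
n = 50  # a small per-generation "training set" makes the effect visible quickly

# Generation 0 trains on real, human-generated data: a standard normal.
data = rng.normal(loc=0.0, scale=1.0, size=n)

for gen in range(201):
    # "Train" the model: maximum-likelihood fit of a Gaussian.
    mu, sigma = data.mean(), data.std()
    # What fraction of the training data still lies in the tails?
    tail = np.mean(np.abs(data) > 2.0)
    if gen % 20 == 0:
        print(f"gen {gen:3d}: sigma={sigma:.3f}  tail fraction (|x|>2) = {tail:.3f}")
    # The next generation trains only on synthetic, model-generated samples.
    data = rng.normal(loc=mu, scale=sigma, size=n)
```

With only 50 samples per generation, the fitted standard deviation typically shrinks markedly within a couple of hundred generations; larger training sets slow the drift but do not, in general, eliminate it.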

The short-term trends don’t look good, and any long-term forecast should be consistent with short-term trends. Even overly optimistic research analysts will eventually notice the short-term trends.

Social Media

Might social media such as LinkedIn provide a more balanced viewpoint? LinkedIn is filled with optimistic posts from tech influencers such as Steven Nouri, Allie Miller, Linas Beliunas, and Cassie Kozyrkov. Their posts typically receive more than a thousand likes and are filled with terms such as “breakthroughs,” “game-changers,” “this is brilliant,” “this is wild,” and “nailed it” about product announcements and other events, including Tesla’s recent event, even though investors dubbed it a flop.

I must admit that I am a bit jealous because my likes are in the hundreds, far from what they get. But recently I posted an article quoting Acemoglu with the hyperbolic title “Companies Wasting Billions on AI Spending.” The post received more than 1,300 likes and 220,000 views. Surprisingly, more than one-fourth of the likes came from data scientists, computer scientists, and others with titles suggesting they work in AI.

I concluded that there is a growing divide about AI even among AI experts. Many of those experts like polarizing content, just as the consumers of political information do. The world is headed down an interesting road — and I don’t mean interesting in a positive way.

Jeffrey Funk is a retired professor and winner of the NTT DoCoMo Mobile Science Award. His sixth book, Unicorns, Hype and Bubbles, hit bookstores on October 22.

