The AI Bubble: Hype, Reality, and Consequences
In this week's Mind Matters News podcast, Gary Smith, the Fletcher Jones Professor of Economics at Pomona College, discusses with host Robert J. Marks what generative AI is and isn't good for. AI has become the latest shiny object in technology, drawing massive investments and media attention. Some claim it will revolutionize industries and replace jobs, while others warn of overhyped expectations. Smith draws parallels between the current AI boom and past economic bubbles, such as the dot-com bubble that burst in the early 2000s. A transcript is available here.
Investors are pouring money into AI ventures with the hope of unprecedented returns. But Smith argues that these expectations are unrealistic. For instance, it has been estimated that generative AI companies would need to generate $600 billion annually to justify current investments, a target he believes is unattainable. This echoes the inflated valuations of companies during the dot-com era, which ultimately crashed when revenue failed to meet lofty projections.
Smith explains that, as in the dot-com bubble, many investors are driven by FOMO (fear of missing out) rather than by sound financial judgment. He cites historical examples, such as Isaac Newton’s losses during the South Sea Bubble, to illustrate how even the smartest minds can fall victim to speculative frenzy. AI hype may lead to a similar “bubble pop” as reality catches up with inflated expectations.
Generative AI as a Financial Advisor?
One reason for skepticism around AI lies in its fundamental limitations. Generative AI, such as ChatGPT, relies on statistical correlations to perform tasks. It can predict the next word in a sentence or identify patterns in data, but it does not understand meaning. Smith emphasizes this distinction, stating, “Statistics is not understanding.”
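The "statistics is not understanding" point can be made concrete with a toy model. The sketch below is an illustrative bigram predictor (the corpus and words are invented for this example, not drawn from the podcast): it picks the next word purely by counting which word most often followed the current one, with no notion of what any word means.

```python
from collections import Counter, defaultdict

# Toy corpus -- invented for illustration.
corpus = (
    "the market rose today . the market fell today . "
    "the market rose sharply today ."
).split()

# Count which word follows each word (bigram statistics).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

# "rose" followed "market" twice, "fell" once, so the model says "rose".
print(predict_next("market"))
```

The model confidently predicts "rose" after "market" for no reason other than frequency; it has no idea what a market is or whether rising is likely. Large language models are vastly more sophisticated, but the underlying move, predicting continuations from statistical patterns, is the same in kind.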
This lack of understanding has real-world implications. For example, AI-powered investment funds, designed to make stock trading decisions, have consistently underperformed. Smith and a student studied 10 fully AI-managed funds and found that all of them trailed the S&P 500; half have since shut down. Even funds that combined AI with human oversight underperformed the market on average.
These failures highlight a critical flaw in AI: it cannot distinguish between meaningful correlations and coincidental patterns. As Smith points out, statistical models can find patterns between completely unrelated variables. Without understanding the underlying data, AI is prone to drawing false conclusions.
Misuses and Risks of AI
AI’s statistical nature becomes even more concerning when applied to sensitive areas like job applications, loan approvals, and criminal sentencing. Decisions based solely on statistical correlations risk reinforcing biases and errors. For instance, a model might favor or disadvantage certain groups based on coincidental correlations, perpetuating inequality.
Smith connects this issue to the “replication crisis” in science, where researchers often publish findings based on weak or coincidental correlations. These findings fail when others attempt to replicate them, undermining scientific credibility. Similarly, AI’s reliance on patterns without understanding could lead to poor decision-making and ethical dilemmas.
AI as Addictive Entertainment
Generative AI is also criticized for its role as “addictive entertainment.” Many users engage with AI tools like ChatGPT for trivial purposes, such as asking absurd questions or having pretend conversations with celebrities. While entertaining, these activities do little to boost productivity and can even waste time.
Smith notes that AI’s lifelike responses make it easy for users to become overly reliant on it, especially students. Studies show that many students use AI to complete assignments, bypassing critical thinking and problem-solving skills. Rather than fostering learning, this reliance may hinder intellectual development.
Productive Uses of AI
Despite its flaws, AI can be a helpful tool when used appropriately. For instance, writers can use generative AI to refine drafts, and designers can employ AI as a creative aid. However, ethical questions arise when AI is used to create original works. Should AI-generated content be eligible for copyright or patents? Determining where AI serves as a tool versus where it functions as the sole creative force is a complex and ongoing debate.
Smith stresses that users must fact-check AI-generated outputs, as they are often confidently wrong. AI may produce compelling text or visuals, but without understanding, it cannot verify accuracy. This makes human oversight essential.
Lessons from the AI Boom
Smith offers several lessons for navigating the AI hype. First, investors should critically assess claims made by AI proponents, many of whom have vested interests in promoting the technology. Second, following Warren Buffett’s advice, people should avoid investing in things they don’t fully understand. Finally, AI’s limitations must be acknowledged, especially its inability to separate meaningful correlations from spurious ones.
Conclusion
The current AI boom mirrors past economic bubbles, driven by hype and inflated expectations. While generative AI has impressive capabilities, its reliance on statistical patterns without understanding poses significant risks. Misuses of AI in sensitive areas and its role as addictive entertainment further highlight the need for caution.
AI is not a replacement for human intelligence but a tool to complement it. By recognizing its limitations and applying it responsibly, society can harness its benefits without falling victim to the bubble's inevitable burst.