Sundar Pichai Says AI Will Be as Big as Fire
The AI bubble is going to pop.

Ask someone how big AI will be, and the answer is likely "huge." But how big is huge?
Why does this matter? Because big forecasts encourage big investments, trials, and purchases. After big consulting companies predicted eight years ago that AI would generate economic gains of about $15 trillion by 2030, many countries and companies felt the need to pay those same consultants for reports of their own. Naturally, the consultants concluded that those countries could see rapid productivity gains, and those companies rising profits, if they implemented AI in the right way, which was, of course, under the guidance of the consulting companies!
Eight years later, few of those predictions have come true. Yet the optimistic forecasts are back, this time for generative AI, and the stock market seems to have taken them seriously: the combined increase in market capitalization for Alphabet, Amazon, Apple, Microsoft, Nvidia, and Netflix reached $4 trillion at various points this year. And, by the way, JP Morgan says those stocks are still undervalued.
“How Big” Is an Old Question
The question of how big is big is not a new one. It has been addressed by many historians and by economists concerned with creative destruction and other aspects of innovation. Steam engines made the kind of power once supplied by water wheels available everywhere, not just near moving water. Grinding grain, sawing lumber, cleaning woven cloth, and making paper or wrought iron were just a few of the early applications, with machine tools joining the list in the 19th century. Electricity provided further benefits, allowing machine tools in factories to be arranged by work process. Previously, machines drew power from steam engines through belts in the factory ceilings, and those belts determined where machines could sit on the factory floor.
These technologies gave us cheaper food, clothing, and manufactured goods, the last of which became even more important as new products were added in the late 19th and early 20th centuries. Automobiles, appliances, elevators, bicycles, sewing machines, light bulbs, and subways, along with ever more efficient ways of manufacturing them, ushered in the steady improvements in living standards of the 19th and early 20th centuries. This is documented by Robert Gordon in his seminal 2016 work, The Rise and Fall of American Growth.
Gordon is just the latest of many academics who have entered the argument over which technologies were the most important. This is another way of asking how big is big. Market sizes could be compared, benefits to humans that go beyond market sizes could be tallied up, and the number of lives saved or the contribution to human longevity could be estimated.
Unfortunately, a declining number of people can participate in these discussions because today’s academics address these questions not through logic, examples, and other evidence but through excessive quantitative analysis. For instance, as we have described elsewhere, many responses to the question of “how big robots will be” have used “the U.S. Department of Labor’s O*NET database, which assesses the importance of various skill competencies for hundreds of occupations. For example, using a scale of 0 to 100, O*NET gauges finger dexterity to be more important for dentists (81) than for locksmiths (72) or barbers (60).” Researchers coded occupations as either automatable or not and correlated those yes/no assessments with O*NET’s scores. “Using these statistical correlations, the researchers then estimated the probability of computerization for 702 occupations.”
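The coding-and-correlating procedure described above can be sketched in a few lines. In this illustration, the finger-dexterity scores for dentists, locksmiths, and barbers come from the passage; the remaining occupations and all of the automatable labels are hypothetical stand-ins, not actual O*NET or research data.

```python
# Sketch of the approach the text critiques: hand-label occupations as
# automatable (1) or not (0), then correlate the labels with a skill score.
from statistics import mean, pstdev

occupations = {
    # name: (finger_dexterity_score, hand_labeled_automatable)
    "dentist":    (81, 0),  # score from the article
    "locksmith":  (72, 0),  # score from the article
    "barber":     (60, 0),  # score from the article
    "teller":     (40, 1),  # hypothetical
    "file clerk": (35, 1),  # hypothetical
    "assembler":  (45, 1),  # hypothetical
}

scores = [s for s, _ in occupations.values()]
labels = [a for _, a in occupations.values()]

def point_biserial(x, y):
    """Correlation between a continuous score x and a 0/1 label y."""
    mx, my = mean(x), mean(y)
    cov = mean((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

r = point_biserial(scores, labels)
print(f"correlation between dexterity and 'automatable' label: {r:.2f}")
```

With these made-up labels the correlation is strongly negative, and a researcher following this recipe would extrapolate from exactly such a fit to estimate "probabilities of computerization" for hundreds of occupations; the critique in the text is that the hand labels, not the statistics, do all the work.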
Unfortunately, not only have these studies wildly overestimated the impact of robots and AI over the last ten years, but they have also distracted us from more logical analyses.
Everett Rogers gave us common sense back in 1962. His book, The Diffusion of Innovations, published long before management gurus became common, is still one of the best-selling books on innovation of all time. It describes the first users of many innovations and the reasons others did not become first users.
Geoffrey Moore expanded on Rogers’ ideas in his 1991 bestseller, Crossing the Chasm. The chasm was the gap between early and mainstream markets, a gap suppliers could cross only by redesigning their products and refining their marketing messages for mainstream customers.
The point of Moore’s and Rogers’ books was that an understanding of early users can tell us a lot about a product’s future, an understanding that many practitioners and theorists once displayed.
AI’s Problems Are Evident
My occasional colleague, Gary Smith, and I have written extensively, in many articles, about the lack of successful applications for AI’s large language models, along with their lack of critical thinking and their hallucinations. But you don’t have to believe us. Look at the Wall Street Journal, one of the most popular sources of business information for decades.
Some of their recent articles are downright pessimistic, with titles in the last month such as “Early Adopters of Microsoft’s AI Bot Wonder if It’s Worth the Money” and “Google and Anthropic Are Selling Generative AI to Businesses, Even as They Address Its Shortcomings” while others try to satisfy both sides: “AI Is Taking On New Work. But Change Will Be Hard — and Expensive.”
The third article quotes the CEO of the Boston Consulting Group: “Many employers are finding that they are spending more money on AI than they are realizing in productivity improvements.” The article also says: “With some AI software priced at $30 per employee a month, a number of executives have questioned the price tag,” a direct reference to Copilot.
When $30 a month is considered a lot of money, you know the productivity improvements are small. After all, most white-collar workers earn at least $75,000 a year, and often several times that amount; at $75,000, or roughly $6,250 a month, the software costs less than half a percent of salary.
And then there is the fact that the article presents no examples of companies trying to transform their main area of work (outside of brief mentions of coding and Hollywood), much less a success story. Instead, it emphasizes emails, call centers, marketing, and other support work:
“Chip maker Qualcomm is looking to create more marketing content for social-media platforms like TikTok.” A manufacturer of instruments, Thermo Fisher Scientific, “is using generative AI in corporate functions such as marketing to help write advertising content.”
At Ecolab, a water-management and infection-prevention company, executives are testing generative AI to analyze earnings reports from rivals and to help in preparing for its own calls with investors.
But for the most amusing application, here is one for human resources. Cisco had generative AI analyze the chat logs between an employee and manager, “hoping to suss out the source of the tension.” “The manager didn’t feel heard as the employee asked some of the same questions over and over. The employee, seeking as much clarity as possible, could sense a high degree of frustration.” “There was just this ‘Aha!’” says Cisco’s top human-resources executive.

Do any of these applications sound as big as those for water wheels, steam engines, or electricity described above, or for fire, to which Alphabet’s CEO Sundar Pichai has compared AI?

The AI Bubble Will Burst
The AI bubble will pop because AI is not as big as fire. There are some good applications, and experimentation by companies will undoubtedly find them. But we all wish that this search could be done more effectively, and we wish universities provided us with better tools for doing it. Some of their recent analyses are better, but they still suffer from the same top-down, we-know-how-to-do-it logic. They should remember the logic used by Rogers and Moore, not only because it likely works better than their quantitative assessments, but also because it might help their students find real opportunities.