AI Productivity Hype: The New “Cargo Cult Science”?
Physicist Richard Feynman coined the term to describe expectations of success built on a simple misunderstanding of reality
Foundation
Nobel physics laureate Richard Feynman (1918–1988) called the human propensity to easily believe wonderful, fanciful things “cargo cult science.” He named the phenomenon after the “cargo cults” that developed among Pacific Islanders who believed that planes carrying food would continue to land on their islands if they built replicas of landing strips and control towers. “The planes never came. These people missed the fact that it was the advent of war, not the presence of landing strips, that caused the planes to land there.”
AI’s purported impact on productivity and employment
Fast forward to the 2020s. A similar type of cargo cult science is being used to justify wild claims about AI’s impact on productivity and employment. The easiest way to persuade investors that you are proficient in AI is to claim that your announced layoffs, or at least your lack of hiring, stem from your success at implementing AI. In reality, a lack of demand is usually the reason for layoffs or for not hiring. And even if you later retract those claims, investors will rarely penalize you to the extent that they previously rewarded you for the layoff announcements.
Over the last year, this occurred with IBM, Klarna, and Starbucks. And the Wall Street Journal has for years been documenting the mixed impact of AI on jobs and productivity within companies that have touted AI, such as Johnson & Johnson, IBM, and many others, as I described last month in an article here at Mind Matters News.
This type of cargo cult science also extends to startups. Builder.ai claimed that it was helping companies automate coding when, in reality, coders in India were secretly writing the code behind the scenes. The truth was discovered long after Builder.ai became a unicorn, valued at more than $1 billion.
In none of these cases did the companies provide details about how AI enabled the productivity improvements that purportedly led to the layoffs. Instead, they used the layoffs, and in particular the fear of them, to make investors believe that they were rapidly reducing costs through AI, just as the Pacific Islanders believed landing strips would bring back supplies of food.

Some tech leaders have further exaggerated those fears as announcements of tech layoffs have increased. For instance, Anthropic CEO Dario Amodei “warns that rapid advances in AI could eliminate up to 50% of all entry-level white-collar jobs within the next five years,” a claim that many economists have criticized.
One economist at the W. E. Upjohn Institute for Employment Research says this is not plausible because Amodei’s prediction implies unemployment of 10% to 20%, levels not seen since the Great Depression of the 1930s.
“That is a wildly unprecedented vision,” he added, noting that in the 1980s and ’90s, computer adoption gave the world all kinds of tools that reshaped the labor market, yet labor productivity grew just 2% to 3%. Meanwhile, a site that tracks tech layoffs reports that layoff announcements peaked in January 2023 and had fallen to 30% of that level by the second quarter of 2025.
Clearly, there is no scientific support for Amodei’s comments, yet they achieved their goal: they keep investors believing in the impact of AI so that they will continue to support the valuations of AI companies, including Amodei’s own Anthropic.
Interestingly, some economists have noticed that any increase in layoffs can be better explained by changes in tax policy than by AI. Since 1954, the federal government had let companies deduct 100% of their R&D spending, including the salaries of the people doing the work, in the year it was incurred. But the 2017 Tax Cuts and Jobs Act changed this 70-year-old rule: starting in 2022, companies had to amortize domestic R&D costs over five years instead, making engineers and other R&D staff more expensive to keep on the payroll. Now, three years later, we might be seeing the impact of this policy, not that of AI.
Why is it easier to hype technologies now than in the past?
Why is the tech sector able to push its cargo cult science? This has a lot to do with the changes in media resulting from 30 years of the internet.
Back in 2019, in an article entitled “What’s Behind Technological Hype?,” I argued that hype about new technologies had increased dramatically over the previous quarter century. One factor was the reduced entry barriers for news sites that resulted from the commercialization of the internet in the 1990s. Reduced barriers meant that free sites proliferated and news publishers were forced to look for sources of income other than subscriptions. Big companies found that reputable news sites would publish their version of events for small fees, and that they could publish similar articles on their own free websites as well. It was a big win for those with money.

Today, consulting companies, big tech companies, and venture capitalists push their narratives, using big money, through reputable news sites, social media, and “influencers” who range from Mr. Beast to Sam Altman. Some analysts have concluded that many of Altman’s messages are intended to build a religion more than a company, a telling example of cargo cult science.
The internet, social media, and influencers have also contributed to a steady decline in trust in the media and other institutions. More than 70% of Americans trusted the media “a great deal” or “a fair amount” in the 1970s, but “that trust hit a record low of 31% in 2024.” Similar declines have occurred in trust in government: in 1964, 77% of Americans trusted the government to do what is right, while only 22% do today.
Who is now trusted with news? According to an op-ed in the Wall Street Journal, “a colossal 69% of today’s consumers trust influencer recommendations.” That includes recommendations from Mr. Beast, Khaby Lame, and PewDiePie, along with Sam Altman, Marc Andreessen, Peter Thiel, and many other tech bros.
You still don’t believe me? Just listen to what one young entrepreneur, a recipient of a Thiel Fellowship, says about Peter Thiel’s book Zero to One: “Probably the Best Book I’ve Read, and I’ve Only Read a Few Pages.” (By the way, it is a good book.)
Or listen to what Sam Altman says proudly about young people: “They don’t really make life decisions without asking ChatGPT what they should do.” In other words, generative AI must be great because someone uses it for free to do something important. Cargo cult science at its best.