Image Credit: Papisut - Adobe Stock

Why Generative AI Will Not End Up Running the World

Economics prof Gary Smith and computer engineering prof Robert J. Marks talk about what we should and shouldn't believe about claims that computers are taking over

In a recent Mind Matters News podcast, host Robert J. Marks interviewed Pomona College economics professor Gary Smith on the promises and pitfalls of generative AI.

A particular focus was its inability to solve everyday problems reliably. Smith, co-author with Jeffrey Funk of a recent article, “The Promise of Artificial General Intelligence is Evaporating,” shared his insights on why artificial general intelligence (AGI)—the idea that AI can match human reasoning, creativity, and understanding—remains unattainable under current approaches.

Here are some of the main points of discussion:

What is artificial general intelligence (AGI)?

AGI is a system theoretically capable of performing anything a human can do, including reasoning, learning, and creativity. It would require a machine to think beyond its training data, solving problems and making decisions in novel situations. While generative AI, such as OpenAI’s large language models (LLMs), excels at synthesizing responses based on vast amounts of data, it falls short of the reasoning and adaptability that define human intelligence.

Smith noted that AGI has been hyped by tech companies and media. Often, their claims involve conflating impressive feats of pattern recognition with genuine intelligence. For instance, some proponents claim that AGI is near — or even here — because LLMs can generate plausible-sounding text or solve simple logical puzzles. However, as Smith pointed out, these systems often fail when faced with real-world or slightly modified problems, exposing their lack of understanding.

Failures with everyday problems

Smith emphasized that LLMs often struggle with practical, everyday tasks that humans handle with reasonable success. For example:

1. Logical Puzzles

He described a classic river-crossing puzzle involving a man, a goat, and a cabbage, where constraints must be met to solve the problem. When presented with the original puzzle, an LLM might provide the correct solution because it has likely encountered the problem in its training data. However, when Smith simplified the scenario to just a man and a goat, the AI produced an absurdly verbose and nonsensical response. This demonstrated its current inability to apply logical reasoning to variations of the problem.
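For readers who want to see why the simplified version should be trivial, here is a minimal brute-force sketch (not from the podcast; the function name, state encoding, and item names are our own illustration). It searches over safe sequences of crossings and finds that the man-goat-cabbage version takes three trips, while the man-and-goat version takes only one.

```python
from collections import deque

def solve_crossing(items, conflicts):
    """Breadth-first search over river-crossing states.

    items:     things the man must ferry across, e.g. {"goat", "cabbage"}
    conflicts: pairs that cannot be left together without the man
    Returns the shortest list of crossings; each crossing is the set of
    items in the boat (an empty set means the man crosses alone).
    """
    items = frozenset(items)
    start = (items, "left")                      # (items on left bank, man's side)

    def safe(bank_items, man_side, bank_side):
        # A bank is unsafe only if the man is absent and a conflicting pair is present.
        if man_side == bank_side:
            return True
        return not any(a in bank_items and b in bank_items for a, b in conflicts)

    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, man), path = queue.popleft()
        if not left and man == "right":          # everything, and the man, is across
            return path
        here = left if man == "left" else items - left
        # The man crosses alone or with any single item from his current bank.
        for cargo in [frozenset()] + [frozenset({x}) for x in here]:
            new_left = left - cargo if man == "left" else left | cargo
            new_man = "right" if man == "left" else "left"
            state = (new_left, new_man)
            if state in seen:
                continue
            if safe(new_left, new_man, "left") and safe(items - new_left, new_man, "right"):
                seen.add(state)
                queue.append((state, path + [set(cargo)]))
    return None

# Classic version: the goat may not be left alone with the cabbage -> three crossings.
print(solve_crossing({"goat", "cabbage"}, [("goat", "cabbage")]))
# Simplified version Smith describes: just a man and a goat -> one crossing.
print(solve_crossing({"goat"}, []))
```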

2. Practical Financial Decisions

In another example, Smith asked LLMs whether it was better to take a one-year car loan at 9% interest or a ten-year loan at 1% interest. The models incorrectly recommended the one-year loan because total payments were calculated without considering the time value of money — a fundamental concept in finance.
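To make the point concrete, here is a short worked sketch (the $30,000 loan amount and the 5% discount rate are illustrative assumptions, not figures from the podcast). Comparing nominal totals slightly favors the one-year loan, but discounting the payments back to today, as a present-value calculation requires, shows the ten-year loan at 1% is far cheaper.

```python
# Illustrative assumptions (not from the podcast): a $30,000 loan, monthly
# payments, and a 5% annual discount rate as the borrower's cost of money.
PRINCIPAL = 30_000
DISCOUNT = 0.05

def monthly_payment(principal, annual_rate, years):
    """Standard fixed-payment amortization formula."""
    r, n = annual_rate / 12, years * 12
    return principal * r / (1 - (1 + r) ** -n)

def present_value(payment, annual_discount, years):
    """Discount a stream of equal monthly payments back to today."""
    d, n = annual_discount / 12, years * 12
    return payment * (1 - (1 + d) ** -n) / d

for label, rate, years in [("1-year loan at 9%", 0.09, 1),
                           ("10-year loan at 1%", 0.01, 10)]:
    pmt = monthly_payment(PRINCIPAL, rate, years)
    print(f"{label}: total paid ${pmt * years * 12:,.0f}, "
          f"present value ${present_value(pmt, DISCOUNT, years):,.0f}")
```

With these numbers, the one-year loan's total payments come in a little lower, but its present value is roughly $30,600 versus roughly $24,800 for the ten-year loan, which is why comparing raw totals leads to the wrong recommendation.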

Similarly, when asked about the optimal time for a retiree to start collecting Social Security benefits, the models failed to account for key variables like lifespan and the present value of money, leading to inaccurate advice.

3. Understanding Context

Generative AI often fails to comprehend the meaning behind words or phrases.

For instance, an AI might respond to “John’s mother has three children: Snap, Crackle, and ___” by completing the phrase with “Pop” rather than “John,” demonstrating its reliance on statistical patterns rather than understanding.

4. Training on Polluted Data

A particularly troubling issue is the phenomenon of “model collapse.” As LLMs are trained on internet data—much of which is now AI-generated—their training becomes polluted with misinformation and fabricated facts. Smith shared an example where an AI confidently claimed that Russia had sent bears into space, providing fabricated names, dates, and sources. This misinformation arises because the AI replicates and amplifies errors in its training data, creating a feedback loop that degrades accuracy over time.
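As a rough intuition pump only (our own toy arithmetic, not a model from the podcast, and all three parameters are assumptions), one can sketch the feedback loop: if each new model trains partly on the previous model's output and inflates whatever errors it inherits, the error rate drifts steadily upward from the human baseline.

```python
# Toy illustration of the feedback loop (all parameters are assumptions).
HUMAN_ERROR = 0.02      # assumed error rate of human-written training text
AI_SHARE = 0.8          # assumed fraction of training data that is AI-generated
AMPLIFY = 1.3           # assumed factor by which a model inflates inherited errors

error = HUMAN_ERROR     # generation 0 trains on purely human data
for generation in range(1, 7):
    mix_error = (1 - AI_SHARE) * HUMAN_ERROR + AI_SHARE * error
    error = min(1.0, AMPLIFY * mix_error)
    print(f"generation {generation}: error rate {error:.3f}")
```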

The Illusion of Intelligence

Smith and Marks discussed how LLMs create an illusion of intelligence by generating text that seems coherent but lacks true understanding. Smith likened this to a student memorizing physics formulas without grasping the underlying concepts. Reading more textbooks may improve the student’s recall, but it won’t enhance their understanding.

Similarly, scaling up LLMs by feeding them more data offers diminishing returns, as the systems lack the reasoning capabilities to interpret the data meaningfully.

The Band-Aid Approach

Tech companies attempt to address these failures by “patching” their models—adding corrections when errors are identified. For example, after users highlighted mistakes, developers may hard-code fixes for specific queries, such as ensuring the AI completes “Snap, Crackle, and ___” with “John” rather than “Pop.” However, Smith argued that this approach is unsustainable. Fixing errors manually for millions of potential edge cases requires significant human effort and does not fundamentally improve the AI’s reasoning abilities.

Current AI cannot grasp meaning

Smith believes AGI is unlikely because current AI lacks the capacity for abstract thinking, creativity, and true understanding of the world. For example, AGI would require an ability to recognize context, adapt to new information, and make judgments beyond its training data. These traits are essential for solving real-world problems but are absent in existing models.

One major barrier is the inability of AI to grasp meaning. Marks offered an example in which an image-generating AI failed to understand the word “not.” When asked to create a picture of Times Square without pink dancing hippos, the AI included a pink hippo, illustrating its inability to process negation.

The Broader Impact

Smith also warned of the societal costs of over-reliance on AI. Generative models consume vast amounts of electricity and water, divert resources, and lead talented individuals to focus on incremental improvements in flawed systems rather than addressing more pressing problems. Additionally, the proliferation of AI-generated misinformation could erode trust in information systems, exacerbating the problem of “polluted” data.

A Cautious Future

While Smith acknowledged that AI has made remarkable advances in recent years, he urged caution in applying it to high-stakes decisions. For tasks with minimal consequences—such as prompting memory or generating creative ideas—AI can be a helpful tool. However, for critical areas like finance, law, and medicine, reliance on AI remains risky.

In conclusion, Smith and Marks argued that AGI is unlikely to emerge from current approaches, as scaling up existing models does not address their fundamental limitations. Until AI demonstrates true reasoning and understanding, its applications must be carefully limited to avoid costly and dangerous errors.


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.