
GPT-4: Signs of Human-Level Intelligence?

Competence and understanding matter just as much as, if not more than, mere “intelligence”

You’ve heard about GPT-3, but how about GPT-4? OpenAI has publicly released the new AI program, and researchers have already claimed that it shows “sparks” of human-level intelligence, or artificial general intelligence (AGI). Maggie Harrison writes at Futurism,

Emphasis on the “sparks.” The researchers are careful in the paper to characterize GPT-4’s prowess as “only a first step towards a series of increasingly generally intelligent systems” rather than fully-hatched, human-level AI. They also repeatedly highlighted the fact that this paper is based on an “early version” of GPT-4, which they studied while it was “still in active development by OpenAI,” and not necessarily the version that’s been wrangled into product-applicable formation.

-Maggie Harrison, Microsoft Researchers Claim GPT-4 Is Showing “Sparks” of AGI (futurism.com)

The machine isn’t perfect and still makes mistakes, but it is a marked improvement over its predecessor, and some of the hype it’s getting may be warranted. It may be beating a dead horse at this point, but the optimism surrounding AI like GPT-4, while understandable, is perhaps overwrought; Gary Smith writes here on AI’s limits with complex tasks such as legal argumentation and mathematical problem-solving. And yet the tech optimists see GPT-4 as a milestone on the journey toward AGI. Are they chasing a pipe dream, or are our chatbot overlords already looming? Harrison continues,

Still, there are a few more caveats to the AGI argument, with the researchers admitting in the paper that while GPT-4 is “at or beyond human-level for many tasks,” its overall “patterns of intelligence are decidedly not human-like.” So, basically, even when it does excel, it still doesn’t think exactly like a human does.

Regardless of how good these LLMs get at computation and problem-solving, they won’t be able to achieve the uniqueness of human thought, which goes beyond the technical: machines cannot understand their computations, but humans can. In the end, competence wins the day. Smith writes in his piece,

The most relevant question is not whether computers satisfy some endlessly debated definition of intelligence, but whether computers have the competence to be trusted to perform specific tasks. The answer will sometimes be yes, often no, and never found in conversations with text-generators.

-Gary Smith, Let’s Take the “I” Out of AI | Mind Matters

Smith points out that in matters of trust, and especially in human relationships, competence and understanding matter just as much as, if not more than, mere “intelligence.”


Peter Biles

Writer and Editor, Center for Science & Culture
Peter Biles graduated from Wheaton College in Illinois and went on to receive a Master of Fine Arts in Creative Writing from Seattle Pacific University. He is the author of Hillbilly Hymn and Keep and Other Stories and has also written stories and essays for a variety of publications. He was born and raised in Ada, Oklahoma and serves as Managing Editor of Mind Matters.
