
Why Are We Obsessed With How Smart AI Is?

The people with the most specific knowledge should be assessing applications for AI and their risks.

Some say we will soon achieve artificial general intelligence (AGI); others say no, that humans will always be smarter than computers. If you are in the former camp, you probably think that AI will dramatically increase productivity and cause massive unemployment. If you are in the latter camp, you likely think the economic impact will be small.

But why are we obsessed with intelligence? Do we think the smartest person becomes the CEO? No, few people would make this argument, because CEOs are supposed to accomplish things, usually generating profits by creating new businesses, cutting costs, or combining resources in new and useful ways. You can criticize the emphasis on profits, but CEOs do not get their jobs by taking exams.

Why the Focus on Machine “Intelligence”?

So why don’t we think about AI in the same way? Why do we keep focusing on how well ChatGPT and other LLMs write, or how well they perform on exams? We would never give our CEOs an exam, yet studies have been comparing ChatGPT and earlier AI systems to humans on exams for years.

Geoffrey Hinton, the “Godfather of AI,” says the technology will get smarter than humans. The science news site ScienceAlert reports: “Five Experts Explain Whether AI Could Ever Become as Intelligent as Humans.” SoftBank’s Masayoshi Son says artificial general intelligence is so powerful that within a decade it will surpass all human knowledge. And MIT Technology Review says: “AI hype is built on high test scores. Those tests are flawed.”

The fourth article should make people suspicious of claims that AI will soon become smarter than humans. It summarizes the results of the many tests that have been used to assess AI, showing that many of the tasks AI cannot handle are easily handled by young children. Referring to one problem that AI failed, the article notes:

“This is the sort of thing that children can easily solve. The stuff that these systems are really bad at tends to be things that involve an understanding of the actual world, like basic physics or social interactions — things that are second nature for people.”

At least some people recognize that our obsession with intelligence can lead to problems. For instance, the Carnegie Endowment for International Peace published an article titled “How Hype Over AI Superintelligence Could Lead Policy Astray.” It argues that such hype can distract us from more practical problems, such as disinformation (e.g., the faked image of an explosion at the Pentagon), wrongful arrests driven by flawed facial recognition (e.g., a woman eight months pregnant arrested for carjacking), and traffic congestion caused by self-driving vehicles, problems that won’t go away even if AI passes more exams.

The Various Types of Intelligence

One of the best examples of an AI leader going down the rabbit hole of how intelligent AI will become is OpenAI’s CEO Sam Altman. According to a profile of him in New York Magazine, he “has a disconcerting penchant for using the term median human,” a phrase that seemingly amounts to a robotic tech-bro version of “Average Joe.”

He says,

“For me, AGI” — artificial general intelligence — “is the equivalent of a median human that you could hire as a co-worker.” He explained this theoretical AI would be able to “do anything that you’d be happy with a remote coworker doing just behind a computer, which includes learning how to go be a doctor, learning how to go be a very competent coder.”

Unfortunately, Altman’s use of the term median human is the type of vague pronouncement that reveals the limitations of his own intelligence. People are intelligent in different ways. Some have knowledge in one subject and some in others. Some have theoretical knowledge and some practical, at various levels, from coding and engineering to metalworking, woodworking, gardening, and animal husbandry. This is one reason why so many different tests have been used to assess AI.

Altman’s vague pronouncements bother other people too, particularly when he talks about risks to society. According to an engineer interviewed for the New York Magazine article,

“To mitigate risks from a technology, you need to define, with precision, what that technology is capable of doing, how it can help and hurt society — and Altman sticks to generalities when he says AI might annihilate the world. (Maybe someone will use AI to invent a superbug; maybe someone will use AI to launch nukes; maybe AI itself will turn against humans — the solutions for each case are not clear.)”

Furthermore, the engineer argues,

“We don’t need a new agency. AI should be regulated within its use cases, just like other technologies. AI built using copyrighted material should be regulated under copyright law. AI used in aviation should be regulated in that context. Finally, if Altman were serious about stringent safety protocols, he would be taking what he considers to be the smaller harms far more seriously.”

The people with the most specific knowledge should be assessing applications of AI and their risks, not Sam Altman or his friends. The recent agreement between screenwriters and the Hollywood studios, which set limits on how AI can be used in writing, suggests others think this way too.

What about those university exams that ChatGPT passed? The biggest lesson from giving university exams to ChatGPT is that students should be tested in other ways. We need to test students on problem-solving rather than on regurgitating information, because problem-solving is what we want graduates to do in organizations.

And how should we assess ChatGPT and other forms of AI? We need to look at their accomplishments, just as we do when we assess a person’s candidacy for promotion. Look at how well AI improves processes and whether it meets or exceeds customer expectations while using fewer resources. Who cares whether ChatGPT becomes smarter than humans? Let’s use it to make people’s lives better.


Jeffrey Funk

Fellow, Walter Bradley Center for Natural and Artificial Intelligence
Jeff Funk is a retired professor and a Fellow of Discovery Institute’s Walter Bradley Center for Natural and Artificial Intelligence. His book, Competing in the Age of Bubbles, is forthcoming from Harriman House.
