
Let’s Call AI What It Really Is: Faux Intelligence

Gary Smith at Salon: While GPT-3 can string words together in convincing ways, it has no idea what the words mean

Pomona College business and investments prof Gary Smith warns Salon readers not to assume that human-sounding chatbots amount to more than they really do.

He notes that in the 1960s, a pioneer chatbot called ELIZA convinced many psychiatric patients that they were interacting with a real psychiatrist. The machine simply repeated back their statements as questions, a popular psychiatric technique at the time because it generated more and more discussion — from the patient. The patients’ belief that they were interacting with a human being came to be called the Eliza effect.

Has much changed?

If you play around with GPT-3 (and I encourage you to do so) your initial response is likely to be astonishment — a full-blown Eliza effect. You seem to be having a real conversation with a very intelligent person. However, probing deeper, you will soon discover that while GPT-3 can string words together in convincing ways, it has no idea what the words mean. Predicting that the word down is likely to follow the word fell does not require any understanding of what either word means — only a statistical calculation that these words often go together.

Consequently, GPT-3 is prone to making authoritative statements that are utterly and completely false. Consider this exchange:

Human: I have 47 pennies. I throw away 44 of these pennies and divide the remaining pennies into three groups. How many pennies are in each group?

AI: There would be 3 pennies in each group.

Human: Are you sure?

AI: Yes, I am sure. The result of 44 divided by 3 is 14.4, so the remainder after accounting for 44 pennies would be 3. Therefore, each group of pennies will have 3 pennies.

GPT-3’s answer and explanation are given confidently but are entirely incorrect. The answer is not determined by dividing 44 by 3. The result of 44 divided by 3 is not 14.4. The last sentence does not follow from any of the previous sentences and the correct answer is not 3.

Gary N. Smith, “An AI that can ‘write’ is feeding delusions about how smart artificial intelligence really is” at Salon (January 1, 2023)
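For the record, the arithmetic the question actually calls for takes two elementary steps: 47 minus 44 leaves 3 pennies, and 3 pennies divided into three groups puts 1 penny in each group. A minimal Python sketch of the calculation GPT-3 failed to perform:

```python
# Minimal sketch of the arithmetic the prompt asks for
# (not how GPT-3 works; just the calculation it got wrong).
total_pennies = 47
thrown_away = 44
groups = 3

remaining = total_pennies - thrown_away   # 47 - 44 = 3
per_group = remaining // groups           # 3 // 3 = 1

print(f"{remaining} pennies remain; {per_group} in each group")
# -> 3 pennies remain; 1 in each group
```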

The root cause of the error is that GPT-3 only identifies likely sequences of words; it neither calculates nor responds to the logic of what is being asked. So, as long as no human intervenes at the chatbot’s end, nonsensical dialogues like this one are comparatively easy to produce.
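To make that concrete, here is a toy sketch of a purely statistical next-word predictor: a bare-bones bigram counter run over an invented three-sentence corpus (this is not GPT-3’s architecture, only the principle in miniature). It predicts “down” after “fell” because the pair co-occurs often, not because it knows anything about falling.

```python
from collections import Counter, defaultdict

# Toy corpus, invented purely for illustration.
corpus = "the boy fell down . the stock fell down . she fell asleep .".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("fell"))  # -> 'down' (by counting, not by understanding)
```

GPT-3 is vastly more sophisticated than a bigram counter, but the point of the comparison stands: the prediction comes from patterns of co-occurrence in text, not from any model of pennies, falling, or anything else.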

But many chatbot users do not test the program that way. Some may believe, like ELIZA’s conversation partners, that they are dealing with a human being.

Perhaps we need a new name for the Eliza effect, for a new era. Possibly, the “Internet speaks” effect?

You may also wish to read: Chatbots: Still dumb after all these years. Intelligence is more than statistically appropriate responses. Despite the boasts of Google and OpenAI about their human-like chatbot models, algorithms lack the understanding needed for a true conversation. (Gary Smith)


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
