Mind Matters Natural and Artificial Intelligence News and Analysis
Analyst working with computer in Business Analytics and Data Management System to make report with KPI and metrics connected to database. Corporate strategy for finance, operations, sales, marketing.
Photo licensed via Adobe Stock

The machine is not talking to you. You are talking to yourself.

At Futurism, Maggie Harrison discusses the reasons humans decide that AI is “alive.”

Maggie Harrison, a staff writer for Futurism, offers a no-nonsense response to those who believe in the humanity of the chatbot LaMDA, as announced in June by Google software engineer Blake Lemoine. First, she notes, the belief is not uncommon among software engineers:

As Cade Metz wrote for The New York Times, many in the AI industry hold beliefs similar to Lemoine’s. One prominent inventor, Philip Bosua, told the Times he believes OpenAI’s GPT-3 (another language modeling system like Google’s LaMDA) is also sentient. Yet another said that though he thinks GPT-3’s intelligence is somewhat “alien,” it “still counts.” There’s a clear, wide gap there between those who think the machine is alive, and the simple computer science backing those who say otherwise. The reasons for it might not be readily evident, but a bridge between the two demonstrating just how one would cross the threshold from non-believer to believer has…actually existed for decades.

Maggie Harrison, “Analysis: No, AI Isn’t Sentient, You Absolute Morons” at Futurism (August 9, 2022)

But the belief wasn’t uncommon among therapy seekers in the 1960s either. She mentions, of course, the Eliza effect, first described in the 1960s by MIT professor Joseph Weizenbaum (1923–2008), who observed the intense emotional attachment many people developed to a chatbot of that name.

As she notes, anthropomorphism is a very common tendency in humans. Children take their teddy bears’ feelings seriously. Adults laugh at cartoons featuring clever roosters and genius cats. Techies mutter darkly that the machines all belong to a union and, when they perform oddly, we should first just be patient and say nice things to them… Of course we don’t really behave as though those things were true. It is just a way of thinking. But, as Harrison notes, the way AI is discussed probably encourages the tendency:

For example, calling a machine ‘intelligent’ insists that it is so, if artificially. And though intelligence — the capacity to gain and apply knowledge — isn’t a synonym for sentience, the two are often equivocated.

Maggie Harrison, “Analysis: No, AI Isn’t Sentient, You Absolute Morons” at Futurism (August 9, 2022)

Indeed. Sentience is the ability to feel. A dog may be very sensitive to both canine and human emotions but he can’t tell you the cube root of 30 in a millisecond. A computer, programmed to do so, could tell you the cube root of 30 in a millisecond without any consciousness or intelligence at all.
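The cube-root point is easy to demonstrate. Here is a minimal Python sketch (the specific timing call and output formatting are illustrative choices, not from the article) showing that a machine produces the answer in well under a millisecond with no understanding of what a cube root is:

```python
import time

# Time how long it takes to compute the cube root of 30.
start = time.perf_counter()
cube_root = 30 ** (1 / 3)  # no consciousness required
elapsed_ms = (time.perf_counter() - start) * 1000

# cube_root is roughly 3.107233; elapsed_ms is typically far below 1 ms
print(f"cube root of 30 = {cube_root:.6f}, computed in {elapsed_ms:.5f} ms")
```

The calculation is a single floating-point operation; speed here reflects raw arithmetic, not intelligence or sentience in any sense a dog, or a person, has.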

These issues can become important when we are assessing questions such as whether artificial intelligences should hold patents or copyrights. As computer engineering prof Robert J. Marks would say, “Artificial intelligence (AI) should no more be given a patent on an invention than my word processor should be granted a copyright on the article I’m writing.”

You may also wish to read: Google’s chatbot LaMDA sounds human because — read the manual… What would you expect LaMDA to sound like? Whales? ET? I propose a test: “Human until PROVEN otherwise.” It’s impressive but, studying the documentation, I think I know what happened to Blake Lemoine. He was hired to chat with LaMDA and didn’t understand… (Eric Holloway)


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
