Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Sentience and AI

Panel on AI at COSM 2022

AI’s Lack of Understanding

How are we to view AI in an era where it increasingly seems to mimic human intelligence so well?

We’ve been highlighting a number of interviews from last year’s COSM conference, which attracted many of the most celebrated and respected engineers, scholars, and scientists from around the country. In today’s featured clip, George Montañez, Assistant Professor of Computer Science at Harvey Mudd College, shares his perspective on COSM 2022’s panel on artificial intelligence: the areas of agreement and disagreement about the nature and future of the field. Montañez, like Robert J. Marks, with whom he shared the panel, thinks AI is impressive and that we’ve made incredible strides in computation, but maintains that these systems lack understanding and sentience. Blake Lemoine, the other panelist in the conversation, thinks AI is capable of a bit more than mere algorithmic …


The machine is not talking to you. You are talking to yourself.

At Futurism, Maggie Harrison discusses the reasons humans decide that AI is “alive.”

Maggie Harrison, a staff writer for Futurism, offers a no-nonsense talking-to for those who believe in the humanity of the chatbot LaMDA, whose supposed sentience was announced in June by Google software engineer Blake Lemoine. First, she notes, the idea isn’t even uncommon among software engineers. As Cade Metz wrote for The New York Times, many in the AI industry hold beliefs similar to Lemoine’s. One prominent inventor, Philip Bosua, told the Times he believes OpenAI’s GPT-3 (another language modeling system, like Google’s LaMDA) is also sentient. Yet another said that though he thinks GPT-3’s intelligence is somewhat “alien,” it “still counts.” There’s a clear, wide gap between those who think the machine is alive and the simple computer science backing those …


Google’s Chatbot LaMDA Sounds Human Because — Read the Manual…

What would you expect LaMDA to sound like? Whales? ET? I propose a test: “Human until PROVEN otherwise”

Recently, Google employee Blake Lemoine caused a media storm over the LaMDA chatbot he was working on, which he claims is sentient (that is, it feels things the way a human being does). A heavily edited transcript has been released that shows him and a collaborator having a very coherent conversation with LaMDA. Many have been quick to dismiss his claims about the chatbot’s sentience, accusing the Googler of falling prey to the Eliza effect: anthropomorphizing a probability distribution over words (and thus believing that he is talking to a human). The accusation is that Lemoine generated a large number of dialogs, then edited down the exchange to create a coherent narrative. Google placed Lemoine on leave, technically for breaking the non-disclosure agreement (NDA) that …