
Tag: Sentience and AI

Analyst working with a computer in a business analytics and data management system to make a report with KPIs and metrics connected to a database. Corporate strategy for finance, operations, sales, marketing. (Adobe Stock)

The machine is not talking to you. You are talking to yourself.

At Futurism, Maggie Harrison discusses the reasons humans decide that AI is “alive.”

Maggie Harrison, a staff writer for Futurism, offers a no-nonsense talk to those who believe in the humanity of the chatbot LaMDA, as announced in June by Google software engineer Blake Lemoine. First, she notes, the idea isn’t even uncommon among software engineers: as Cade Metz wrote for The New York Times, many in the AI industry hold beliefs similar to Lemoine’s. One prominent inventor, Philip Bosua, told the Times he believes OpenAI’s GPT-3 (another language modeling system, like Google’s LaMDA) is also sentient. Yet another said that though he thinks GPT-3’s intelligence is somewhat “alien,” it “still counts.” There’s a clear, wide gap there between those who think the machine is alive and the simple computer science backing those… Read More ›

Chatbot / social bot with source code in the background

Google’s Chatbot LaMDA Sounds Human Because — Read the Manual…

What would you expect LaMDA to sound like? Whales? ET? I propose a test: “Human until PROVEN otherwise”

Recently, Google employee Blake Lemoine caused a media storm over the LaMDA chatbot he was working on, which he claims is sentient (it feels things as a human being does). A heavily edited transcript has been released that shows him and a collaborator having a very coherent conversation with LaMDA. Many have been quick to dismiss his claims about the chatbot’s sentience, accusing the Googler of falling prey to the Eliza effect: anthropomorphizing a probability distribution over words (thus believing that he is talking to a human). The accusation is that Lemoine generated a large number of dialogs, then edited down the exchange to create a coherent narrative. Google placed Lemoine on leave, technically for breaking the non-disclosure agreement (NDA) that… Read More ›