Science Writer: No Way To Tell If AI Is Conscious
Absent a definition of consciousness, it might not be possible to prove extravagant claims wrong
Addressing the extravagant claims that “conscious AI” is just around the corner, science writer Lindsey Laughlin offers a cautionary note at Ars Technica: we don’t really know what consciousness even is, even in humans:
There’s only one way we can know: by empirically identifying how consciousness works in organic lifeforms and developing a method by which we can consistently recognize it. We need to understand consciousness in ourselves before we have any hope of recognizing its presence in artificial systems. So before we dive deep into the complex consequences of sentient silicon and envision a future filled with conscious computers, we must resolve an ancient question: What is consciousness, and who has it?
Lindsey Laughlin, “Could AIs become conscious? Right now, we have no way to tell,” Ars Technica, July 10, 2024
There are, as she notes, many theories of consciousness. Three she looks at are Integrated Information Theory (IIT), championed by Christof Koch; Michael Graziano’s Attention Schema Theory (AST); and Bernard Baars’s Global Workspace Theory (GWT).
Koch and his theory became controversial last year due to accusations of panpsychism (the view that everything is conscious), although that view is becoming much more acceptable in science. Meanwhile, a contest between IIT and GWT seems to have ended in a sullen draw. So far, Graziano’s Attention Schema Theory, “a novel way to explain the brain basis of subjective awareness in a mechanistic and scientifically testable manner,” seems to have stayed out of trouble…
But, as neurosurgeon Michael Egnor noted here earlier this year, “in so many discussions of consciousness we rarely find an attempt to define the quality with any rigor. We can hardly explain it if we can’t even define it.”
He adds,
I think that much of our talk about consciousness is what philosopher Ludwig Wittgenstein (1889–1951) called a language game. Language is a mutually-agreed-upon set of rules for usage of words. Language games are attempts to talk about things by using social conventions about meanings of words and about ideas the words refer to. Yet language games often lead us away from clarity and truth. We become caught up in rhetorical acrobatics — talk about talk, as philosophy is sometimes described.

Michael Egnor, “Leading neuroscientist wavers on physical view of consciousness,” Mind Matters News, January 29, 2024
The difficulty of defining consciousness could very well be one reason why tech moguls can get away with extravagant claims about conscious AI. Absent a definition, who could prove them wrong?