Philosopher warns of social conflict over AI consciousness
The Guardian’s UK technology editor, Robert Booth, reports that Jonathan Birch, a philosophy professor at the London School of Economics, is warning of significant social discord between those who think AI is sentient (capable of feeling) and those who do not.
His comments followed a prediction by a group of AI experts that AI consciousness could dawn by 2035.
Birch, an expert in animal sentience who has pioneered work leading to a growing number of bans on octopus farming, was a co-author of a study involving academics and AI experts from New York University, Oxford University, Stanford University and the Eleos and Anthropic AI companies that says the prospect of AI systems with their own interests and moral significance “is no longer an issue only for sci-fi or the distant future”.
They want the big tech firms developing AI to start taking the issue seriously by assessing the sentience of their systems: whether their models are capable of happiness and suffering, and whether they can be benefited or harmed.
“I’m quite worried about major societal splits over this,” Birch said. “We’re going to have subcultures that view each other as making huge mistakes … [there could be] huge social ruptures where one side sees the other as very cruelly exploiting AI while the other side sees the first as deluding itself into thinking there’s sentience there.”
— “AI could cause ‘social ruptures’ between people who disagree on its sentience,” The Guardian, November 17, 2024
In short, there will be AI rights groups along the lines of animal rights groups, opposing the exploitation of AI.
But now really…
It’s helpful to keep in mind that no one has any idea what human consciousness — as opposed to intelligence — is or how it arises. So how could we know that machines will be conscious by 2035?
Actually, theorists have been backing away from physicalism in recent years. And physicalism — the view that the mind is merely the actions of neurons in the brain — must be the basic assumption behind claims about AI consciousness.
As far as sentience (the capacity for suffering) is concerned, we can look at issues around animal suffering. Philosopher Thomas Nagel famously asked whether there is anything that it is like to be a bat. If there is, then the bat has sentience, and suffering is possible.
So, is there anything that it is like to be millions of lines of code? One rather hopes that the activists will find other causes instead. There is no use dividing society over nonsense.