Mind Matters Natural and Artificial Intelligence News and Analysis

AI expert: Stop Distinguishing Between AI, Human and Animal Minds

Aaron Sloman’s approach to minds sounds a bit like panpsychism — which is increasingly accepted in science — but there are differences

Philip Ball, author of The Book of Minds: How to understand ourselves and other beings, from animals to AI to aliens (University of Chicago Press, 2022), profiles University of Birmingham computer scientist Aaron Sloman, whose 1984 paper, “The structure of the space of possible minds” sought to account for human, animal, and AI minds as “behaving systems.” Along the way, Sloman came to a significant conclusion:

“We must abandon the idea that there is one major boundary between things with and without minds,” he wrote. “Instead, informed by the variety of types of computational mechanisms already explored, we must acknowledge that there are many discontinuities, or divisions within the space of possible systems: the space is not a continuum, nor is it a dichotomy.”

Philip Ball, “The study of nonhuman intelligence could be missing major insights” at Big Think (May 14, 2022)

That sounds like panpsychism, a view that is increasingly accepted in science. The panpsychist believes that all of nature, animate and inanimate, participates at some level in consciousness, though it is most highly developed in humans. But Sloman's approach effaces the distinction between animate entities like humans or mice and inanimate entities like computers, which is a somewhat different emphasis.

He is quite thoroughgoing in his approach to minds as systems:

“These explorations can be expected to reveal a very richly structured space,” Sloman wrote, “not one-dimensional, like a spectrum, not any kind of continuum. There will be not two but many extremes.” These might range from mechanisms so simple – like thermostats or speed controllers on engines – that we would not conventionally liken them to minds at all, to the kinds of advanced, responsive, and adaptive behaviour exemplified by simple organisms such as bacteria and amoebae. “Instead of fruitless attempts to divide the world into things with and things without the essence of mind, or consciousness,” he wrote, “we should examine the many detailed similarities and differences between systems.”

Philip Ball, “The study of nonhuman intelligence could be missing major insights” at Big Think (May 14, 2022)

So, in addition to the war on human exceptionalism, we have a dissent from “biological exceptionalism.”

Sloman believes that those who seek artificial general intelligence (AGI) — essentially human-like intelligence in computers — would do best to study how cognition evolved in a variety of animal species believed to be intelligent, such as crows, elephants, and whales. If they can replicate that process, the next step would be to produce a human-like mind.

Two difficulties arise. First, life forms are not merely alive; they seek to go on living. Can we replicate a natural drive like that in an artificial, inanimate entity? We could try programming it into AI. But we may find that there is a difference between what we can program into AI and what we have never needed to program into life forms — and can’t easily prevent. For some species, like the octopus and other cephalopods, for example, the drive to survive may have naturally increased their intelligence as part of a package of survival solutions after they had absorbed their shells — though we do not know how they managed it.

Second, it’s not clear how closely human intelligence is linked to the hardware of the human brain. As has been noted here before, people with split brains or half a brain (or less) can function normally. That’s not very machine-like at all and seems to defy conventional nature. These instances better suit a model in which the human mind is immaterial — but that raises some difficult issues if the goal is to instantiate it in a computer.

So the problem is not simply that there are different kinds of minds but that the minds may not even be comparable. A smart crow may ace an experiment, but he didn’t design it and he isn’t writing the paper. He isn’t even curious about it; he just wants the food reward. And a computer that is missing half its hardware probably wouldn’t work at all.

Sloman’s ideas are certainly stimulating but it’s not clear where they would lead. We come up against the hard reality that the human mind really is different.


You may also wish to read: Woman missing key language part of brain scores 98% in vocab test Missing her left temporal lobe, she was told for years by doctors that her brain did not make sense. In a recent journal paper, she was permitted to write a personal note, asking “Please do not call my brain abnormal, that creeps me out. My brain is atypical.”


