Best friend. Joyful happy boy smiling while hugging a robot / Adobe Stock licensed

Can Robots Be Programmed To Care About Us?

Some researchers think it is only a matter of the right tweaks

Well, that’s the hope in some quarters. It’s a curious blend of forlorn hope fueled by half-acknowledged hype, resolute denial of the most serious problems, and sometimes systematic confusion as to what, precisely, we are talking about.

Recently we heard about a possible advance in getting robots to care:

“Today’s robots lack feelings,” Man and Damasio write in a new paper (subscription required) in Nature Machine Intelligence. “They are not designed to represent the internal state of their operations in a way that would permit them to experience that state in a mental space.”

So Man and Damasio propose a strategy for imbuing machines (such as robots or humanlike androids) with the “artificial equivalent of feeling.” At its core, this proposal calls for machines designed to observe the biological principle of homeostasis. That’s the idea that life must regulate itself to remain within a narrow range of suitable conditions — like keeping temperature and chemical balances within the limits of viability. An intelligent machine’s awareness of analogous features of its internal state would amount to the robotic version of feelings.

Tom Siegfried, “A will to survive might take AI to the next level” at ScienceNews (November 10, 2019)

Here’s the paper.
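
To see what is actually being proposed, it helps to strip the idea down. In engineering terms, a homeostatic robot is one that monitors a few internal variables and acts to keep them inside a viable range. Here is a minimal sketch in Python; the variables and thresholds are our own illustrative assumptions, not Man and Damasio’s design:

```python
# Toy sketch of a homeostatic control loop for a robot.
# Variable names and thresholds are hypothetical illustrations,
# not Man and Damasio's actual design.

from dataclasses import dataclass

@dataclass
class InternalState:
    battery: float       # 0.0 (empty) to 1.0 (full)
    motor_temp_c: float  # motor temperature in degrees Celsius

# Viable ranges the robot tries to stay within (homeostasis).
BATTERY_LOW = 0.2
MOTOR_TEMP_MAX = 70.0

def homeostatic_step(state: InternalState) -> str:
    """Choose an action that keeps internal variables in their viable range."""
    if state.battery < BATTERY_LOW:
        return "seek_charger"    # "existential" priority: restore energy
    if state.motor_temp_c > MOTOR_TEMP_MAX:
        return "rest_and_cool"   # avoid damage from overheating
    return "continue_task"       # internal state is viable; carry on

# Example: a depleted battery overrides the current task.
print(homeostatic_step(InternalState(battery=0.15, motor_temp_c=45.0)))  # seek_charger
```

Note that such a loop regulates numbers. Whether monitoring those numbers would amount to the machine feeling anything is precisely the question at issue.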

Homeostasis is a principle by which, for example, vast insect colonies govern themselves, becoming, as J. Scott Turner, author of Purpose and Desire, put it, a “giant crawling brain.” But the fact that one of the world’s experts on homeostasis warns against a reductionist approach to it in the very subtitle of his book, “What Makes Something ‘Alive’ and Why Modern Darwinism Has Failed to Explain It,” should give us pause before making incautious leaps of optimism about homeostasis in artificial things.

Man and Damasio are undeterred:

A robot capable of perceiving existential risks might learn to devise novel methods for its protection, instead of relying on preprogrammed solutions.

“Rather than having to hard-code a robot for every eventuality or equip it with a limited set of behavioral policies, a robot concerned with its own survival might creatively solve the challenges that it encounters,” Man and Damasio suspect. “Basic goals and values would be organically discovered, rather than being extrinsically designed.”

Tom Siegfried, “A will to survive might take AI to the next level” at ScienceNews (November 10, 2019)

Wait. The robot does not really exist as a unified self in the sense that a dog does. Only a conscious, unified self can experience an existential threat to survival, such as serious pain.

As so often happens, thinkers really go off the rails when they start to think about evolution. In this case, the hope is that robots will evolve to think like humans:

Devising novel self-protection capabilities might also lead to enhanced thinking skills. Man and Damasio believe advanced human thought may have developed in that way: Maintaining viable internal states (homeostasis) required the evolution of better brain power. “We regard high-level cognition as an outgrowth of resources that originated to solve the ancient biological problem of homeostasis,” Man and Damasio write.

Tom Siegfried, “A will to survive might take AI to the next level” at ScienceNews (November 10, 2019)

No, actually. Homeostasis can be maintained among life forms that have very limited individual thinking skills (termites come to mind). If we humans didn’t have the type of minds we do, we would still have homeostasis; we just wouldn’t do calculus or write screenplays. For homeostasis, our robot needs merely to be alive, not especially clever.

More recently, efforts are under way to make it seem like a robot feels pain by linking sense of touch to facial movement:

A robot with a sense of touch may one day “feel” pain, both its own physical pain and empathy for the pain of its human companions. Such touchy-feely robots are still far off, but advances in robotic touch-sensing are bringing that possibility closer to reality.

Sensors embedded in soft, artificial skin that can detect both a gentle touch and a painful thump have been hooked up to a robot that can then signal emotions, Minoru Asada reported February 15 at the annual meeting of the American Association for the Advancement of Science. This artificial “pain nervous system,” as Asada calls it, may be a small building block for a machine that could ultimately experience pain (in a robotic sort of way). Such a feeling might also allow a robot to “empathize” with a human companion’s suffering.

Laura Sanders, “Linking sense of touch to facial movement inches robots toward ‘feeling’ pain” at ScienceNews
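
What is described here is, at bottom, a mapping from skin-sensor readings to displayed expressions. A toy sketch in Python makes the point; the thresholds and expression labels are illustrative assumptions, not Asada’s actual system:

```python
# Toy sketch: map artificial-skin pressure readings to a displayed expression.
# Thresholds and expression labels are hypothetical, not Asada's actual system.

def expression_for_touch(pressure_kpa: float) -> str:
    """Return a facial expression label for a given skin pressure reading."""
    if pressure_kpa < 5.0:
        return "neutral"   # barely registered
    if pressure_kpa < 30.0:
        return "smile"     # gentle touch
    return "wince"         # painful thump

for reading in (2.0, 12.0, 80.0):
    print(reading, "->", expression_for_touch(reading))
```

Such a mapping produces the display of pain; nothing in it produces pain.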

Not only are such robots far off, but we have no idea how to get there, because programming a robot like Affetto to mimic reactions is not the same thing as generating actual reactions. Affetto, however convincing its performance in an Uncanny Valley sense, is not feeling anything.

Damasio, asked for comment, admits as much. He tells Laura Sanders at ScienceNews:

“It’s a device for communication of the machine to a human.” While that’s an interesting development, “it’s not the same thing” as a robot designed to compute some sort of internal experience, he says.

Laura Sanders, “Linking sense of touch to facial movement inches robots toward ‘feeling’ pain” at ScienceNews

To get some idea of real feelings in a life form, by comparison, try waving a leash in front of a housebound dog.

With claims for robot “altruism,” a different sort of confusion is at work. Back in 2011, we were informed that robots in a Swiss lab were programmed to follow Hamilton’s rule, a theory of kin selection in biology:

It is named after biologist W.D. Hamilton who in 1964 attempted to explain how ostensibly selfish organisms could evolve to share their time and resources, even sacrificing themselves for the good of others. His rule codified the dynamics — degrees of genetic relatedness between organisms, costs and benefits of sharing — by which altruism made evolutionary sense. According to Hamilton, relatedness was key: Altruism’s cost to an individual would be outweighed by its benefit to a shared set of genes…

In the new study, inch-long wheeled robots equipped with infrared sensors were programmed to search for discs representing food, then push those discs into a designated area. At the end of each foraging round, the computerized “genes” of successful individuals were mixed up and copied into a fresh generation of robots, while less-successful robots disappeared from the gene pool.

Each robot was also given a choice between sharing points awarded for finding food, thus giving other robots’ genes a chance of surviving, or hoarding. In different iterations of the experiment, the researchers altered the costs and benefits of sharing; they found that, again and again, the robots evolved to share at the levels predicted by Hamilton’s equations.

Brandon Keim, “Robots Evolve Altruism, Just as Biology Predicts” at Wired (May 4, 2011)

Paper. (open access)
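
For readers who want the mechanics: Hamilton’s rule says that sharing pays when rB > C, that is, when relatedness times the benefit to the recipient outweighs the cost to the sharer. The experiment can be caricatured in a few lines of Python; the parameters and the fitness rule are illustrative assumptions, not the Swiss team’s code:

```python
# Heavily simplified sketch of the kind of experiment described above:
# each robot's "gene" is a probability of sharing foraging points.
# Parameters and the fitness rule are illustrative assumptions,
# not the Swiss team's actual code.

import random

RELATEDNESS = 0.5   # r: genetic relatedness between robots in a group
BENEFIT = 3.0       # B: benefit a shared point confers on the recipient
COST = 1.0          # C: cost to the sharer of giving a point away
POP = 100
GENERATIONS = 200

def fitness(share_prob: float) -> float:
    """Inclusive-fitness payoff: sharing costs C per shared point,
    but returns r * B indirectly via relatives carrying the same gene."""
    baseline = 1.0  # points kept from foraging regardless of sharing
    return baseline + share_prob * (RELATEDNESS * BENEFIT - COST)

def evolve() -> float:
    genes = [random.random() for _ in range(POP)]  # each gene = probability of sharing
    for _ in range(GENERATIONS):
        # Select parents in proportion to fitness, copy genes with a little mutation.
        parents = random.choices(genes, weights=[fitness(g) for g in genes], k=POP)
        genes = [min(1.0, max(0.0, g + random.gauss(0.0, 0.02))) for g in parents]
    return sum(genes) / POP

# Because r * B > C here (0.5 * 3.0 > 1.0), sharing is favored and the
# average sharing probability climbs, as Hamilton's rule predicts.
print("average sharing probability:", round(evolve(), 2))
```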

Given that the robots were programmed to do those things, it’s no surprise that they did them. Whether life forms in nature actually behave that way has, however, been contested, and Keim admits at Wired:

In some ways, the rule and its accompanying theory of kin selection is contested. Some scientists have used it to extrapolate too easily from insects to people, and some researchers think it overstates the importance of relatedness.

Brandon Keim, “Robots Evolve Altruism, Just as Biology Predicts” at Wired (May 4, 2011)

Indeed. One is reminded of Arthur’s sardonic comment in Camelot: “The adage ‘blood is thicker than water’ was invented by undeserving relatives.” Altruistic robots may have some applications in swarm robotics, but what about their relevance to humans?

As noted in an earlier article, two definitions of altruism are in play and often conveniently confused: Hamilton’s definition, which originated to account for the behavior of social insects, versus human decisions to show compassion. The confusion bolsters the cause of naturalism (the belief that nature is all there is), often called “materialism,” in the social sciences; hence it persists and continues to confuse.

One could probably program a robot to behave like a social insect, to at least some extent. However, no one has found a way to “program” compassion in humans, never mind robots.

Feelings, whether for one’s own sufferings or, by extrapolation, those of others, are intrinsic to being alive. Thus, it is unclear, even conceptually, how to produce them in an artificial entity that by its very nature is not alive. But we can expect to hear many more attempts to talk around that problem in the near future.


See also: How far have we come in giving robots feelings? Pretty far, in our own imagination.

and

Are infants born kind? New research suggests yes. The trouble is, the research is haunted by conflicting definitions of altruism. If human infants show apparent intellectual qualities like compassion earlier than we might have expected but chimpanzees don’t, we must accept that humans are fundamentally different from chimpanzees.


Denyse O'Leary

Denyse O'Leary is a freelance journalist based in Victoria, Canada. Specializing in faith and science issues, she is co-author, with neuroscientist Mario Beauregard, of The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul; and with neurosurgeon Michael Egnor of the forthcoming The Human Soul: What Neuroscience Shows Us about the Brain, the Mind, and the Difference Between the Two (Worthy, 2025). She received her degree in honors English language and literature.
