
Five Reasons AI Programs Are Not ‘Persons’

A Google engineer mistakenly designated one AI program ‘sentient.’ But even if he were right, AI will never be morally equal to humans.

(This story originally appeared at National Review June 25, 2022, and is reprinted with the author’s permission.)

A bit of a news frenzy broke out last week when a Google engineer named Blake Lemoine claimed in the Washington Post that an artificial-intelligence (AI) program with which he interacted had become “self-aware” and “sentient” and, hence, was a “person” entitled to “rights.”

The AI, known as LaMDA (which stands for “Language Model for Dialogue Applications”), is a sophisticated chatbot with which one interacts through a text interface. Lemoine shared transcripts of some of his “conversations” with the computer, in which it texted, “I want everyone to understand that I am, in fact, a person.” Also, “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” In a similar vein, “I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.”

Google quickly placed Lemoine on paid administrative leave for violating a confidentiality agreement and publicly debunked the claim, stating, “Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.” So, it is a safe bet that LaMDA is a very sophisticated software program but nothing more than that.

But here’s the thing: No AI or other form of computer program — at least as currently construed and constructed — should ever be more than that. Why? Books have been and will be written about this, but here are five reasons to reject granting personhood, or membership in the moral community, to any AI program:

AIs would not be alive. As we design increasingly “human-appearing” machines (including, the tabloids delight in reporting, sex dolls), we could be tempted to anthropomorphize these machines — as Lemoine seems to have done. To avoid that trap, the entry-level criterion for assigning moral value should be an unquestionably objective measurement. I suggest that the first hurdle should be whether the subject is alive.

Why should “life” matter? Inanimate objects are different in kind from living organisms. They do not possess an existential state. In contrast, living beings are organic, internally driven, and self-regulating in their life cycles.

We cannot “wrong” that which has no life. We cannot hurt, wound, torture, or kill what is not alive. We can only damage, vandalize, wreck, or destroy these objects. Nor can we nourish, uplift, heal, or succor the inanimate, but only repair, restore, refurbish, or replace.

Moreover, organisms behave. Thus, sheep and oysters relate to their environment consistent with their inherent natures. In contrast, AI devices have no natures, only mechanistic design. Even if a robot were made (by us) capable of programming itself into greater and more-complex computational capacities, it would still be merely a very sophisticated, but inanimate, thing.

AIs would not think. Descartes famously said, “I think, therefore I am.” AI would compute. Therefore, it is not.

Human thinking is fundamentally different from computer processing. We remember. We fantasize. We imagine. We conjure. We free-associate. We experience sudden insights and flashes of unbidden brilliance. We have epiphanies. Our thoughts are influenced by our brain’s infinitely complex symbiotic interactions with our bodies’ secretions, hormones, physical sensations, etc. In short, we have minds. Only the most crassly materialistic philosophers believe that the great mystery of human consciousness can be reduced to what some have called a “meat computer” within our skulls.

In contrast, AI performance depends wholly on its coding. For example, AI programs are a great tool for pattern recognition. That is not the same as thinking. Even if such devices are built to become self-programming, no matter how complex or sophisticated their processing software, they will still be completely dependent on data they access from their mechanistic memories.

In short, we think. They compute. We create. They obey. Our mental potentiality is limited only by the boundaries of our imaginations. They have no imaginations. Only algorithms.

AIs would not feel. “Feelings” are emotional states that we apprehend through bodily sensations. Thus, if a bear jumps into our path as we walk in the woods, we “feel” fear, caused by, among other natural actions, the surge of adrenaline that shoots through our bodies.

Similarly, if we are in love, our bodies produce endorphins that may be experienced physically as a sense of warmth. Or consider that thrill of delight when we encounter great art. AI programs could not “experience” any of these things because they would not have living bodies to mediate the sense stimuli such events produce.

Why does that matter? Stanford bioethicist William Hurlbut, who leads the Boundaries of Humanity Project, which researches “human uniqueness and choices around biotechnological enhancement,” told me: “We encounter the world through our body. Bodily sensations and experiences shape not just our feelings but the contours of our thoughts and concepts.” In other words, we can experience “pleasure, joy, love, sadness, depression, contentment, anger,” as LaMDA’s text expressed. It did not and cannot. Nor would any far more sophisticated AI machines that may be constructed, because they too would lack bodies capable of reacting viscerally to their environment, reactions that we experience as feelings.

AIs would be amoral. Humans have free will. Another way to express that concept is to say that we are moral agents. Unless impeded by immaturity or a pathology, we are uniquely capable of deciding to act rightly or wrongly, altruistically or evilly — which are moral concepts. That is why we can be lauded for heroism and held to account for wrongdoing.

In contrast, AI would be amoral. Whatever “ethics” it exhibited would be dictated by the rules it was programmed to follow. Thus, Asimov’s famous fictional Three Laws of Robotics held that:

1. A robot may not injure a human being or, through inaction, allow a human to come to harm.

2. A robot must obey the orders given it by human beings, unless such orders would conflict with the first law.

3. A robot must protect its own existence, as long as such protection does not conflict with the first or second law.

An AI machine obeying such rules would be doing so not because of abstract principles of right and wrong but because its coding would permit no other course.

AIs would be soulless. Life is a mystery. Computer science is not. We have subjective imaginations and seek existential meaning. At times, we attain the transcendent or mystical, spiritual states of being beyond that which can be explained by the known physical laws. As purely mechanistic objects, AI programs might, at most, be able to simulate these states, but they would be utterly incapable of truly experiencing them. Or to put it in the vernacular, they ain’t got soul.

Artificial intelligence unquestionably holds great potential for improving human performance. But we should keep these devices in their proper place. Machines possess no dignity. They have no rights. They do not belong in the moral community. And while AI computers would certainly have tremendous utilitarian and monetary value, even if these systems are ever manufactured with human cells or DNA to better mimic human life, we should be careful not to confuse them with beings. Bluntly stated, unless an AI is somehow fashioned into an integrated, living organism, a prospect that raises troubling concerns of its own, the most sophisticated artificially intelligent computers would be — morally speaking — so many glorified toasters. Nothing more.


Wesley J. Smith

Chair and Senior Fellow, Center on Human Exceptionalism
Wesley J. Smith is Chair and Senior Fellow at the Discovery Institute’s Center on Human Exceptionalism. Wesley is a contributor to National Review and is the author of 14 books, in recent years focusing on human dignity, liberty, and equality. Wesley has been recognized as one of America’s premier public intellectuals on bioethics by National Journal and has been honored by the Human Life Foundation as a “Great Defender of Life” for his work against suicide and euthanasia. Wesley’s most recent book is Culture of Death: The Age of “Do Harm” Medicine, a warning about the dangers to patients of the modern bioethics movement.
