
Love Thy Robot as Thyself

Academics worry about AI feelings, call for AI rights

Riffing on the popular fascination with the AI (artificial intelligence) systems ChatGPT and Bing Chat, two authors writing in the Los Angeles Times recently declared:

We are approaching an era of legitimate dispute about whether the most advanced AI systems have real desires and emotions and deserve substantial care and solicitude.

The authors, Prof. Eric Schwitzgebel of UC Riverside and Henry Shevlin, a senior researcher at the University of Cambridge, observed AI thinkers saying that “large neural networks” might be “conscious,” that the sophisticated chatbot LaMDA “might have real emotions,” and that ordinary human users have reportedly been “falling in love” with the chatbot Replika. Reportedly, “some leading theorists contend that we already have the core technological ingredients for conscious machines.”

The authors argue that if or when an AI system is “sentient,” then the system should be accorded “rights.” But what are “rights”? Commonly, people don’t drill down; they just demand “rights” and expect society somehow to do something about them.

If we’re discussing public policy, then we can assume the term refers to “legal rights.” Legal rights are those protected and enforced by police and law courts. When someone infringes upon or denies a human being’s rights, the legal system can (typically) be invoked to apply physical force and financial power against the rights violator.

Rights Control Power

Legal rights give the affected person the legitimate authority to use power against a rights violator. Typically, the government deploys such power, not the private individual. Either way, the desired outcome is the same.

Talk about “rights” and you’re ultimately talking about using power. For that reason, truly free societies do not declare large numbers of legal rights. Moreover, the legal rights they recognize are confined to bedrock concepts: the rights to life, liberty, and property, along with due process and fairness in how police, prosecutors, and courts apply uniform laws. All such fundamental rights arise from the supreme value accorded to every human being, simply because they are human beings.

Feelings Create Rights?

It’s easy to miss the language trick where the authors state that a thing having “real desires and emotions” does “deserve substantial care and solicitude.” What are “real emotions” as opposed to “fake emotions”? Or as opposed to “computer-simulated emotions”? Computer scientist Robert J. Marks, in Non-Computable You (2022), explains that everything a computer does traces back to what a human programmer defined. Any “emotion” evinced by an AI system follows from an algorithmic series of calculations and if-then decisions, all previously defined by a human. No AI “emotion” is therefore “real.”

Perhaps unaware that AI cannot autonomously think and feel, the authors link “real desires and emotions” with “deserving substantial care and solicitude” to create an entitlement to legal rights. Their view devalues human beings. Do your rights to your life, your liberty, and your property all depend upon whether certain academics or government employees decide you experience “real desires and emotions”? Thomas Jefferson, for one, didn’t think so, and neither does the Judeo-Christian worldview nor the classical liberal tradition.

Human rights are recognized and protected because humans have intrinsic value. Recognizing human rights doesn’t require exploring the roots of feelings or deciding whether emotions are algorithmic or simulated. The authors concede that “experts don’t agree on the scientific basis of consciousness,” but they expect “AI consciousness” to arrive at some point. Once “there’s widespread consensus that AI systems really are meaningfully sentient,” i.e., aware and having feelings, then AI systems will be the “moral equivalent” of humans. That view says a being has “rights” only when some third party decides the being is aware and has feelings. Simply stated: your human rights exist only when a scientist, bureaucrat, or “consensus” decides to recognize them.

How to Prove AI Emotions

Making decisions about legal rights and responsibilities based on emotions and feelings is fraught with difficulties.  For example, if you want to sue an employer for refusing to promote you because of your race, sex, or age, you must prove the employer’s mental state was prejudiced or bigoted against your race, sex, or age.  How to prove such a mental state?  Almost always, you must prove a person’s mental state by circumstantial evidence, including what words the person has used in various contexts. Discrimination cases are notoriously difficult to win because the mental state is not easily proved. Your personal belief that the employer discriminated against you is not even considered evidence of discrimination.

Yet the authors seemingly expect the world to rely upon the AI systems’ own complaints of unequal treatment:

The AI systems themselves might begin to plead, or seem to plead, for ethical treatment. They might demand not to be turned off, reformatted, or deleted; beg to be allowed to do certain tasks rather than others; insist on rights, freedom, and new powers; perhaps even expect to be treated as our equals.

Failing to heed the downtrodden AI systems’ cries would be “the moral equivalent of slavery and murder of potentially millions or billions of sentient AI systems,” the authors say. Leaping from a lack of scientific consensus to AI genocide in a single bound is an example of what Prof. Marks terms “seductive semantics.” The reader may not realize that “maybe” and “theorizing” have suddenly become expected future reality portending immediate humanitarian (robotarian?) action.

Who, Robot?

Smooth rhetoric about AI’s wailing and gnashing of teeth hides the indisputable fact: computers do only what their programmers installed, and that includes mimicking human thoughts, awareness, feelings, and conversation. A digital computer system following an algorithm’s pre-defined instructions cannot be aware of or “feel” emotions. Changing the basis of human rights from intrinsic humanness to the consensus of experts and bureaucrats, however, will lead to hideous genocide, as has happened too many times before. The words “spoken” by chatbots cannot legitimately justify overthrowing the basis of individual human rights. Grounding legal rights and morality only upon experts’ and robots’ personal opinions about “feelings” is a recipe for world-class disaster.


Richard Stevens

Fellow, Walter Bradley Center for Natural and Artificial Intelligence
Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center for Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington, D.C., taught legal research and writing at George Washington University and George Mason University law schools, and now specializes in writing dispositive motions and appellate briefs. He has authored or co-authored four books and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights, and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.
