3D illustration: robot eye (licensed via Adobe Stock)

Why Don’t Robots Have Rights? A Lawyer’s Response

Robots are hardware and software packages that lack a nature or any abilities outside of whatever their designers imagine

“Free the Robots!” “Equal Rights for Robots!” Or maybe: “Set Us Robots Free!”

Such future protest signs might well pop up on social media, to judge from “Why don’t robots have rights?” (Big Think, October 31, 2022). Writer Jonny Thomson worries that “future generations will look back aghast at our behavior” when humans can “no longer exploit or mistreat advanced robots,” as will presumably be the case in the 21st century. Dig into the article and get techno-whiplashed as Thomson suddenly starts talking about “the 22nd century [when robots] are our friends, colleagues, and gaming partners.”


Thomson’s article considers robot rights as analogous to animal rights. The summary asserts:

  • When discussing animal rights and welfare, we often reference two ideas: sentience (the ability to feel) and sapience (the capacity for complex intelligence).
  • Yet, when we discuss robots and advanced AI, both sentience and sapience are curiously absent from the debate.
  • We are on a technological threshold between “AI” and “dignified AI.” Maybe it’s time to talk more about robot rights.

Although Thomson’s article deploys the term “rights” and discusses animal rights and human rights, it doesn’t define the term. Bypassing the question of whether AI systems and robots could be deemed “persons,” the article focuses on the question: “What is it about robots that excludes them from rights, protections, and respect?” Or: At what point should humans treat robots “with respect and care”?

Think + Feel = Rights?

The article asserts that animal life forms range from amoebas to primates, and that their “rights” to respect, care, and protection increase along a spectrum defined by two factors: (1) sentience (the ability to feel) and (2) sapience (the capacity for complex intelligence). Thomson admits we cannot know whether an animal has sentience (feelings), but suggests humans should assume sentience and treat animals accordingly. From what we can observe about them, primates and mammals should be deemed high on the sentience scale. That factor supports recognizing their animal “rights.”

On the sapience scale, Thomson argues that we don’t know exactly how much complex intelligence an animal has, but we observe behaviors in animals consistent with intelligent activity. Moving up the scale from amoebas to fish, reptiles, birds, mammals, and primates, the behaviors we observe are consistent with increasingly complex intelligence. We should therefore assume animals deserve “rights” that increase with their estimated intelligence.

The more sentience and sapience a life form displays, the higher it “ranks” and the more deserving it is of “rights” to care, consideration, and respect. Assuming the abilities to feel and to act intelligently are the determinants of “rights,” Thomson’s article urges that AI systems (robots) be evaluated using the same two factors.

The proposal bakes in some hefty assumptions: (1) that robots can be built to become animal-like or human-like, with sophisticated generalized intelligence; and (2) that robot AI can be built to feel emotions and pain via software simulations of hormones and electrochemical reactions in an animal or human brain. Both assumptions are speculative, but let’s follow the logic.

Assume that in 50 years robots do have sapience, i.e., artificial general intelligence along the lines observed in animals and humans. Assume also that, because the robots act, speak, and use body language consistent with the ways we detect emotions in fellow humans, we decide their sentience is at least on par with that of mammals or primates. In Thomson’s view, we should then recognize the robots’ rights to care, consideration, and respect, analogous to our treatment of mammals, primates, newborn humans, and disabled or demented humans.

Do Rights Come from Human Subjective Estimations?

Has Thomson’s article pretty much set up a working and beneficial framework for robot rights? Consider first that the two factors that determine whether to recognize rights, sapience and sentience, are both subjective.

Deciding whether a robot is sapient comes down to a human’s evaluation of how “intelligent” the robot seems. Measuring human and animal intelligence requires humans to define the criteria and to decide how much weight to give the outcomes of observations and tests. The Scholastic Aptitude Test (SAT), for example, tests people’s knowledge and skill in defined categories that correlate statistically with success in college. The SAT is “objective” only in that it applies the same criteria to all test takers and derives from statistical measures.


A robot programmed to excel at the SAT is not necessarily as intelligent as a human who scores well, or even poorly. Many types of intelligence manifest entirely independently of college admissions and academic success; many are observable but not numerically measurable. The choice of sapience tests to apply to robots is thus itself subjective.

Deciding whether a robot is sentient comes directly from human observation of the robot’s verbal and nonverbal communication. Relying upon human evaluation of emotions and feelings is wholly subjective. Thomson’s article concedes that displays of emotion can be imitated by hardware and software, but asserts that animal and human emotions themselves are probably just electrochemical interactions, a kind of biological software. If so, who is to decide that a digital system’s emotions should be deemed less significant than an electrochemical system’s emotions?

The big question should be: Do we accept that the entitlement to “rights” should depend upon subjective human evaluations of sapience and sentience? And: Do we accept the notion that an entity can be entitled to “rights” depending on how humans evaluate its abilities to think and feel?

Rights Are Rooted in Objective Sources

In contrast to this subjective basis for rights stands an objective rights framework. The natural law worldview starts with objective facts. As philosopher John Locke put it: “Reason … teaches all Mankind, who will but consult it, that being all equal and independent, no one ought to harm another in his Life, Health, Liberty, or Possessions.” The source of reason, moral equality, and independence is a Creator, a source of creative intelligence. The Declaration of Independence put it similarly, identifying as “self-evident” truths “that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”

In parallel, the Judeo-Christian worldview starts by declaring that God created the material Universe and the Earth, and then created human beings in His image. That means human beings resemble God in important ways, represent God on Earth, and can love and have relationships with God, fellow humans, and other beings. Humans are thus God-like, not equal to God, but having many attributes conferred by God. Uniquely, humans were created “a little lower than the heavenly beings,” i.e., angels, and were “crowned with glory and honor.”

In the Judeo-Christian and natural rights view, a human being has rights simply by virtue of being human. The human right to life reflects the human’s origin as a specially created and designed being. The right to liberty flows from understanding that human life needs liberty to thrive and to love. The right to property enables humans to prosper materially during life.

All three fundamental rights are objective — they do not depend upon whether some other human being or machine decides that some humans aren’t sapient or sentient enough to deserve rights.

A just society protects these natural rights of every human, with penalties imposed upon violators of others’ rights. Respect and care for animals come from the same source: God’s creation and valuing of the animals, along with God’s command to steward the creation wisely and gratefully.

Robots Lack Natural Rights

Would robots qualify for rights protected by a natural law system, which in practice means rights protected by police forces and courts? No — for several reasons.

First, robots are not human, and nothing humans artificially make is equivalent to a human being.

Second, robots are not creations of God and thus, as best we can surmise, lack God’s concern for their existence.

Third, robots are hardware and software packages that lack a nature or any abilities beyond whatever their designers imagine. Rights to liberty and property, for example, are meaningless to robots. Any interest in life, liberty, or property — fundamental to humans and relevant in some ways to many animals — is lacking in robots, except insofar as their builders have implemented computations and behaviors that imitate such interest.

Robot Rights Downgrade Human Rights

What is dangerous in Thomson’s piece is the notion that an entity’s rights depend upon whether other human beings subjectively deem that entity smart enough and emotional enough. Thomson’s vision of near-human robots implies that human rights will likely then be defined by the same criteria as robot rights.

When some humans in power can deem otherwise peaceful humans as deserving fewer rights based upon subjective criteria, the concept of “rights” devolves to “acceptability” and “permissions.” Talk about dystopia…


Richard Stevens

Fellow, Walter Bradley Center on Natural and Artificial Intelligence
Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute’s Walter Bradley Center on Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington, D.C., taught legal research and writing at George Washington University and George Mason University law schools, and now specializes in writing dispositive motion and appellate briefs. He has authored or co-authored four books and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights, and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.
