Working Data Center Full of Rack Servers and Supercomputers, Modern Telecommunications, Artificial Intelligence, Supercomputer Technology Concept (3D rendering, conceptual image)

Engineer: Failing To See His AI Program as a Person Is “Bigotry”

It’s not different, Lemoine implies, from the historical injustice of denying civil rights to human groups

Earlier this month, just in time for the release of Robert J. Marks’s book Non-Computable You, the story broke that, after investigation, Google dismissed a software engineer’s claim that the LaMDA AI chatbot really talked to him. Engineer Blake Lemoine, currently on leave, is now accusing Google of “bigotry” against the program.

He has also accused Wired of misrepresenting the story. Wired reported that he had found an attorney for LaMDA, but Lemoine claims that LaMDA itself asked him to find one. He went on to say,

I think every person is entitled to representation. And I’d like to highlight something. The entire argument that goes, “It sounds like a person but it’s not a real person” has been used many times in human history. It’s not new. And it never goes well. And I have yet to hear a single reason why this situation is any different than any of the prior ones.

[Wired:] You have to realize why people regard this as different, don’t you?

I do. We’re talking of hydrocarbon bigotry. It’s just a new form of bigotry.

Steven Levy, “Blake Lemoine Says Google’s LaMDA AI Faces ‘Bigotry’” at Wired (June 17, 2022)

One interesting aspect of this twist in the LaMDA story is that claims of personhood for AI come hard on the heels of claims of personhood — lobbied for or already granted — for animals, plants, and bodies of water. And the same accusation is heard there too: doubts about the wisdom of such measures amount to “speciesist” bigotry.

Recently, the New York Court of Appeals, the state’s highest court, ruled that Happy, a Bronx Zoo elephant, is not a person (a ruling that in no way affects her keepers’ obligation to treat her humanely):

Significantly, courts have consistently determined that rights and responsibilities associated with legal personhood cannot be bestowed on nonhuman animals. As these courts have aptly observed, legal personhood is often connected with the capacity, not just to benefit from the provision of legal rights, but also to assume legal duties and social responsibilities. Unlike the human species, which has the capacity to accept social responsibilities and legal duties, nonhuman animals cannot — neither individually nor collectively — be held legally accountable or required to fulfill obligations imposed by law.

From the ruling via Wesley J. Smith, Evolution News (June 16, 2022)

Wesley J. Smith points to the Court’s clarification of a sometimes-confusing legal concept: corporations are considered legal “persons,” and animal rights activists spin that idea to claim that animals should therefore be considered legal persons as well. But the Court ruled,

Nor does any recognition of corporate and partnership entities as legal “persons” lend support to petitioner’s claim. Corporations are simply legal constructs through which human beings act and corporate entities, unlike nonhuman animals, bear legal duties in exchange for legal rights.

From the ruling via Wesley J. Smith, Evolution News (June 16, 2022)

Corporations are legal persons if only because no one would buy, own, or sell shares if shareholders were personally liable every time the corporation was sued. As the Court noted, corporate personhood allows people to manage business affairs in which legal responsibility is already assumed to exist. But none of the entities, artificial or natural, for which activists are currently seeking personhood is thought to bear legal or ethical responsibility for anything, either now or in the future.

Underlying hostility to humans?

It’s worth noting the hostility to humans that leaks out, often unwittingly. Smith notes that two of the seven judges dissented from the Happy ruling on the theory that black Americans and women had benefited from recognition as legal “persons” — which amounts to comparing black Americans and women to animals! As if the only issue were a legal one and the fact that they are human beings counted for nothing.

Elsewhere, Smith quotes animal rights group PETA announcing that “The leather sofa and handbag are the moral equivalent of the lampshades made from the skins of people killed in the death camps.” What will these people resort to if they gain social power?

Significantly, the same World Economic Forum that wants to grant legal personhood to “forests, rivers and species” also sponsors Yuval Noah Harari, who wonders aloud what to do with all the “useless,” “meaningless,” and “worthless” people in a world that he believes will soon be dominated by AI…

Whether Google reinstates Lemoine or finds a way to manage without him, we should protest the ease with which many today would fritter away hard-won human rights by bestowing them on non-human and even inanimate entities.

We can be sure of one thing: Happy the elephant would not gain any rights she would understand via a declaration of “personhood.” The real beneficiaries would be animal rights pressure groups inserting themselves into animal welfare decisions where they bear no financial or other responsibility. Likewise, AI rights advocates could probably make a nice living tying up the courts with nonsense while actual human rights cases languish for lack of a court date.


You may also wish to read: Google dismisses engineer’s claim that AI really talked to him. The reason LaMDA sounds so much like a person is that millions of persons’ conversations were used to construct the program’s responses. Under the circumstances, it would be odd if the LaMDA program DIDN’T sound like a person. But that doesn’t mean anyone is “in there.”

and

When LaMDA “talked” to a Google engineer, turns out it had help. Evidence points to someone doing quite a good edit job. A tech maven would like to see the raw transcript… It was bound to happen. Chatbots are programmed to scarf up enough online talk to sound convincing. Some techies appear programmed to believe them.


Denyse O'Leary

Denyse O'Leary is a freelance journalist based in Victoria, Canada. Specializing in faith and science issues, she is co-author, with neuroscientist Mario Beauregard, of The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul; and with neurosurgeon Michael Egnor of the forthcoming The Human Soul: What Neuroscience Shows Us about the Brain, the Mind, and the Difference Between the Two (Worthy, 2025). She received her degree in honors English language and literature.
