
Can Professor Turley Sue ChatGPT for Libel?

The world wide web of reputation destruction is here

Isn’t there a law against falsely accusing people of serious crimes or misconduct and then publishing damaging lies to the world? Yes. For centuries in English-speaking countries, the victim of such lies could sue the false accuser in civil court for libel per se. Nowadays, libel and its spoken cousin, slander, are grouped together as defamation. Under American law, it isn’t easy to bring and win a lawsuit even when your case seems strong, but at least the law provides some recourse for defamation.

How about when the false accuser is ChatGPT? Jonathan Turley, the nationally known George Washington University law professor and commentator, woke up one morning to discover:

ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper.

From these facts, can Prof. Turley win a libel case against ChatGPT? Under District of Columbia law (and in most states), because he is a “public figure” rather than a private individual, Turley would have to name a legally responsible defendant, and then he’d have to prove:

(1) the defendant published a false statement about Turley;

(2) the false statement was defamatory;

(3) Turley suffered actual injury as a result; and

(4) by clear and convincing evidence, that the defendant published the statement either knowing that the statement was false, or with reckless disregard of whether it was false or not.     

Easy Part: Proving Statements False, Defamatory, and Published

Consider the four elements of proving libel. The first element seems obviously provable: there were false statements published about Turley. Once a statement is communicated to a third person, it is considered “published,” and ChatGPT did communicate the statement to people. Turley identified the key falsehoods: (1) a sexual harassment allegation made on (2) a trip while (3) he was teaching at a certain school, all reported in (4) a newspaper article.  Items (1) through (4) are entirely untrue.

The second element appears clearly provable as well. A statement is defamatory “if it tends to injure a person in his or her trade, profession or community standing, or lowers him or her in the estimation of the community.” Turley reported that ChatGPT’s statement said he had been accused of sexual harassment when no such accusation was ever made. Even a good-faith allegation of sexual harassment against a well-known public figure like Prof. Turley would be defamatory, because the allegation would tend to injure him in his profession and in the community. Indeed, if the accusation alleged an actual criminal act and were untrue, it would be libelous on its face.

The third element to prove would be Turley’s actual injury resulting from the false statements. Under traditional Anglo-American common law libel rules, the courts would presume a person is injured by such a scandalous, damaging accusation. Under current law, however, a public figure faces a tougher battle to prove “actual damage” from a published defamatory statement. Consider Presidents Trump and Biden. Both men are attacked daily, called all sorts of names, and accused of all sorts of things; no libel lawsuits follow. Nevertheless, accusing a public figure like Turley of specific heinous misconduct might provably damage his ability to obtain or retain business relationships that depend upon his maintaining stellar personal character.

Can the Mindless Bot Know or Care?

The fourth element presents the steepest challenge: Prof. Turley would have to prove that ChatGPT published the defamatory accusation either “knowing” it was false, or recklessly not caring whether it was true. It is already tricky to sue ChatGPT itself, since the defendant in a successful civil suit has always been a human individual or an entity (a corporation, government, or other legal “person”). ChatGPT would not be a traditionally acceptable defendant.

Moreover, to prove defamation, Turley, as a public figure, has to show ChatGPT knew it was publishing a false defamatory statement. The other avenue would be proving ChatGPT recklessly didn’t care whether it published a false accusation of serious misconduct.

Hold it right there. Can ChatGPT be held to “know” anything at all? Ask ChatGPT – the bot identifies itself as

a large language model trained by OpenAI … I am designed to understand natural language and generate responses to various prompts and questions. My purpose is to assist users in generating human-like text based on their input.

ChatGPT does not claim to be a truth-teller. It does not claim to know anything, only to generate “human-like text.” If ChatGPT cannot know anything, then it cannot be liable for publishing text about which it knows nothing.  ChatGPT doesn’t even know what its published sentences mean, let alone whether they are true.

Consider the other approach. Can ChatGPT be held to have acted “recklessly” by publishing a false defamatory accusation without caring whether it’s true? As a “large language model” that generates “human-like text” based upon human inputs, ChatGPT is not a “caring” machine. It manipulates text, cleverly for sure, but without consciousness of the meanings of its sentences or of the harms they might cause to human feelings and lives.
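To make the talk of text generation concrete, here is a minimal sketch (hypothetical Python with an invented toy vocabulary and made-up probabilities, not OpenAI’s actual code) of the loop at the heart of a language model: each next word is sampled from a probability distribution, and nothing in the loop consults facts or tests truth.

```python
import random

# Toy stand-in for one step of a language model: given the text so far,
# return a probability distribution over candidate next words.
# (Invented words and numbers; a real model has billions of learned
# parameters, but the structure of the loop is the same.)
def next_token_probs(context: str) -> dict[str, float]:
    return {"was": 0.4, "allegedly": 0.3, "never": 0.2, "famously": 0.1}

def generate(prompt: str, steps: int = 5) -> str:
    text = prompt
    for _ in range(steps):
        probs = next_token_probs(text)
        # Sample the next word in proportion to its probability.
        # Note what is absent: no lookup of facts, no check of sources,
        # no test of whether the growing sentence is true.
        word = random.choices(list(probs), weights=list(probs.values()))[0]
        text += " " + word
    return text

print(generate("The professor"))
```

The model picks words because they are statistically plausible continuations, not because they correspond to reality.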

Additionally, unlike the elements in most civil claims that require showing the fact is “more likely true than not true” (aka the “preponderance of the evidence” test), the fourth element of the defamation claim must be proved by “clear and convincing evidence.” Clear and convincing evidence is evidence that produces in the judge or jury’s mind a firm belief or conviction that the given fact or conclusion is true – a much higher standard of persuasion.

To hold ChatGPT liable under current law for publishing scandalous lies about public figures, courts would have to hold ChatGPT responsible for figuring out whether every sentence it publishes is true. As powerful as ChatGPT is, no one contends it is anywhere near powerful enough to ferret out whether any given sentence is factually true. Nor can ChatGPT be expected to take care not to publish statements recklessly, since it doesn’t know whether any statement made or written by anyone, anywhere, is true.

“It Wasn’t Me – It’s the Machine’s Fault!”

The law of defamation confers no remedy on a person like Prof. Turley who is defamed internationally – if ChatGPT is considered the author and publisher of the false defamatory statements. But ChatGPT was programmed by humans. Can the system designers and programmers be held responsible for every sentence ChatGPT publishes – given that ChatGPT is not asserting anything to be true?

ChatGPT’s human creators cannot know what inputs will go into ChatGPT’s algorithms, because the inputs arrive after the software is finished and running. The human creators cannot foresee any given utterance from the bot. All they know is that the bot will generate human-like text.
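A second toy sketch (again hypothetical, not ChatGPT’s real mechanics) illustrates the point: both the user’s prompt and the random sampling enter the picture only at run time, after the programmers have shipped the code, so the authors cannot predict any particular output.

```python
import random

# The program is fixed when it ships, yet its author cannot predict any
# particular output: the prompt comes from a future user, and the random
# draw differs on every call. (Invented continuations for illustration.)
CONTINUATIONS = ["was honored.", "retired.", "was accused.", "resigned."]

def bot_reply(user_prompt: str) -> str:
    return user_prompt + " " + random.choice(CONTINUATIONS)

# Same prompt, three runs, potentially three different "facts."
for _ in range(3):
    print(bot_reply("The professor"))
```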

As long as people continue to believe that computer-delivered information is true and accurate, they will receive false defamatory statements from powerful bots that give the impression of truth. ChatGPT and its e-brethren will inevitably become the world’s worst and most devastating defamation generators.

Even though proving defamation is easier when the victim is a private individual rather than a public figure, standard defamation law still looks at what the author and publisher knew, or should have known or done, to avoid publishing a defamatory falsehood. The bots and search engines do not know the truth; they know only text and numbers. Everyday defamation law offers little recourse against online computer defamation.

Next to ponder: Must this problem be solved by governments and courts? Or would a better solution arise from humans rejecting AI supremacy? 


Richard Stevens

Fellow, Walter Bradley Center for Natural and Artificial Intelligence
Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center for Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington D.C., taught legal research and writing at George Washington University and George Mason University law schools, and now specializes in writing dispositive motion and appellate briefs. He has authored or co-authored four books, and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights, and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.
