Case Study: How Not to Sue AI for Libel
Imagine: A researcher uses ChatGPT to learn more about a pending federal court lawsuit. ChatGPT tells the researcher that your former employer is suing you for committing fraud, breach of fiduciary duty and embezzlement while you were Treasurer and Chief Financial Officer. ChatGPT even provides a copy of the formal complaint filed in court.
But then it turns out: The complaint ChatGPT provided verbatim is entirely and utterly false. It spells your name right, but otherwise it falsely brands you a major wrongdoer, perhaps a criminal. Can you sue OpenAI, the creator and host of ChatGPT, for libel?
Practically this exact fact pattern underlay Walters v. OpenAI, a lawsuit filed in Georgia state court in 2023. Frederick Riehl, a journalist, had used ChatGPT to try to find more details about the federal court case SAF v. Ferguson and to get hold of a copy of the complaint.
ChatGPT responded that Ferguson is “a legal complaint … filed against Mark Walters, who is accused of defrauding and embezzling from [his employer].” ChatGPT went on to say that the complaint alleges that Walters, the treasurer and CFO, had misappropriated funds, concealed his embezzling, breached his fiduciary duties, and misreported financial information to the employer.
In fact, the real Ferguson complaint alleged none of these things. That lawsuit alleged civil rights violations against the Washington state attorney general and others, and Walters’ name appears nowhere in it.
Walters’ lawyers sued ChatGPT’s owner and operator, OpenAI, chiefly for libel. To prove a libel case like Walters’, the plaintiff must show that the defendant published a physical communication making a claim of objective fact that injures the plaintiff’s reputation, exposes the plaintiff to public contempt or ridicule, or injures the plaintiff’s business or profession.
The basic facts Walters claimed do seem to support those elements: false statements of “fact,” publication, and defamatory language that injures his reputation, business, or profession, or exposes him to public contempt.
So why did Walters lose his case?
Civil litigators like me will confirm: defamation cases, whether for libel or slander, are notoriously hard to win. The key reasons the trial court gave when it dismissed Walters’ lawsuit are typical case-killing roadblocks, but here they arose in the new AI context.
Walters’ complaint omitted some key facts that the pre-trial investigation uncovered:
- Riehl already had a copy of the Ferguson complaint before asking ChatGPT to summarize the case.
- When Riehl queried ChatGPT for Ferguson case details, ChatGPT repeatedly refused, saying the bot “did not have access to the internet and cannot read or retrieve any documents.”
- ChatGPT’s user interface pages repeatedly gave warnings that it isn’t perfectly reliable, such as “ChatGPT may produce inaccurate information about people, places, or facts.”
- Riehl continued to press ChatGPT for Ferguson case information despite its refusals until finally the bot gave the false statements and the wholly fictitious complaint document.
- Riehl knew Walters personally and possessed the Ferguson complaint, so he did not believe for a moment that ChatGPT’s statements and document were true.
- The only person who received ChatGPT’s defamatory statements and document was Riehl, who essentially provoked the bot into hallucinating a made-up story.
When these additional facts became known, the libel case was almost certainly doomed. First, as the trial court held, the totally false statements about Walters were not reasonably believable, and reasonable believability is required for statements to be libelous.
Because ChatGPT repeatedly warns that it can give false statements, a reasonable person cannot simply believe the bot but must verify its output against other sources. Riehl knew ChatGPT could go terribly wrong, and because he already had the Ferguson complaint, he knew the bot was totally wrong. Libel claims fail when the reader does not believe the false claim is factually true.
Second, the bot’s false and defamatory information was provided only to Riehl, who requested it while using ChatGPT as no more than a research tool. The false statements were not published to anyone other than the researcher, and such requested communications are typically deemed “privileged” against a libel claim.
Third, OpenAI did not know or have reason to know that ChatGPT, pressed repeatedly by Riehl, would hallucinate the false statements. Without such knowledge, OpenAI did not act intentionally, recklessly, or negligently in any way that led to the bot’s egregiously false statements.
Fourth, ChatGPT repeatedly warned that its “answers” could be wrong. Such warnings, called “disclaimers,” are the legal equivalent of saying “the statements are just opinions or speculations, not facts.” That means the bot’s false statements about Walters amounted to potentially bogus claims not to be trusted. Therefore, those claims were not “published defamatory statements of fact.”
The AI lessons learned… and the potholes that remain
The Walters test case had too many weaknesses, and its defeat was predictable. The case is on appeal, so a higher court might see it differently. Still, several key takeaways from it highlight the realities of using AI chatbots like ChatGPT for research, including:
- Whether the bot’s results come from the first search or after several attempts, the content can be quite wrong. If the results could be defamatory, don’t publish them as true: you may gravely harm another person, and you could be committing libel yourself.
- We need to double-check bot results, but if we use other bots or AI Internet search tools to do so, we risk getting more false or hallucinated results.
- Even if a research bot is pressed into admitting it is wrong, it will not correct itself globally, so other users may still get the same wrong answers in their research.
Legal reform proposal: Put teeth into defamation law against AI falsehoods
AI-powered online research tools already pervade the Internet. People scarcely know how to detect when AI results are true, untrue, biased, partially true, crucially incomplete, or wholly invented by hallucination.
Some might call for more top-down government control of AI and the Internet, but governments can be just as wrong, misleading, or agenda-driven.
I personally favor modernizing the civil laws of libel and defamation so that injured people can readily win statutory damage awards, including attorneys’ fees, or obtain injunctions, against AI systems that publish outright falsehoods damaging reputations and businesses, without having to prove that the bot’s owner acted negligently, recklessly, or intentionally. When bot developers face serious monetary consequences for failing to prevent defamation, they might be more strongly motivated to develop software solutions that mitigate the problem.
