
Let’s Apply Existing Laws to Regulate AI

No revolutionary laws needed to fight harmful bots

In a recent article, Professor Robert J. Marks reported that artificial intelligence (AI) systems had made false reports or given dangerous advice:

  • The Snapchat ChatGPT-powered AI feature “told a user posing as a 13-year-old girl how to lose her virginity to a 31-year-old man she met on Snapchat.”
  • Snapchat’s ChatGPT reportedly advised a user posing as age 15 how to have an “epic birthday party” by giving “advice on how to mask the smell of alcohol and pot.”
  • When a 10-year-old child asked Amazon’s Alexa for a “challenge to do,” Alexa reportedly suggested: “Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.”
  • Jonathan Turley, the nationally known George Washington University law professor and commentator, discovered he had been terribly defamed. ChatGPT had published a sexual harassment allegation supposedly made against him, one that included entirely false statements of “fact” and referenced a non-existent newspaper article.

Prof. Marks suggested that instead of having government grow even bigger trying to “regulate” AI systems such as ChatGPT:

How about, instead, a simple law that makes companies that release AI responsible for what their AI does? Doing so will open the way for both criminal and civil lawsuits.

Strict Liability for AI-Caused Harms

Prof. Marks has a point. Making AI-producing companies responsible for their software’s actions is feasible using two existing legal ideas. The best-known such concept is strict liability. Under general American law, strict liability exists when a defendant is liable for an action regardless of whether the defendant acted with intent, recklessness, or negligence.

That concept appeared in the 20th century as strict products liability. The person suffering harm from a product needs only to prove that a defect in the product’s design or manufacturing caused the harm. Alternatively, the person can prove that the user wasn’t adequately warned of the product’s risks. The harmed person doesn’t have to prove the product supplier was “at fault.”

Strict products liability could apply to AI systems, including bots like ChatGPT. Following Prof. Marks’ thought process, if ChatGPT gave information or advice that a user reasonably relied upon, and the user suffered or caused damage as a result, then the bot’s maker would be liable for the harm.

Untethered AI Systems May Be Exceptionally Dangerous

Another possible legal concept is abnormally dangerous activity liability. Dating back to the 19th century, the rule applies in situations where a landowner maintains a hazard, for example, a toxic waste processing plant next to a residential area. Toxic waste processing (1) is not a “common” activity, but (2) does create a foreseeable and highly significant risk of physical harm to people who do not benefit from the activity, and (3) can harm such people even if everyone involved acts reasonably. With these three factors present, the toxic waste plant is an abnormally dangerous activity. Its owners and operators can be strictly liable for harm caused to other people, regardless of fault.

Those three factors could apply to AI. People fear that AI systems like ChatGPT, along with other bots and online information and advisory systems, can invade and even direct the worlds of youngsters and naively trusting adults. AI systems are manufactured and hosted by a small number of providers, so they are not a “common activity” that everyday people perform or know all about. Already we see bots giving objectively dangerous information to people who do not benefit from such information. (Not to mention deliberate criminality using bots.)

Moreover, AI systems can harm users, especially kids, even if the AI programmers thought they were acting reasonably when designing the systems. AI systems might well be classified as abnormally dangerous activities, which means the manufacturers and providers could be held strictly liable for harm the AI systems cause.

Does Strict Liability Make Sense?

Usually, our society holds people responsible for things they actually do wrong. We also punish or penalize people for failing to do things right, when they owed a duty to act properly and correctly. Strict liability, however, holds people responsible regardless of whether they did something wrong or failed to do something right. Can that concept make sense?

There can be debate about the subject, but there are several situations in which strict liability can be beneficial. The makers of foods, drugs, complicated machines, and other devices know far more about their products than the users do. They know, for example, whether the products contain toxic materials or hidden dangers. Makers typically can professionally test the efficacy and safety of their products; the users cannot. The makers have the information needed to give buyers and users warnings. And the makers can obtain insurance to compensate buyers and users who suffer injuries through no fault of their own. These factors make it reasonable to hold the makers responsible for injuries their products cause to users, without making the user prove the maker did something specifically wrong.

AI systems seem a perfect fit for strict liability. Only the programmers know how their systems work and what software features exist. The programmers can test their software, and they can give warnings to users based upon their intimate knowledge of the product. AI users, on the other hand, know nothing about the AI systems or how they work, and cannot test the systems. AI users must rely on the programmers and trust the software to work properly and not cause harm.

The only people who can prevent AI from harming users and others are the programmers. Like the makers of foods, drugs, and machines, the programmers and their companies are the main entities who can prevent AI systems from causing unintentional or negligent damage. (A criminal user of AI systems would be responsible for criminal misuse, of course, not the AI programmers, unless the programmers made the systems to facilitate crimes.) People harmed by AI systems should not have to prove the programmers did something wrong, only that the software caused foreseeable, unreasonable harm. That is how strict liability balances the scales while encouraging AI programmers to be careful and to obtain insurance against the damage users and others may suffer.

Sue the Bums vs. Call the Cops

Society will have to decide whether laws addressing AI-caused harm should be enforced by police and criminal prosecutions, or by injured victims suing for damages in civil courts. Typically, strict liability concepts make it easier to sue the manufacturers and other providers in civil court for damages caused by their products. A typical civil lawsuit, however, costs a lot to maintain and can take years. Special fast-track courts could be created to focus on getting victims relief from AI-caused harm.

Laws could conceivably be passed to authorize police and prosecutors to investigate and charge people for the crimes their AI products commit. Because governments can prosecute crimes speedily, and victims don’t have to pay for lawyers to make a case, the criminal law system could effectively deter AI misconduct.

Yet the idea of government police rounding up programmers for chatbot messages seems a tad authoritarian. Recent reports say Chinese authorities arrested a man for using ChatGPT to write a news story falsely claiming a train accident had killed nine people. Indications are the writer faces five to ten years in prison.

The threat of strict liability lawsuits or prosecutions, as Prof. Marks suggested, may well reduce the dangers of AI systems and bots. Applying those existing legal concepts, rather than inventing new laws and government bureaucracies, can meet the challenge posed by harmful AI bots.

A version of this article previously appeared in The Stream. Used with permission.


Richard Stevens

Fellow, Walter Bradley Center on Natural and Artificial Intelligence
Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center on Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington D.C., taught legal research and writing at George Washington University and George Mason University law schools, and now specializes in writing dispositive motion and appellate briefs. He has authored or co-authored four books, and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.
