Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Law

Reflection of mountain range in lake, Grand Teton National Park

Should We Give Nature “Rights”?

The nature rights movement is more ideological than rational

The major science journals are growing increasingly hard left politically. The prestigious journal Science, in particular, has swallowed progressive ideology, including support for the “nature rights” movement. The rights of nature, which extend even to geological features, are generally defined as the right to “exist, persist, maintain and regenerate its vital cycles, structure, functions and its processes in evolution.” Nature is, of course, not sentient. So this campaign is really about granting environmental extremists legal standing to enforce their policy desires through litigation as legal guardians serving nature’s best interests. But the movement has a problem: it is clearly ideological rather than rational. So now, three law professors and a biologist writing in Science urge scientists to promote the agenda by giving courts a scientific pretext to enforce nature rights laws, or even to impose the… Read More ›

The imposing court gavel in the digital environment symbolizes the decision and legal protection for large companies. Generative AI

Let’s Apply Existing Laws to Regulate AI

No revolutionary laws needed to fight harmful bots

In a recent article, Professor Robert J. Marks reported how artificial intelligence (AI) systems had made false reports or given dangerous advice. Prof. Marks suggested that instead of having government grow even bigger trying to “regulate” AI systems such as ChatGPT, we could pass a simple law that makes companies that release AI responsible for what their AI does. Doing so would open the way for both criminal and civil lawsuits.

Strict Liability for AI-Caused Harms

Prof. Marks has a point. Making AI-producing companies responsible for their software’s actions is feasible using two existing legal ideas. The best known such concept is strict liability. Under general American law, strict liability exists when a defendant is liable for committing an action… Read More ›

Robotic hand pressing a keyboard on a laptop 3D rendering

How to Stop Troubling Abuse From Artificial Intelligence

Allowance of lawsuits will give AI developers pause before releasing their raw unvetted technology on the world

Artificial intelligence can give unintended and dangerous advice. What is the best way to keep things like the following from happening? “ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper.” Who’s responsible for these actions? How can AI be controlled to ensure that such careless responses are eliminated? Read on and you’ll see the answer is obvious. Attorney and Bradley Center Fellow Richard W. Stevens has discussed the legal options available to Professor Turley in a defamation lawsuit. But what about the… Read More ›

Middle-aged man calling his attorney for legal assistance

AI in the Courtroom: Will a Robot Sentence You?

Some experts think AI might be fairer than human judgment. Others are not so sure

One Superior Court judge has warned that many cases don’t come down to information alone, and processing information is all AI can do. Law professor David DeWolf also expresses concern about our increasing dependence upon law, a form of coercion, to regulate human behavior, a choice that is relevant to the growth of AI in the courtroom.

Read More ›