The major science journals are growing increasingly hard left politically. The prestigious journal Science, in particular, has swallowed progressive ideology, including support for the "nature rights" movement. The rights of nature, which extend even to geological features, are generally defined as the right to "exist, persist, maintain and regenerate its vital cycles, structure, functions and its processes in evolution." Nature is, of course, not sentient. So this campaign is really about granting environmental extremists legal standing to enforce their policy desires through litigation as legal guardians serving nature's best interests. But the movement has a problem: it is clearly ideological rather than rational. So now, three law professors and a biologist writing in Science urge scientists to promote the agenda by giving courts a scientific pretext to enforce nature rights laws, or even impose the…
In a recent article, Professor Robert J. Marks reported how artificial intelligence (AI) systems had made false reports or given dangerous advice. Instead of having government grow even bigger trying to "regulate" AI systems such as ChatGPT, Prof. Marks suggested: "How about, instead, a simple law that makes companies that release AI responsible for what their AI does? Doing so will open the way for both criminal and civil lawsuits." On the question of strict liability for AI-caused harms, Prof. Marks has a point. Making AI-producing companies responsible for their software's actions is feasible using two existing legal concepts. The best known is strict liability. Under general American law, strict liability exists when a defendant is liable for committing an action…
Artificial intelligence can give unintended and dangerous advice. What is the best way to keep things like the following from happening? "ChatGPT falsely reported on a claim of sexual harassment that was never made against me, on a trip that never occurred, while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper." Who is responsible for these actions? How can AI be controlled to ensure such careless responses are eliminated? Read on and you'll see the answer is obvious. Attorney and Bradley Center Fellow Richard W. Stevens has discussed Professor Turley's legal options in a defamation lawsuit. But what about the…
One Superior Court judge has warned that many cases don't come down to information alone, which is all AI can provide. Law professor David DeWolf also expresses concern about increasing dependence upon law, a form of coercion, to regulate human behavior, a choice that is hardly irrelevant to the growth of AI in the courtroom.