Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: The Stream


Supreme Court Ruling Strikes a Blow to “Generative AI”

Ouch. That's a big loss for AI. Here's why:

Can generative AI “think outside the box” even as it draws from preexisting material on the internet? Are the images it produces protected under “fair use”? The Supreme Court of the United States (SCOTUS) has decided “no.” AI fails to be “transformative,” meaning it can’t create new meaning apart from its source material. Robert J. Marks reported on the recent lawsuit Warhol v. Goldsmith, writing: “Assume AI is trained with all of the musical compositions of Bach. If the AI generates music that sounds like Bach, it is not transformative. The ‘meaning or message’ can be construed as being the same. It’s still like Bach. On the other hand, if the AI is trained only on Bach but generates music …” Read More ›


Artificial Intelligence, Artificial Wisdom

What manner of harms are we creating?

By Tom Gilson

Richard Stevens’ May 11 Stream article, “AI Legal Theories,” suggests we consider making Artificial Intelligence companies legally responsible for the harms they cause. We do that already with consumer products, so in principle it should be possible to do the same with AI. Enforcement would be by civil law. Injured parties would presumably be given standing to sue the source of the harm without having to prove negligence. That gets us somewhere, but not far enough. It settles the question of who is legally responsible. But responsible for what? Specifically, what will we call harm? Who will decide? Based on what standard of wisdom? Stevens gives this example of harm, citing an earlier Stream article by Robert J. Marks: “The Snapchat ChatGPT-powered …” Read More ›


Let’s Apply Existing Laws to Regulate AI

No revolutionary laws needed to fight harmful bots

In a recent article, Professor Robert J. Marks reported how artificial intelligence (AI) systems had made false reports or given dangerous advice. Instead of having government grow even bigger trying to “regulate” AI systems such as ChatGPT, Prof. Marks suggested: “How about, instead, a simple law that makes companies that release AI responsible for what their AI does?” Doing so would open the way for both criminal and civil lawsuits.

Strict Liability for AI-Caused Harms

Prof. Marks has a point. Making AI-producing companies responsible for their software’s actions is feasible using two existing legal ideas. The best known such concept is strict liability. Under general American law, strict liability exists when a defendant is liable for committing an action Read More ›


The Real Danger in AI

We are highly susceptible to suggestions about what an image means

By Jeff Gardner

The threat that artificial intelligence (AI) poses to us has been dominating the news cycle. Exactly what AI will do to us is hard to predict — it hasn’t happened yet. But some, like Elon Musk, worry that AI will be used primarily to peddle lies to us. Musk is right, but not because AI is the next thing in fake news. “Fake news” is already here, and it’s not composed of made-up stories. It is someone’s opinion being passed off as the story, the “facts” of the event. With fake news, the events are real, but the assigned meaning, the “frame” as it is called in the media, is manufactured.

The Problem

AI’s danger to us Read More ›


Should We Shut the Lid on AI?

The real danger posed by AI is not its potential. It is the lack of ethics

By John Stonestreet & K. Leander

Recently, a number of prominent tech executives, including Elon Musk, signed an open letter urging a 6-month pause on all AI research. That was not enough for AI theorist Eliezer Yudkowsky. In an opinion piece for TIME magazine, he argued that “We Need to Shut it All Down,” and he didn’t mince words: “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI … is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’” Using a tone dripping with panic, Yudkowsky even suggested that countries like the U.S. should be willing Read More ›


It’s Not What It Looks Like

Our natural tendency to connect meaning with images is both a strength and a vulnerability

The human brain tends to think concretely. We barter thoughts, words, and ideas through images. It’s why metaphorical language can be so powerful in conveying otherwise abstract ideas. I immediately think of the verse in the Bible: “But let justice roll down like waters, and righteousness like an ever-flowing stream” (Amos 5:24). It’s hard for me to picture justice on its own, but a raging waterfall? That’s a powerful image. I can now imagine what justice, in some aspect, might look like. Our natural tendency to think this way is both a strength and a vulnerability. A recent article from The Stream relates the human imagination to the current conversation over AI. While the debates rage over AI’s most pertinent Read More ›