Mind Matters Natural and Artificial Intelligence News and Analysis

Category: Artificial Intelligence

Daft Punk and Being Human After All

Former member said AI concerns contributed in part to the band's breakup

Legendary electronic pop duo Daft Punk split two years ago after decades of collaboration and are esteemed as pioneers for the creative ways they infused their music with cutting-edge sonic technology. However, in a recent interview, ex-member Thomas Bangalter said that artificial intelligence contributed in part to the band’s separation. Per a report from Futurism, Bangalter said, “We tried to use these machines to express something extremely moving that a machine cannot feel, but a human can. We were always on the side of humanity and not on the side of technology.” -NOOR AL-SIBAI, Daft Punk Says They Broke Up Partially Over Fear of AI (futurism.com) The pop duo notoriously dressed up in robotic garb during their musical Read More ›

AI is a Tool, Not a Solution for Everything

Life is complicated and our problems can't always be solved through an engineering approach

AI is a tool. In the words of Robert J. Marks from an interview at last year’s COSM conference, it can’t be a “friend or foe,” because it’s not a living entity. It’s something that can be used (or abused). Such a neutral, balanced attitude toward AI is alien to many Big Tech moguls who are pitching AI as the solution to many of the world’s complicated problems. For them, on the flip side of the doomsday alarmists, AI will usher in a new epoch of human progress. They may be partially right about that, but according to this writer at Slate, their hype reflects a commitment to technological “solutionism,” the idea that tech, once adequately suited for the task, can Read More ›

From One Author to an AI

What might John Steinbeck think of AI writing a novel?

John Steinbeck was among the most prominent literary figures of the twentieth century, responsible for works such as The Grapes of Wrath, East of Eden, and The Pearl. East of Eden, arguably his best work, is a sprawling novel about two families in the Salinas Valley in California and is often interpreted as a modern-day retelling of the book of Genesis, particularly the story of Cain and Abel. If we were to ask Steinbeck today about AI’s ability to write a good novel, what might the celebrated writer say? Well, an interview with Steinbeck from The Paris Review, while it doesn’t have anything to do with computers or artificial intelligence, does reveal a significant aspect of Steinbeck’s own philosophy of writing Read More ›

AI in the Courtroom: How to Program a Hot Mess

Could AI make competent judicial choices in the court?

Imagine we’re assigned to design the artificial intelligence (AI) software to carry out legal analysis of cases like a human judge. Our project is “CourtGPT,” a system that receives a factual and legal problem in a case where there are two opposing parties, analyzes how certain statutes and other legal principles apply to the facts, and delivers a decision in favor of one of the parties. CourtGPT will make “legal decisions,” not decide “jury questions of fact,” and thus will function like a judge (not a juror). To write a computer program of any complexity, we start by describing the entire program’s operations in English (my native tongue). Pro tip: If you cannot describe how your program operates in human language, then you cannot Read More ›

AI and Human Text: Indistinct?

Here's a mathematical proof that challenges the assumption that AI and human-made text are the same

What is a poor teacher to do? With AI everywhere, how can he reliably detect when his students are having ChatGPT write their papers for them? To address this concern, a number of AI text detector tools have emerged. But do they work? A recent paper claims that AI-generated text is ultimately indistinguishable from human-generated text. The authors illustrate their claim with a couple of experiments that fool AI text detectors through simple variations of AI-generated text. Then, the authors go on to mathematically prove their big claim that it is ultimately impossible to tell AI text and human text apart. However, the authors make a crucial assumption. Faulty Premises The proof assumes that AI-generated text will become closer and closer to Read More ›

The Irony in Musk’s AI Distrust

As a leader in AI, why is Musk advocating a pause on its development?

Elon Musk joined a petition to “pause” AI research, citing concern over its potential harms and overreach. This is interesting, since Musk originally funded OpenAI, which is now at the forefront of cutting-edge AI systems like ChatGPT. In addition, Musk’s ventures with self-driving cars and his confidence in neural technology all cast him as a leader in the AI revolution. So why is he calling for a hiatus? According to a recent Slate article, the warnings against Large Language Models (LLMs) are a distraction from the more dangerous AI inventions like the self-driving car. Musk uses sci-fi alarmism to exacerbate the fear of a machine takeover, while his own experiments in automation have also proved to be dangerous for human Read More ›

Elon Musk to AI Labs: Press Pause

The petition reflects growing concern over the proper role of AI in human society and its potential for overreach

Over 1,000 leaders and experts in technology and science, including Elon Musk, are now urging artificial intelligence labs to pause their research and distribution of new AI technologies. They believe moving forward so swiftly on AI research could bring about unintended consequences in the future, and that we don’t understand AI well enough yet to be casting so much trust in it. According to The New York Times, The open letter called for a pause in the development of A.I. systems more powerful than GPT-4, the chatbot introduced this month by the research lab OpenAI, which Mr. Musk co-founded. The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be Read More ›

Robert Marks at The Daily Caller

Despite the confidence in new AI coming from Big Tech executives, it makes quite glaring mistakes

Robert J. Marks wrote a piece at The Daily Caller this week on artificial intelligence, ChatGPT, and the manifold problems of new AI systems like Google’s Bard and older ones such as Amazon’s Alexa. Despite the confidence Big Tech executives place in new AI, these systems make quite glaring mistakes, although Marks believes AI has its genuine uses and benefits. Snapchat’s chatbot “My AI” gave advice on how to hide the smell of pot and alcohol to someone posing as a disgruntled teenager. Microsoft’s Bing bot professed its love for a tech journalist. A Google app made egregiously racist errors. ChatGPT is also politically biased despite claiming neutrality. Marks writes, Many warn of the future dangers of artificial intelligence. Many Read More ›

A Graph Can Tell a Story—Sometimes It’s an Illusion

Mistakes, chicanery, and "chartjunk" can undermine the usefulness of graphs

A picture is said to be worth a thousand words. A graph can be worth a thousand numbers. Graphs are, as Edward Tufte titled his wonderful book, the “visual display of quantitative information.” Graphs should assist our understanding of the data we are using. Graphs can help us identify tendencies, patterns, trends, and relationships. They should display data accurately and encourage viewers to think about the data rather than admire the artwork. Unfortunately, graphs are sometimes marred (intentionally or unintentionally) by a variety of misleading techniques or by what Tufte calls “chartjunk” that obscures rather than illuminates. I have described elsewhere many ways in which mistakes, chicanery, and chartjunk can undermine the usefulness of graphs. I recently saw a novel Read More ›

GPT-4: Signs of Human-Level Intelligence?

Competence and understanding matter just as much if not more than mere "intelligence"

You’ve heard about GPT-3, but how about GPT-4? OpenAI has publicly released the new AI program, and researchers have already claimed that it shows “sparks” of human intelligence, or artificial general intelligence (AGI). Maggie Harrison writes at Futurism, Emphasis on the “sparks.” The researchers are careful in the paper to characterize GPT-4’s prowess as “only a first step towards a series of increasingly generally intelligent systems” rather than fully-hatched, human-level AI. They also repeatedly highlighted the fact that this paper is based on an “early version” of GPT-4, which they studied while it was “still in active development by OpenAI,” and not necessarily the version that’s been wrangled into product-applicable formation. -Maggie Harrison, Microsoft Researchers Claim GPT-4 Is Showing “Sparks” Read More ›

The Bard is Beginning to Sing

Google has unleashed its own bot. Does that mean they're in the running in the AI race now?

Google has officially made its AI chatbot “Bard” available in the United States and the UK, staking its place in the artificial intelligence “arms race” that has taken the tech world by storm since OpenAI released ChatGPT last November. Microsoft released its own bot just a few weeks ago, though it immediately drew concern for its unpredictable mannerisms and even “creepy” behavior, as described by Kevin Roose of The New York Times. “Google’s had a rocky AI journey after being caught out by the runaway success of OpenAI’s ChatGPT chatbot,” write a team of contributors at Forbes. This all raises the question: is Google really prepared to be a leader in this field? It may be a forefather Read More ›

Gary Smith’s New Book Reviewed in Washington Post

Smith argues that science itself is being undermined by the tools scientists use

Walter Bradley Center Senior Fellow Gary Smith’s book Distrust: Big Data, Data-Torturing, and the Assault on Science was reviewed in The Washington Post today. Smith is a frequent contributor to Mind Matters and teaches economics at Pomona College. In his new book, Smith argues that science itself is being “undermined” by the tools scientists use. Reviewer Abby Ohlheiser writes, Smith, an economist whose work often examines the misuse of data and statistics in a variety of disciplines, argues that the current crisis of trust in science falls at the intersection of three forces: disinformation, data torturing and data mining. Disinformation, as Smith writes, is “as old as the human race,” but accelerated in speed and reach alongside social media. Data Read More ›

Lemoine at COSM 2022: A Conversation on AI and LaMDA

Will AI ever become "sentient"?

Blake Lemoine, ex-Google employee and AI expert, sat down with Discovery Institute’s Jay Richards at the 2022 COSM conference last November. Together they discussed AI, Google, and how and why Lemoine got to where he is today. Lemoine famously claimed last year that LaMDA, Google’s breakthrough AI technology, had achieved sentience. Lemoine explains that many people at Google thought AI had the potential for sentience, but that such technology should not be made prematurely for fear of the negative impacts it could have on society. You can listen to their interesting and brief conversation in the video below, and be sure to see more sessions from the 2022 COSM conference featuring Lemoine and other leaders and innovators in technology on Read More ›

Dear Silicon Valley: You’re Over-Hyping ChatGPT

The abilities of these new chatbots are grossly overstated

Gary Smith and Jeffrey Funk, frequent Mind Matters contributors, co-wrote a piece at Salon on the exaggerated dreams big tech has for AI. They write, Silicon Valley’s pre-eminent leaders love prematurely predicting that their products will completely upend the world as we know it. The latest case study comes from Sam Altman, CEO of OpenAI, the company behind the ChatGPT AI chatbot that has gone viral for its convincing imitations of human writing. Two years ago, Altman wrote a manifesto, “Moore’s Law for Everything,” in which he forecast that artificial intelligence would make huge swaths of both white collar and blue collar jobs obsolete. -Smith & Funk, Don’t believe the hype: why ChatGPT is not the “holy grail” of AI research | Read More ›

Artists and Writers Unite Against AI

New Guardian article highlights the artists and writers pushing back against AI

A new article from The Guardian covers the struggle of artists, writers, and other “creatives” who are pushing back against AI systems like Midjourney and ChatGPT, which they say are beginning to threaten their livelihoods. Betsy Reed writes, Poetry may still be a hard nut for AI to crack convincingly, but among the first to face a genuine threat to their livelihoods are photographers and designers. Generative software can produce images at the touch of the button, while sites like the popular NightCafe make “original”, data-derived artwork in response to a few simple verbal prompts. The first line of defence is a growing movement of visual artists and image agencies who are now “opting out” of allowing their work to Read More ›

AI vs. Human Intentionality

If ChatGPT were trained over and over on its own output, it would eventually turn to gibberish

We can do a simple experiment that demonstrates the difference between AI and human intentionality. ChatGPT and the like are a sophisticated form of a mathematical model known as a Markov chain. A Markov chain is based on the Markov assumption that the future is entirely a product of the recent past. In other words, if we know the recent past, then nothing else we learn about the more distant past will improve our ability to predict the future. In ChatGPT terms, this means ChatGPT is based on the assumption that everything we need to know to predict future words is contained within a limited window of previously seen words. ChatGPT’s window was 3,000 words, and I believe the newest version has Read More ›
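The Markov assumption described above can be illustrated with a toy word-level Markov chain. This is only an illustrative sketch, not how ChatGPT actually works internally (a transformer is far more sophisticated), but it shows the shared idea of a fixed context window: the next word is predicted only from the most recent few words.

```python
import random
from collections import defaultdict

def build_chain(text, window=2):
    """Map each `window`-word context to the words observed to follow it.

    This encodes the Markov assumption: the next word depends only on
    the most recent `window` words, never on the more distant past.
    """
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - window):
        context = tuple(words[i:i + window])
        chain[context].append(words[i + window])
    return chain

def generate(chain, start, n=10):
    """Generate up to n words by repeatedly sampling a successor of the
    current context, then sliding the window forward one word."""
    context = tuple(start)
    out = list(start)
    for _ in range(n):
        successors = chain.get(context)
        if not successors:  # context never seen: nothing to predict
            break
        out.append(random.choice(successors))
        context = tuple(out[-len(start):])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog"
chain = build_chain(corpus, window=2)
print(generate(chain, ["the", "cat"], n=5))
```

With this tiny corpus, the context ("the", "cat") can be followed by either "sat" or "saw", and the generator picks among observed successors at random. Scaling the window and corpus up enormously, while replacing the lookup table with a learned statistical model, gives the flavor of how a fixed context window constrains prediction.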

Robert J. Marks on Fox News

Dr. Marks talked about the hype over AI and where the danger really lies

Robert J. Marks, director of the Walter Bradley Center, was the subject of a new op-ed at Fox News, which featured his unique insights on artificial intelligence and how it can best be used by human beings. Contrary to much of the present alarmism over AI systems like ChatGPT, Marks remains level-headed about AI’s place in today’s world, pushing back against “sci-fi” apocalypticism while at the same time addressing real issues and concerns. Marks said, “No, AI is not going to take over. You’re not going to be its pet in the future. It’s going to be a tool which disrupts society, but we’re still going to be in charge.” -Scott Whitlock, AI expert warns of too much ‘hype’: Humans Read More ›

We Can’t Build a Hut to the Moon

The history of AI is a story of a recurring cycle of hype and disappointment

Once upon a time there lived a tribe on the plains. They were an adventurous tribe, constantly wanting to explore. At night they would see the moon drift lazily overhead, and they became curious: how could they reach the moon? The moon was obviously higher than their huts. Standing on the highest hut, no one could reach the moon. At the same time, standing on the hut got them closer to the moon. So they decided to amass all their resources and build a gigantic hut, reasoning that if standing on a short hut got them closer to the moon, then standing on a gigantic hut would get them even closer. Eventually the tribe ran out of mud and Read More ›

Is the “Singularity” Already Here?

Two experts in the tech field warn that the singularity, as we've come to imagine it, might be right around the corner

The singularity refers to the future point when the human and the machine will merge––when the personal and technological distinction breaks down entirely. It sounds dystopian and far off. But, two experts in the tech field warn that the singularity, as we’ve come to imagine it, is right upon us. Writing for The Conversation, John Kendall Hawkins and Sandy Boucher comment, To understand why this isn’t the stuff of fairy tales, we need only look as far as recent developments in brain-computer interfaces (BCIs). BCIs are a natural beginning to the singularity in the eyes of many futurists, because they meld mind and machine in a way no other technology so far can. Elon Musk’s company Neuralink is seeking permission from the US Food Read More ›

ChatGPT and Personal Consciousness

AI vs. the human voice in literature and the arts

This week, Peter Biles, Writer & Editor for Discovery Institute’s Center for Science & Culture, wrote a piece for Salvo on ChatGPT and the uniqueness of the human voice in literature and the arts. Biles cites Christina Bieber Lake, professor of English at Wheaton College, from her book Beyond the Story: American Literary Fiction and the Limits of Materialism. Bieber Lake pushes back against the reductionistic worldview of Darwinistic materialism, appealing to the personal nature of the human being and the relationships we share together. Since a computer lacks personal consciousness, it also fails to create meaningful literature, which always involves two persons––one person speaking to another. Biles also cites Robert J. Marks’s essential book on the topic Read More ›