Mind Matters Natural and Artificial Intelligence News and Analysis

Search Results: ChatGPT

AI is a Tool, Not a Solution for Everything

Life is complicated and our problems can't always be solved through an engineering approach

AI is a tool. In the words of Robert J. Marks from an interview at last year’s COSM conference, it can’t be a “friend or foe,” because it’s not a living entity. It’s something that can be used (or abused). Such a neutral, balanced attitude toward AI is alien to many Big Tech moguls who are pitching AI as the solution to many of the world’s complicated problems. For them, on the flipside of the doomsday alarmists, AI will usher in a new epoch of human progress. They may be partially right about that, but according to this writer at Slate, their hype reflects a commitment to technological “solutionism,” the idea that tech, once adequately suited for the task, can Read More ›

From One Author to an AI

What might John Steinbeck think of AI writing a novel?

John Steinbeck was among the most prominent literary figures of the twentieth century, responsible for works such as The Grapes of Wrath, East of Eden, and The Pearl. East of Eden, arguably his best work, is a sprawling novel about two families in the Salinas Valley in California and is often interpreted as a modern-day retelling of the book of Genesis, particularly the story of Cain and Abel. If we were to ask Steinbeck today about AI’s ability to write a good novel, what might the celebrated writer say? Well, an interview with Steinbeck from Paris Review, while it doesn’t have anything to do with computers or artificial intelligence, does reveal a significant aspect of Steinbeck’s own philosophy of writing Read More ›

AI in the Courtroom: How to Program a Hot Mess

Could AI make competent judicial decisions in court?

Imagine we’re assigned to design the artificial intelligence (AI) software to carry out legal analysis of cases like a human judge. Our project is “CourtGPT,” a system that receives a factual and legal problem in a case where there are two opposing parties, analyzes how certain statutes and other legal principles apply to the facts, and delivers a decision in favor of one of the parties. CourtGPT will make “legal decisions,” not decide “jury questions of fact,” and thus will function like a judge (not juror). To write a computer program of any complexity, we start by describing the entire program’s operations in English (my native tongue). Pro tip: If you cannot describe how your program operates in human language, then you cannot Read More ›
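The "describe it in English first" advice can be made concrete with a hypothetical top-level sketch of CourtGPT. Every name and the scoring rule here are illustrative inventions, not the article's design; the naive stub is precisely the part of the program that resists specification:

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A hypothetical case record: the stated facts plus each party's arguments."""
    facts: list[str]
    plaintiff_arguments: list[str]
    defendant_arguments: list[str]

def decide(case: Case) -> str:
    """Return "plaintiff" or "defendant".

    The scoring below (count how many of a side's arguments mention a stated
    fact) is a deliberately naive placeholder: the legal analysis a human judge
    actually performs is exactly what resists being written down as an algorithm.
    """
    def score(arguments):
        return sum(1 for a in arguments if any(f in a for f in case.facts))
    p, d = score(case.plaintiff_arguments), score(case.defendant_arguments)
    return "plaintiff" if p > d else "defendant"
```

Even this skeleton exposes the problem: the English description "analyzes how statutes apply to the facts" hides all of the difficulty inside one function body.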

AI and Human Text: Indistinct?

Here's a mathematical proof that challenges the assumption that AI and human-made text are the same

What is a poor teacher to do? With AI everywhere, how can he reliably detect when his students are having ChatGPT write their papers for them? To address this concern, a number of AI text detector tools have emerged. But do they work? A recent paper claims that AI-generated text is ultimately indistinguishable from human-generated text. The authors illustrate their claim with a couple of experiments that fool AI text detectors through simple variations of AI-generated text. Then they go on to mathematically prove their big claim that it is ultimately impossible to tell AI text and human text apart. However, the authors make a crucial assumption. Faulty Premises The proof assumes that AI-generated text will become closer and closer to Read More ›

An Illusion of Emergence, Part 2

A figure can tell a story but, intentionally or unintentionally, the story that is told may be fiction

I recently wrote about how graphs that use logarithms on the horizontal axis can create a misleading impression of the relationship between two variables. The specific example I used was the claim made in a recent paper (with 16 coauthors from Google, Stanford, UNC Chapel Hill, and DeepMind) that scaling up the number of parameters in large language models (LLMs) like ChatGPT can cause “emergence,” which they define as qualitative changes in abilities that are not present in smaller-scale models but are present in large-scale models; thus they cannot be predicted by simply extrapolating the performance improvements on smaller-scale models. They present several graphs similar to this one that seem to show emergence: However, their graphs have the logarithms of Read More ›
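The log-axis effect can be seen with a toy calculation (the hyperbola and the constant of one billion parameters below are illustrative choices, not numbers from the paper). The performance curve is smooth and gradual everywhere, yet when sampled at the log-spaced model sizes typical of scaling plots it reads as flat, flat, flat, then a sudden leap:

```python
# Toy illustration: a perfectly smooth performance curve can look "emergent"
# when the horizontal axis is logarithmic. All numbers are illustrative.

def accuracy(n, c=1e9):
    """A smooth, monotonic function of the parameter count n: no jumps anywhere."""
    return n / (n + c)

# Log-spaced model sizes, as scaling plots typically use.
sizes = [10 ** k for k in range(6, 13)]   # 1e6 .. 1e12 parameters
accs = [accuracy(n) for n in sizes]

for n, a in zip(sizes, accs):
    print(f"{n:>16,d} params -> accuracy {a:.4f}")
# The column sits near 0.00 for the first few rows, then leaps toward 1.00:
# on a log axis that reads as a qualitative change of ability, even though
# the underlying function changes gradually at every point.
```

Equal steps on the log axis correspond to tenfold jumps in parameters, so the compressed left side of the plot hides most of the gradual improvement.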

The Irony in Musk’s AI Distrust

As a leader in AI, why is Musk advocating a pause on its development?

Elon Musk joined a petition to “pause” AI research, citing concern over its potential harms and overreach. This is interesting, since Musk originally funded OpenAI, which is now at the forefront of cutting-edge AI systems like ChatGPT. In addition, Musk’s ventures with self-driving cars and his confidence in neural technology all cast him as a leader in the AI revolution. So why is he calling for a hiatus? According to a recent Slate article, the warnings against Large Language Models (LLMs) are a distraction from the more dangerous AI inventions like the self-driving car. Musk uses sci-fi alarmism to exacerbate the fear of a machine takeover, while his own experiments in automation have also proved to be dangerous for human Read More ›

Elon Musk to AI Labs: Press Pause

The petition reflects growing concern over the proper role of AI in human society and its potential for overreach

Over 1,000 leaders and experts in technology and science, including Elon Musk, are now urging artificial intelligence labs to pause their research and distribution of new AI technologies. They believe moving forward so swiftly on AI research could bring about unintended consequences in the future, and that we don’t understand AI well enough yet to be placing so much trust in it. According to The New York Times, The open letter called for a pause in the development of A.I. systems more powerful than GPT-4, the chatbot introduced this month by the research lab OpenAI, which Mr. Musk co-founded. The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be Read More ›

Robert Marks at The Daily Caller

Despite Big Tech executives’ confidence in new AI, these systems make quite glaring mistakes

Robert J. Marks wrote a piece at The Daily Caller this week on artificial intelligence, ChatGPT, and the manifold problems of new AI systems like Google’s Bard and older ones such as Amazon’s Alexa. Despite the confidence Big Tech executives express in new AI, these systems make quite glaring mistakes, although Marks believes AI has its genuine uses and benefits. Snapchat’s chatbot “My AI” gave advice about how to hide the smell of pot and alcohol to someone posing as a disgruntled teenager. Microsoft’s Bing bot professed its love for a tech journalist. A Google app made egregiously racist errors. ChatGPT is also politically biased despite claiming neutrality. Marks writes, Many warn of the future dangers of artificial intelligence. Many Read More ›

A Graph Can Tell a Story—Sometimes It’s an Illusion

Mistakes, chicanery, and "chartjunk" can undermine the usefulness of graphs

A picture is said to be worth a thousand words. A graph can be worth a thousand numbers. Graphs are, as Edward Tufte titled his wonderful book, the “visual display of quantitative information.” Graphs should assist our understanding of the data we are using. Graphs can help us identify tendencies, patterns, trends, and relationships. They should display data accurately and encourage viewers to think about the data rather than admire the artwork. Unfortunately, graphs are sometimes marred (intentionally or unintentionally) by a variety of misleading techniques or by what Tufte calls “chartjunk” that obscures rather than illuminates. I have described elsewhere many ways in which mistakes, chicanery, and chartjunk can undermine the usefulness of graphs. I recently saw a novel Read More ›

The Bard is Beginning to Sing

Google has unleashed its own bot. Does that mean they're in the running in the AI race now?

Google has officially made its AI chatbot “Bard” available in the United States and the UK, staking its place in the artificial intelligence “arms race” that has taken the tech world by storm since OpenAI released its GPT-3 program last November. Microsoft released its own bot just a few weeks ago, though it immediately drew concern for its unpredictable mannerisms and even “creepy” behavior, as described by Kevin Roose from The New York Times. “Google’s had a rocky AI journey after being caught out by the runaway success of OpenAI’s ChatGPT chatbot,” write a team of contributors at Forbes. This all raises the question: is Google really prepared to be a leader in this field? It may be a forefather Read More ›

Gary Smith’s New Book Reviewed in Washington Post

Smith argues that science itself is being undermined by the tools scientists use

Walter Bradley Center Senior Fellow Gary Smith’s book Distrust: Big Data, Data-Torturing, and the Assault on Science was reviewed in The Washington Post today. Smith is a frequent contributor to Mind Matters and teaches economics at Pomona College. In his new book, Smith argues that science itself is being “undermined” by the tools scientists use. Reviewer Abby Ohlheister writes, Smith, an economist whose work often examines the misuse of data and statistics in a variety of disciplines, argues that the current crisis of trust in science falls at the intersection of three forces: disinformation, data torturing and data mining. Disinformation, as Smith writes, is “as old as the human race,” but accelerated in speed and reach alongside social media. Data Read More ›

Artists and Writers Unite Against AI

New Guardian article highlights the artists and writers pushing back against AI

A new article from The Guardian covers the struggle of artists, writers, and other “creatives” who are pushing back against AI systems like Midjourney and ChatGPT, which they say are starting to threaten their livelihoods. Betsy Reed writes, Poetry may still be a hard nut for AI to crack convincingly, but among the first to face a genuine threat to their livelihoods are photographers and designers. Generative software can produce images at the touch of the button, while sites like the popular NightCafe make “original”, data-derived artwork in response to a few simple verbal prompts. The first line of defence is a growing movement of visual artists and image agencies who are now “opting out” of allowing their work to Read More ›

AI vs. Human Intentionality

If ChatGPT were trained over and over on its own output, it would eventually turn to gibberish

We can do a simple experiment that demonstrates the difference between AI and human intentionality. ChatGPT and the like are a sophisticated form of a mathematical model known as a Markov chain. A Markov chain is based on the Markov assumption that the future is entirely a product of the recent past. In other words, if we know the recent past, then nothing else we learn about the more distant past will improve our ability to predict the future. In ChatGPT terms, this means ChatGPT is based on the assumption that everything we need to know to predict future words is contained within a limited window of previously seen words. ChatGPT’s window was 3,000 words, and I believe the newest version has Read More ›
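The Markov assumption is easy to demonstrate with a toy bigram model (a minimal sketch; ChatGPT's context window is vastly larger and its model far more sophisticated, but the principle is the same). Each next word is sampled from the current word alone, and when the model is retrained on its own output, its vocabulary can only shrink, a toy version of the drift toward gibberish described above:

```python
import random
from collections import defaultdict

def train(words):
    """Bigram Markov model: for each word, the list of words observed to follow it."""
    model = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def generate(model, start, n, rng):
    """Sample up to n words; each next word depends only on the current word
    (the Markov assumption), never on anything earlier."""
    out = [start]
    while len(out) < n:
        successors = model.get(out[-1])
        if not successors:        # dead end: this word was only ever seen last
            break
        out.append(rng.choice(successors))
    return out

corpus = ("the quick brown fox jumps over the lazy dog while "
          "the lazy dog sleeps and the quick fox runs over the hill").split()
rng = random.Random(0)

# Retrain on the model's own output for several generations.
words, vocab_sizes = corpus, []
for _ in range(5):
    model = train(words)
    words = generate(model, words[0], 200, rng)
    vocab_sizes.append(len(set(words)))
print(vocab_sizes)   # the vocabulary never grows from one generation to the next
```

Because each generation can only emit words its training text contained, information is lost and never recovered; the same one-way degradation, at vastly larger scale, is the intuition behind the "trained on its own output" thought experiment.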

Robert J. Marks on Fox News

Dr. Marks talked about the hype over AI and where the danger really lies

Robert J. Marks, director of the Walter Bradley Center, was the subject of a new op-ed at Fox News, which featured his unique insights on artificial intelligence and how it can best be used by human beings. Contrary to much of the present alarmism over AI systems like ChatGPT, Marks remains level-headed about AI’s place in today’s world, pushing back against “sci-fi” apocalypticism while at the same time addressing real issues and concerns. Marks said, “No, AI is not going to take over. You’re not going to be its pet in the future. It’s going to be a tool which disrupts society, but we’re still going to be in charge.” -Scott Whitlock, AI expert warns of too much ‘hype’: Humans Read More ›

We Can’t Build a Hut to the Moon

The history of AI is a story of a recurring cycle of hype and disappointment

Once upon a time there lived a tribe on the plains. They were an adventurous tribe, constantly wanting to explore. At night they would watch the moon drift lazily overhead, and they grew curious. How could they reach the moon? The moon was obviously higher than their huts. Standing on the highest hut, no one could reach the moon. At the same time, standing on the hut got them closer to the moon. So they decided to amass all their resources and build a gigantic hut, reasoning that if standing on a short hut got them closer to the moon, then standing on a gigantic hut would get them even closer. Eventually the tribe ran out of mud and Read More ›

AI is Old News, Says Tech Consultant

AI actually dates back to the 1950s. It is not new, says Funk

This week, Robert J. Marks sat down with technology consultant and retired professor Jeffrey Funk, who contributes often to Mind Matters, usually in tandem with Gary Smith. Marks and Funk talked about tech startups, where the industry is headed, and the exaggerated hype that currently attends the discourse over AI. Funk talked about the various stages of AI development. “AI is not new,” he said. AI is 70 years old. ChatGPT and other generative AI models are based on neural networks, which have become economical through Moore’s Law, through this incredible increase in computing power that has been going on since the 1950s. But it’s slowed dramatically. -Jeffrey Funk, Jeffrey Funk on AI, Startups, and Big Tech | Mind Matters Read More ›

Love Thy Robot as Thyself

Academics worry about AI feelings, call for AI rights

Riffing on the popular fascination with AI (artificial intelligence) systems ChatGPT and Bing Chat, two authors in the Los Angeles Times recently declared: We are approaching an era of legitimate dispute about whether the most advanced AI systems have real desires and emotions and deserve substantial care and solicitude. The authors, Prof. Eric Schwitzgebel at UC Riverside, and Henry Shevlin, a senior researcher at the University of Cambridge, observed AI thinkers saying “large neural networks” might be “conscious,” the sophisticated chatbot LaMDA “might have real emotions,” and ordinary human users reportedly “falling in love” with chatbot Replika.  Reportedly, “some leading theorists contend that we already have the core technological ingredients for conscious machines.”  The authors argue that if or when Read More ›

Observing and Communing

What human art and literature do that AI can't

AI image generators like Midjourney or DALL-E are generally adept at capturing the human form accurately. Copyright, job displacement, and the general degradation of the visual arts via such AI remain ongoing concerns for many artists and practitioners. However, a new New Yorker article by Kyle Chayka identifies a noticeable flaw in AI artwork: human hands. Missing the Big Picture Chayka begins by recalling an art class where he was asked to draw his own hand. It’s an assignment for beginners, and it tempts the novice to focus on the specific contours of the hand rather than its overall structure and form. The forest gets lost in the trees, so to speak. Read More ›

Learning to Communicate

Why writing skills are so important, especially in today's artificial world

Educators have been shaken by fears that students will use ChatGPT and other large language models (LLMs) to answer questions and write essays. LLMs are indeed astonishingly good at finding facts and generating coherent essays — although the alleged facts are sometimes false and the essays are sometimes tedious BS supported by fake references. I am more optimistic than most. I am hopeful that LLMs will be a catalyst for a widespread discussion of our educational goals. What might students learn in schools that will be useful long after they graduate? There are many worthy goals, but critical thinking and communication skills should be high on any list. I’ve written elsewhere about how critical thinking abilities are important for students Read More ›

Robert J. Marks on The Laura Ingraham Show

In response to those who believe AI will take over the world, Marks says, "Look at history."

Robert J. Marks, director of Discovery Institute’s Walter Bradley Center, recently appeared on a podcast episode with Fox News host Laura Ingraham to talk about artificial intelligence, tech, and Dr. Marks’s book Non-Computable You: What You Do That AI Never Will. Ingraham prefaced the conversation with some thoughts on the rapidly evolving technological world we find ourselves in, and the changes such developments are inflicting on society. In response to the futurism and unbounded optimism in AI systems like ChatGPT that many modern figures hold, Marks said that what computers do is strictly algorithmic: This leads us to the idea of whether or not there are non-computable characteristics of human beings, and I think there is growing evidence that there Read More ›