Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: ChatGPT


An Illusion of Emergence, Part 2

A figure can tell a story but, intentionally or unintentionally, the story that is told may be fiction

I recently wrote about how graphs that use logarithms on the horizontal axis can create a misleading impression of the relationship between two variables. My specific example was a claim made in a recent paper (with 16 coauthors from Google, Stanford, UNC Chapel Hill, and DeepMind) that scaling up the number of parameters in large language models (LLMs) like ChatGPT can cause “emergence,” which they define as qualitative abilities that are present in large-scale models but not in smaller-scale models, and thus cannot be predicted by simply extrapolating the performance improvements of smaller-scale models. They present several graphs that seem to show emergence. However, their graphs have the logarithms of Read More ›
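The illusion can be illustrated with a toy curve (all numbers below are hypothetical, not from the paper): performance that improves smoothly with parameter count can still look like a sudden jump when model sizes are sampled at the equal visual spacings of a logarithmic axis, because each tick represents a tenfold increase in parameters.

```python
import math

# Hypothetical smooth performance curve: accuracy rises gradually with
# parameter count N. The constants here are illustrative only.
def accuracy(n_params):
    # Logistic curve in log10(N): smooth everywhere, no true discontinuity.
    x = math.log10(n_params)
    return 1 / (1 + math.exp(-2.0 * (x - 9.0)))  # midpoint at 1e9 params

# Log-spaced model sizes, as they would appear evenly spaced on a log axis.
sizes = [10**e for e in range(7, 12)]   # 1e7 .. 1e11 parameters
accs = [accuracy(n) for n in sizes]

# Between adjacent ticks the accuracy barely moves at first, then leaps:
# on the log axis this reads as sudden "emergence," though the underlying
# function changes smoothly at every parameter count in between.
for n, a in zip(sizes, accs):
    print(f"{n:>15,} params -> accuracy {a:.3f}")
```

The same curve plotted against a linear parameter axis would show a long, gradual rise with no apparent threshold.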


Elon Musk to AI Labs: Press Pause

The petition reflects growing concern over the proper role of AI in human society and its potential for overreach

Over 1,000 leaders and experts in technology and science, including Elon Musk, are now urging artificial intelligence labs to pause their research and distribution of new AI technologies. They believe moving forward so swiftly on AI research could bring about unintended consequences in the future, and that we don’t understand AI well enough yet to be placing so much trust in it. According to The New York Times, The open letter called for a pause in the development of A.I. systems more powerful than GPT-4, the chatbot introduced this month by the research lab OpenAI, which Mr. Musk co-founded. The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be Read More ›


Robert Marks at The Daily Caller

Despite the confidence in new AI coming from Big Tech executives, it makes quite glaring mistakes

Robert J. Marks wrote a piece at The Daily Caller this week on artificial intelligence, ChatGPT, and the manifold problems of new AI systems like Google’s Bard and older ones such as Amazon’s Alexa. Despite the confidence in new AI coming from Big Tech executives, it makes quite glaring mistakes, although Marks believes AI has its genuine uses and benefits. Snapchat’s chatbot “My AI” gave advice about how to hide the smell of pot and alcohol to someone posing as a disgruntled teenager. Microsoft’s Bing bot professed its love for a tech journalist. A Google app made egregiously racist errors. ChatGPT is also politically biased despite claiming neutrality. Marks writes, Many warn of the future dangers of artificial intelligence. Many Read More ›


A Graph Can Tell a Story—Sometimes It’s an Illusion

Mistakes, chicanery, and "chartjunk" can undermine the usefulness of graphs

A picture is said to be worth a thousand words. A graph can be worth a thousand numbers. Graphs are, as Edward Tufte titled his wonderful book, the “visual display of quantitative information.” Graphs should assist our understanding of the data we are using. Graphs can help us identify tendencies, patterns, trends, and relationships. They should display data accurately and encourage viewers to think about the data rather than admire the artwork. Unfortunately, graphs are sometimes marred (intentionally or unintentionally) by a variety of misleading techniques or by what Tufte calls “chartjunk” that obscures rather than illuminates. I have described elsewhere many ways in which mistakes, chicanery, and chartjunk can undermine the usefulness of graphs. I recently saw a novel Read More ›


GPT-4: Signs of Human-Level Intelligence?

Competence and understanding matter as much as, if not more than, mere "intelligence"

You’ve heard about GPT-3, but how about GPT-4? OpenAI has publicly released the new AI program, and researchers have already claimed that it shows “sparks” of human intelligence, or artificial general intelligence (AGI). Maggie Harrison writes at Futurism, Emphasis on the “sparks.” The researchers are careful in the paper to characterize GPT-4’s prowess as “only a first step towards a series of increasingly generally intelligent systems” rather than fully-hatched, human-level AI. They also repeatedly highlighted the fact that this paper is based on an “early version” of GPT-4, which they studied while it was “still in active development by OpenAI,” and not necessarily the version that’s been wrangled into product-applicable formation. -Maggie Harrison, Microsoft Researchers Claim GPT-4 Is Showing “Sparks” Read More ›


The Bard is Beginning to Sing

Google has unleashed its own bot. Does that mean they're in the running in the AI race now?

Google has officially made its AI chatbot “Bard” available in the United States and the UK, staking its place in the artificial intelligence “arms race” that has taken the tech world by storm since OpenAI released ChatGPT last November. Microsoft released its own bot just a few weeks ago, though it immediately drew concern for its unpredictable mannerisms and even “creepy” behavior, as described by Kevin Roose of The New York Times. “Google’s had a rocky AI journey after being caught out by the runaway success of OpenAI’s ChatGPT chatbot,” write a team of contributors at Forbes. All of this raises the question: is Google really prepared to be a leader in this field? It may be a forefather Read More ›


Gary Smith’s New Book Reviewed in Washington Post

Smith argues that science itself is being undermined by the tools scientists use

Walter Bradley Center Senior Fellow Gary Smith’s book Distrust: Big Data, Data-Torturing, and the Assault on Science was reviewed in The Washington Post today. Smith is a frequent contributor to Mind Matters and teaches economics at Pomona College. In his new book, Smith argues that science itself is being “undermined” by the tools scientists use. Reviewer Abby Ohlheiser writes, Smith, an economist whose work often examines the misuse of data and statistics in a variety of disciplines, argues that the current crisis of trust in science falls at the intersection of three forces: disinformation, data torturing and data mining. Disinformation, as Smith writes, is “as old as the human race,” but accelerated in speed and reach alongside social media. Data Read More ›


Dear Silicon Valley: You’re Over-Hyping ChatGPT

The abilities of these new chatbots are grossly overstated

Gary Smith and Jeffrey Funk, frequent Mind Matters contributors, co-wrote a piece at Salon on the exaggerated dreams Big Tech has for AI. They write, Silicon Valley’s pre-eminent leaders love prematurely predicting that their products will completely upend the world as we know it. The latest case study comes from Sam Altman, CEO of OpenAI, the company behind the ChatGPT AI chatbot that has gone viral for its convincing imitations of human writing. Two years ago, Altman wrote a manifesto, “Moore’s Law for Everything,” in which he forecast that artificial intelligence would make huge swaths of both white collar and blue collar jobs obsolete. -Smith & Funk, Don’t believe the hype: why ChatGPT is not the “holy grail” of AI research | Read More ›


AI vs. Human Intentionality

If ChatGPT were trained over and over on its own output, it would eventually turn to gibberish

We can do a simple experiment that demonstrates the difference between AI and human intentionality. ChatGPT and the like are a sophisticated form of a mathematical model known as a Markov chain. A Markov chain is based on the Markov assumption that the future is entirely a product of the recent past. In other words, if we know the recent past, then nothing else we learn about the more distant past will improve our ability to predict the future. In ChatGPT terms, this means ChatGPT is based on the assumption that everything we need to know to predict future words is contained within a limited window of previously seen words. ChatGPT’s window was 3,000 words, and I believe the newest version has Read More ›
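The experiment can be sketched with a toy word-level Markov chain (a deliberately crude, tiny-window stand-in for an LLM; the corpus, chain order, and iteration count below are made up for illustration). Because the chain can only ever emit words it has already seen, retraining it on its own output can shrink its vocabulary but never grow it, and the text drifts toward repetitive gibberish:

```python
import random
from collections import defaultdict

def train(words, order=2):
    """Build a Markov transition table: context tuple -> possible next words."""
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length, order, rng):
    """Sample a word sequence from the chain, restarting at a random
    context whenever the current context has no recorded continuation."""
    out = list(rng.choice(list(model.keys())))
    while len(out) < length:
        choices = model.get(tuple(out[-order:]))
        if not choices:  # dead end: jump to a fresh context
            out.extend(rng.choice(list(model.keys())))
            continue
        out.append(rng.choice(choices))
    return out[:length]

rng = random.Random(0)
corpus = ("the quick brown fox jumps over the lazy dog and the lazy dog "
          "sleeps while the quick fox runs over the quiet field").split()

# Retrain the chain on its own output several times.
text = corpus
for _ in range(5):
    text = generate(train(text), length=len(corpus), order=2, rng=rng)

# The surviving vocabulary is at best a subset of the original corpus.
print(sorted(set(text)))
```

A human writer, by contrast, can introduce words and ideas that never appeared in anything they previously read, which is the difference in intentionality the experiment is meant to expose.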


Robert J. Marks on Fox News

Dr. Marks talked about the hype over AI and where the danger really lies

Robert J. Marks, director of the Walter Bradley Center, was the subject of a new op-ed at Fox News, which featured his unique insights on artificial intelligence and how it can best be used by human beings. Contrary to much of the present alarmism over AI systems like ChatGPT, Marks remains level-headed about AI’s place in today’s world, pushing back against “sci-fi” apocalypticism while at the same time addressing real issues and concerns. Marks said, “No, AI is not going to take over. You’re not going to be its pet in the future. It’s going to be a tool which disrupts society, but we’re still going to be in charge.” -Scott Whitlock, AI expert warns of too much ‘hype’: Humans Read More ›


We Can’t Build a Hut to the Moon

The history of AI is a story of a recurring cycle of hype and disappointment

Once upon a time there lived a tribe on the plains. They were an adventurous tribe, constantly wanting to explore. At night they would watch the moon drift lazily overhead and grow curious. How could they reach the moon? The moon was obviously higher than their huts, and standing on the highest hut, no one could reach it. At the same time, standing on the hut got them closer to the moon. So they decided to amass all their resources and build a gigantic hut, reasoning that if standing on a short hut got them closer to the moon, then standing on a gigantic hut would get them closer still. Eventually the tribe ran out of mud and Read More ›


Is the “Singularity” Already Here?

Two experts in the tech field warn that the singularity, as we've come to imagine it, might be right around the corner

The singularity refers to the future point when the human and the machine will merge––when the personal and technological distinction breaks down entirely. It sounds dystopian and far off. But two experts in the tech field warn that the singularity, as we’ve come to imagine it, may be right around the corner. Writing for The Conversation, John Kendall Hawkins and Sandy Boucher comment, To understand why this isn’t the stuff of fairy tales, we need only look as far as recent developments in brain-computer interfaces (BCIs). BCIs are a natural beginning to the singularity in the eyes of many futurists, because they meld mind and machine in a way no other technology so far can. Elon Musk’s company Neuralink is seeking permission from the US Food Read More ›


ChatGPT and Personal Consciousness

AI vs. the human voice in literature and the arts

This week, Peter Biles, Writer & Editor for Discovery Institute’s Center for Science & Culture, wrote a piece for Salvo on ChatGPT and the uniqueness of the human voice in literature and the arts. Biles cites Christina Bieber Lake, professor of English at Wheaton College, from her book Beyond the Story: American Literary Fiction and the Limits of Materialism. Bieber Lake pushes back against the reductionistic worldview of Darwinistic materialism, appealing to the personal nature of the human being and the relationships we share together. Since a computer lacks personal consciousness, it cannot create meaningful literature, which always involves two persons––one person speaking to another. Biles also cites Robert J. Marks’s essential book on the topic Read More ›


AI is Old News, Says Tech Consultant

AI actually dates back to the 1950s. It is not new, says Funk

This week, Robert J. Marks sat down with technology consultant and retired professor Jeffrey Funk, who contributes often to Mind Matters, usually in tandem with Gary Smith. Marks and Funk talked about tech startups, where the industry is headed, and the exaggerated hype that currently attends the discourse over AI. Funk talked about the various stages of AI development. “AI is not new,” he said. “AI is 70 years old. ChatGPT and other generative AI models are based on neural networks, which have become economical through Moore’s Law, through this incredible increase in computing power that has been going on since the 1950s. But it’s slowed dramatically.” -Jeffrey Funk, Jeffrey Funk on AI, Startups, and Big Tech | Mind Matters Read More ›


Jeffrey Funk on AI, Startups, and Big Tech

In this podcast episode, technology consultant and author Jeffrey Funk joins Robert J. Marks to talk about the artificial intelligence industry, how it’s used by Big Tech, and the exaggerated hype surrounding AI. How do we respond to AI when technology is changing every year? Additional Resources


You Too, Snapchat? Another AI Bot Hits the Scene

"My AI" is eerily human, like the Bing bot, and just as inappropriate

Snapchat introduced a new feature in its app: an AI chatbot “friend” called “My AI.” (Just what lonesome teens need.) We’ve already seen the rogue behavior of Bing’s chatbot, which, in conversation with a New York Times tech journalist, dubbed itself “Sydney” and started beseeching its human counterpart to leave his wife and fall in love with it. Romantic, right? Not so much. The journalist left the experience with the creepy sense that AI had just crossed a sensitive boundary, and that tech companies need to get better at controlling the unpredictable beast they’ve unleashed. “My AI” Gives Shady Advice to Kids Just a couple of weeks later and here we are with AI making inroads into an app used Read More ›


Love Thy Robot as Thyself

Academics worry about AI feelings, call for AI rights

Riffing on the popular fascination with AI (artificial intelligence) systems ChatGPT and Bing Chat, two authors in the Los Angeles Times recently declared: We are approaching an era of legitimate dispute about whether the most advanced AI systems have real desires and emotions and deserve substantial care and solicitude. The authors, Prof. Eric Schwitzgebel at UC Riverside, and Henry Shevlin, a senior researcher at the University of Cambridge, observed AI thinkers saying “large neural networks” might be “conscious,” the sophisticated chatbot LaMDA “might have real emotions,” and ordinary human users reportedly “falling in love” with chatbot Replika.  Reportedly, “some leading theorists contend that we already have the core technological ingredients for conscious machines.”  The authors argue that if or when Read More ›


Observing and Communing

What human art and literature do that AI can't

AI image generators like Midjourney or DALL-E are generally adept at rendering the human form accurately. Copyright, job displacement, and the general degradation of the visual arts are ongoing concerns for many artists and practitioners. However, a new New Yorker article by Kyle Chayka identifies a noticeable flaw in AI artwork: human hands. Missing the Big Picture Chayka begins by recalling an art class where he was asked to draw his own hand. It’s an assignment for beginners, one that tempts the novice to focus more on the specific contours of the hand than on its overall structure and form. The forest gets lost in the trees, so to speak. Read More ›


Learning to Communicate

Why writing skills are so important, especially in today's artificial world

Educators have been shaken by fears that students will use ChatGPT and other large language models (LLMs) to answer questions and write essays. LLMs are indeed astonishingly good at finding facts and generating coherent essays — although the alleged facts are sometimes false and the essays are sometimes tedious BS supported by fake references. I am more optimistic than most. I am hopeful that LLMs will be a catalyst for a widespread discussion of our educational goals. What might students learn in schools that will be useful long after they graduate? There are many worthy goals, but critical thinking and communication skills should be high on any list. I’ve written elsewhere about how critical thinking abilities are important for students Read More ›


Robert J. Marks on The Laura Ingraham Show

In response to those who believe AI will take over the world, Marks says, "Look at history."

Robert J. Marks, director of Discovery Institute’s Walter Bradley Center, recently appeared on a podcast episode with Fox News host Laura Ingraham to talk about artificial intelligence, tech, and Dr. Marks’s book Non-Computable You: What You Do That AI Never Will. Ingraham prefaced the conversation with some thoughts on the rapidly evolving technological world we find ourselves in, and the changes such developments are inflicting on society. In response to the futurism and unbounded optimism in AI systems like ChatGPT that many modern figures hold, Marks said that what computers do is strictly algorithmic: “This leads us to the idea of whether or not there are non-computable characteristics of human beings, and I think there is growing evidence that there Read More ›