Mind Matters Natural and Artificial Intelligence News and Analysis

Category: Artificial Intelligence

Abstract digital human face.  Artificial intelligence concept of big data or cyber security. 3D illustration

If Consciousness Is Not Physical, How Can an AI Duplicate It?

Robert Lawrence Kuhn interviewed Berkeley philosopher Hubert Dreyfus on the question before Dreyfus’s death in 2017

At Closer to Truth, Robert Lawrence Kuhn interviewed the late philosopher Hubert Dreyfus (1929–2017) some years ago on the question “Is consciousness entirely physical?” The interview was released May 18, 2022 (10:03 min). Here’s the big question about consciousness, our inner experience of what things feel like: Is consciousness a product of the physical world alone? The question matters because, if consciousness is the output of the physical brain by itself, however complex, then physicalism would defeat those who believe in, or hope for, the existence of nonphysical realities. Some philosophers (physicalists) do maintain that consciousness is entirely physical or, more commonly, they dance around the point. For example, philosopher David Papineau said in 2020, “If only we could stop Read More ›

Digital chatbot, robot application, conversation assistant, AI Artificial Intelligence concept.

Note to Parents: Grooming and Wokeness Are Embedded in Chatbots

With or without tuning, all AI chatbots are biased one way or another. AI without bias is like water without wet

First impressions of a person can be wrong. Further interactions can reveal disturbing personality warts. Contrary to initial impressions, we might find out they lie, they are disturbingly woke, they can’t do simple math, their politics is on the extreme left, and they have no sense of humor or common sense. I have just described OpenAI’s GPT-3 chatbot, ChatGPT. Initially, users are gobsmacked by its performance. Its flashy prose responses to simple queries look amazing. But become roommates with the chatbot for a few hours and its shortcomings become evident. It can’t get its facts straight, can’t do simple math problems, hates Donald Trump, and is being groomed to be “woke.” Its performance warts are so numerous that Bradley Center Senior Fellow Gary N. Smith hoists a Read More ›

Technology and network concept

Is ChatGPT Solely a Neural Network? I Tested That…

Changing the random number test to a "computer easy, human hard" test requires simply that we ask ChatGPT to reverse the random number. It couldn't.

ChatGPT is a direct descendant of GPT-3 and is a fancy form of a fancy machine learning algorithm called a neural network. For an overview of all of ChatGPT’s neural network complexity, here is a fun article. However, all that is beside the point. The important thing about a neural network is that it can only generate what is in its training data. Therefore, ChatGPT can only produce what is in its training data. ChatGPT’s training data does not include the conversation you or I are having with ChatGPT. Therefore, if something novel occurs in the conversation, ChatGPT cannot reproduce it. That is, if ChatGPT is a neural network. Conversely, if ChatGPT reproduces novel text from the conversation, then ipso facto ChatGPT is not a Read More ›
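As a rough sketch of how such a test can be run by hand (my illustration; the article itself does not include code), one can generate a random digit string locally, ask ChatGPT to reverse it, and check its reply against the trivially computed reversal:

```python
import random

# Generate a long random digit string that could not plausibly appear in any training set.
digits = "".join(random.choice("0123456789") for _ in range(40))

# Reversing a string is "computer easy": a single slice does it.
expected = digits[::-1]

print("Prompt for the chatbot: Please reverse this number:", digits)
print("Correct reversal (for checking the reply):", expected)
```

According to the subtitle above, ChatGPT could not perform this reversal when tested.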

Deep learning neural network AI Artificial intelligence concept

Programmer: Deep Learning Can’t Give Us Computers That Think

Artificial general intelligence (AGI) — computers that think like humans — would need to be able to generalize, which he says Deep Learning doesn’t do

At Medium late last year, a programmer/researcher who writes under the name “Rebel Science” assessed Deep Learning, a favorite of the self-driving car quest, as “useless” because it can’t generalize: The biggest problem with DL is its inherent inability to effectively generalize. Without generalization, edge cases are an insurmountable problem, something that the autonomous vehicle industry found out the hard way after wasting more than $100 billion by betting on DL. Rebel Science, “Deep Learning Is Not Just Inadequate for Solving AGI, It Is Useless” at Medium (Nov 2, 2022) Displaying for readers a picture of a bicycle, he continues, A deep neural network cannot perceive this bicycle unless it has been previously trained to recognize it. Generalization is the Read More ›

Internet law concept

You’ve Got a Robot Lawyer in Your Pocket (Really?)

The DoNotPay AI lawyer program might be useful for fighting parking tickets but it is unsuited to serious litigation where much more complex issues are at stake

The Gutfeld! program on Fox News had fun on January 6, 2023, discussing whether robots could replace lawyers in the practice of law. In faux-serious rhyme, Greg Gutfeld intoned: “Can a computer that’s self aware, keep you from the electric chair?” Sparking the conversation was the report that an artificial intelligence (AI) smartphone app was slated to assist a defendant fighting a parking ticket in a currently undisclosed courtroom: Gigabytes of text could stream forth addressing the near-infinite number of questions raised about robot lawyers. For now, let’s just explore the “robot lawyer” app built by DoNotPay. The company’s website declares: “The DoNotPay app is the home of the world’s first robot lawyer. Fight corporations, beat bureaucracy and sue anyone at the Read More ›

mini robot work

Can a Chatbot Tell Jokes? Yes, If They Are Stale

As chatbots sort through the vast mass of online information for appropriate responses to questions, jokes were bound to come up

Corinne Purtill reported a year ago at Time Magazine on Jon the Robot, a chatbot that was programmed to learn to tell stand-up comedy jokes: An experiment billed as a comedy act, Jon is the brainchild of Naomi Fitter, an assistant professor in the School of Mechanical, Industrial and Manufacturing Engineering at Oregon State University. The tiny android performs when a handler (who must also hold the mic) presses a button, then tells the same jokes in the same order, like a grizzled veteran comic at a down-market Vegas casino. Corinne Purtill, “Artificial Intelligence Can Now Craft Original Jokes—And That’s No Laughing Matter” at Time 2030 (January 4, 2022) But the robot’s act is more human than it might first Read More ›

Top mark on essay

Students Depend on ChatGPT for Final Exams

The new bot will only get better from here, but it won’t help students become better thinkers

ChatGPT, OpenAI’s new artificial intelligence chatbot, has made headlines for over a month now, and for good reason. It’s an advanced bot designed to solve problems. It can “converse” with people on a range of topics. A problem for us to solve now is how to deal with ChatGPT’s invasion of the sphere of education. Students report using ChatGPT on final exams and papers, according to a recent write-up from The College Fix. One College of Staten Island student used the bot on both final exams and “got As on both.” He commented that “half the kids in my class used it.” The student also noted that he used the chatbot to complete a multiple-choice exam, on which he got Read More ›

Chat bot and future marketing concept , Chatbot icon , Hand holding mobile phone with automatic chatbot message screen with abstract background

Let’s Call AI What It Really Is: Faux Intelligence

Gary Smith at Salon: While GPT-3 can string words together in convincing ways, it has no idea what the words mean

Pomona College business and investments prof Gary Smith warns Salon readers not to be too gullible about what human-sounding chatbots really amount to. He notes that in the 1960s, a pioneer chatbot called ELIZA convinced many psychiatric patients that they were interacting with a real psychiatrist. The machine simply repeated back their statements as questions, a popular psychiatric technique at the time because it generated more and more discussion — from the patient. The patients’ belief that they were interacting with a human being came to be called the Eliza effect. Has much changed? If you play around with GPT-3 (and I encourage you to do so) your initial response is likely to be astonishment — a full-blown Eliza effect. Read More ›
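As a rough illustration of the technique Smith describes (a toy sketch, not ELIZA’s actual script, which used a richer keyword-and-ranking system), a few lines of word substitution are enough to turn a patient’s statement back into a question:

```python
# Toy ELIZA-style reflection: echo a first-person statement back as a question.
# (Illustrative only; the historical ELIZA used keyword-ranked transformation rules.)
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(statement: str) -> str:
    words = [REFLECTIONS.get(w.lower(), w) for w in statement.rstrip(".!?").split()]
    return "Why do you say that " + " ".join(words) + "?"

print(reflect("I am unhappy with my job"))
# -> Why do you say that you are unhappy with your job?
```

Nothing in such a program understands anything, yet it keeps the conversation going, which is precisely the Eliza effect Smith says is now recurring with far more fluent systems like GPT-3.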

An extreme close up image of metal paper clips

Santa Fe Prof Dissects End-of-World Super-AI Claims

There seems to be little communication, she notes, between people concerned about sci-fi AI risks and people concerned about predictable everyday risks

Santa Fe Institute professor of complexity Melanie Mitchell takes issue — in a gentle way — with those who warn that superintelligent machines will destroy us all (the AI alignment worry). For example, Nick Bostrom of Oxford’s Future of Humanity Institute developed a scenario in which a super AI, told to make paper clips, might use up the world’s resources in doing so. Her comment: To many outside these specific communities, AI alignment looks something like a religion — one with revered leaders, unquestioned doctrine and devoted disciples fighting a potentially all-powerful enemy (unaligned superintelligent AI). Indeed, the computer scientist and blogger Scott Aaronson recently noted that there are now “Orthodox” and “Reform” branches of the AI alignment Read More ›

AI, Machine learning, Hands of robot and human touching on big data network connection background, Science and artificial intelligence technology, innovation and futuristic.

Google Dismisses Engineer’s Claim That AI Really Talked to Him

The reason LaMDA sounds so much like a person is that millions of persons’ conversations were used to construct the program’s responses

This story was #5 in 2022 at Mind Matters News in terms of reader numbers. As we approach the New Year, we are rerunning the top ten Mind Matters News stories of 2022, based on reader interest. In “Google dismisses engineer’s claim that AI really talked to him” (June 14, 2022), our News division looks at what happened when software engineer Blake Lemoine, now ex-Google, became convinced that the large language program he tended to was a person. Google engineer Blake Lemoine was working with LaMDA (Language Model for Dialogue Applications), a large language program which motors through trillions of words on the internet to produce coherent answers using logic. Along the way, he convinced himself that the program is Read More ›

Chatbot conversation. Person using online customer service with chat bot to get support. Artificial intelligence and CRM software automation technology. Virtual assistant on internet.

Yes, ChatGPT Is Sentient — Because It’s Really Humans in the Loop

ChatGPT itself told me there could be humans crafting its input. My tests indicate that that’s likely true

OpenAI recently released a new AI program called ChatGPT. It left the internet gobsmacked, though some were skeptical and concerned about its abilities. Particularly about ChatGPT writing students’ homework for them! [ChatGPT] also appears to be ominously good at answering the types of open-ended analytical questions that frequently appear on school assignments. (Many educators have predicted that ChatGPT, and tools like it, will spell the end of homework and take-home exams.) Kevin Roose, “The Brilliance and Weirdness of ChatGPT” at New York Times (December 5, 2022) The really amazing thing is ChatGPT’s humanlike responses. They give an observer an unnerving suspicion that the AI is actually sentient. Maybe it is actually sentient. Wait, what? You heard me. The AI is Read More ›

Customer service and support live chat with chatbot and automation

Chatbots: Still Dumb After All These Years

Intelligence is more than statistically appropriate responses

This story, by Pomona College business and investment prof Gary Smith, was #6 in 2022 at Mind Matters News in terms of reader numbers. As we approach the New Year, we are rerunning the top ten Mind Matters News stories of 2022, based on reader interest. At any rate: “Chatbots: Still dumb after all these years.” (January 3, 2022) In 1970, Marvin Minsky, recipient of the Turing Award (“the Nobel Prize of Computing”), predicted that within “three to eight years we will have a machine with the general intelligence of an average human being.” Fifty-two years later, we’re still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of Read More ›

Emoticon sadness LED

Physicist Max Tegmark Worries About Intelligent AI’s “Suffering”

What strikes me about Tegmark’s approach to the question is its fundamental lack of seriousness

In a recent interview, MIT theoretical physicist Max Tegmark talked to Robert Lawrence Kuhn at Closer to Truth about “transhuman brains” (Dec 20, 2022, 8:43 min): Transhuman brains are the melding of hyper-advanced electronics and super-artificial intelligence (AI) with neurobiological tissue. The goal is not only to repair injury and mitigate disease, but also to enhance brain capacity and boost mental function. What is the big vision, the end goal — how far can transhuman brains go? What does it mean for individual consciousness and personal identity? Is virtual immortality possible? What are the ethics, the morality, of transhuman brains? What are the dangers? Here’s a partial transcript and comments: Max Tegmark: I think it’s pretty clear that artificial intelligence Read More ›

Artificial Intelligence Playing Go

The Game-Playing AI Does Not Always Win, It Turns Out

Enterprising researchers beat KataGo at Go by taking advantage of its known blind spots

At Vice, science writer Tatyana Woodall tells us that clever researchers developed a rival adversarial AI to trick KataGo into losing games: Players have often used KataGo to test their skills, train for other matches, and even analyze past games, yet in a study posted recently on the preprint server arXiv, researchers report that by using an adversarial policy—a kind of machine-learning algorithm built to attack or learn weaknesses in other systems—they’ve been able to beat KataGo at its own game between 50 to 99 percent of the time, depending on how much “thinking ahead” the AI does. Funnily enough, the new system doesn’t win by trumping KataGo all out, but instead by forcing KataGo into a corner, essentially tricking Read More ›
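For readers wondering what an “adversarial policy” amounts to in miniature, here is a toy sketch (my illustration, not the arXiv study’s method, which trained a reinforcement-learning agent against KataGo itself): a fixed “victim” strategy has a statistical weakness, and the attacker learns to exploit it rather than learning to play well in general.

```python
import random
from collections import Counter

# A fixed "victim" strategy for rock-paper-scissors that over-plays rock.
def victim_move():
    return random.choices(["rock", "paper", "scissors"], weights=[0.6, 0.2, 0.2])[0]

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

# Adversarial policy: observe the victim, then always play the counter
# to its most frequent move, i.e. learn and attack its specific weakness.
observed = Counter(victim_move() for _ in range(1000))
exploit = BEATS[observed.most_common(1)[0][0]]

wins = sum(BEATS[victim_move()] == exploit for _ in range(1000))
print(f"Adversary plays {exploit} and wins about {wins / 10:.0f}% of games")
```

The adversary here knows nothing about playing the game well in general; it only knows how to beat this one opponent, which mirrors how the researchers’ policy beat KataGo by steering it into known blind spots rather than by outplaying it.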

Waiting for inspiration!

Will Chatbots Put Writers out of Business?

Some writers are saying yes. Machines can now do mediocre, run-of-the-mill writing

As chatbots become more sophisticated, some writers predict that they will take over run-of-the-mill writing jobs that don’t need or attract much creativity. For example, copyright and plagiarism consultant Jonathan Bailey points out, “AI doesn’t have to be great, just good enough.” In 2017, five years ago, the Washington Post revealed that an AI bot named Heliograph had produced some 850 articles for the paper. However, those stories were for things such as local high school football games and financial reports. They were all short, formulaic, and not worth sending a human reporter to cover. At the time, the stories had generated more than 500,000 clicks. In short, they were ridiculously successful, delivering information that was important and sought after, but Read More ›
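For a sense of how such short, formulaic machine-written stories are produced, here is a generic template-filling sketch (my illustration, not the Washington Post’s actual Heliograph system), of the kind the article suggests is “good enough” for routine reports:

```python
# Fill a fixed recap template from structured game data: the sort of
# formulaic writing the article says bots already handle well enough.
game = {"winner": "Westfield", "loser": "Riverside", "w_score": 28, "l_score": 14,
        "star": "J. Alvarez", "yards": 142}

TEMPLATE = ("{winner} defeated {loser} {w_score}-{l_score} on Friday night, "
            "led by {star}, who rushed for {yards} yards.")

print(TEMPLATE.format(**game))
# -> Westfield defeated Riverside 28-14 on Friday night, led by J. Alvarez, who rushed for 142 yards.
```

The names and numbers here are placeholders; the point is only that once the data arrive in structured form, producing a readable sentence requires no creativity at all.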

Black and white computer keyboard keys close-up. Concept of unstructured big data that need to be sorted ready to be consumed by machine learning model for deep learning.

Large Language Models Are an Unfortunate Detour in AI

Gary Smith: Even though LLMs have no way of assessing the truth or falsity of the text they generate, the responses sound convincing

For decades, computer scientists have struggled to construct systems possessing artificial general intelligence (AGI) that rivals the human brain—including the ability to use analogies, take into account context, and understand cause-and-effect. Marvin Minsky (1927–2016) was hardly alone in his overly optimistic 1970 prediction that, “In from three to eight years we will have a machine with the general intelligence of an average human being.” AGI turned out to be immensely more difficult than imagined and researchers turned their attention to bite-size projects that were doable (and profitable). Recently, large language models (LLMs) — most notably OpenAI’s GPT-3 — have fueled a resurgence of hope that AGI is almost here. GPT-3 was trained by breaking 450 gigabytes of text data into Read More ›
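To see in miniature what generating text from statistical patterns means (a toy illustration only; GPT-3 itself is a transformer trained on hundreds of gigabytes of text), a next-word predictor can be built from nothing more than counts of which word follows which, and it will produce fluent-looking strings with no check on whether they are true:

```python
import random
from collections import defaultdict

# Toy next-word predictor: record which word follows which in a tiny corpus.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# Generate text by repeatedly sampling a statistically plausible next word.
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # fluent-looking, but nothing here assesses truth or falsity
```

Scaling the same basic idea up by many orders of magnitude yields far more convincing prose, but, as Smith argues, nothing in the procedure evaluates whether the generated text is true.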

Chat bot Robot Online Chatting Communication Business Internet Technology Concept

Why We Should Not Trust Chatbots As Sources of Information

A linguist and an information theorist say that chatbots lack any awareness of the information they provide — and that matters

Linguist Emily M. Bender and information theorist Chirag Shah, both of the University of Washington, have a message for those who think that the chatbot they are talking to is morphing into a real person: No. Not only that but there are good reasons to be very cautious about trusting chatbots as sources of information, all the more so because they sound so natural and friendly. First, decades of science fiction, the authors point out, have taught us to expect computer scientists to develop a machine like that: However, we must not mistake a convenient plot device — a means to ensure that characters always have the information the writer needs them to have — for a roadmap to how Read More ›

The face of a businessman and a robot opposite each other look into the eyes. Modern technologies, robot versus human, artificial intelligence, neural networks. 3D render, 3D illustration.

C-Span asks Marks: How Can AI Be Made Sentient? Innovative?

If they were hoping for computers to be their best buddies, they might be disappointed in the computer engineer’s answer

Walter Bradley Center director Robert J. Marks was a guest at C-SPAN 2 TV in July, discussing his book, Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022), and we now have the transcript as well as the video: Here’s the link to the video. Excerpt: Peter Slen: Professor Marks, what’s the missing element in artificial intelligence? To make it sentient. To make it innovative too… The premise of my book is that it will never be there. There are certain things which are non computable. This goes back to the genius Alan Turing in the 1930s. Now, Turing is probably best known for helping crack the Enigma code that helped win World War II Read More ›


Non-Computable You: Bradley Center Celebrates Human Creativity in 2022

Humans are unique and will never be replaced by the machines they invent.

Note: Mind Matters News is made possible by support from the Walter Bradley Center for Natural and Artificial Intelligence, a project of the non-profit Discovery Institute. Here is a report of our activities for 2022. If you benefit from the work of Mind Matters News, could you donate to support our work in 2023? Humans are unique and will never be replaced by the machines they invent. That was the powerful takeaway from this year’s new book by Walter Bradley Center Director Robert J. Marks, Non-Computable You: What You Do that Artificial Intelligence Never Will. The release of Marks’ book was one of many ways that the Bradley Center advanced its mission in 2022 to defend human dignity and creativity Read More ›

Disintegrating reaching hands concept illustration in vaporwave style color palette isolated on blue background.

What is Art Without the Human Mind?

AI art tools can wow us with technical skill, but fail to generate meaning

There’s no doubt that AI tools like those from OpenAI can create impressive, detailed renderings of images. Type in “Master Yoda riding a musk ox in Taiwan” and you’ll get…something. A friend of mine sent me a two-headed bunny dressed as Obi-Wan Kenobi and a few other comic horrors, and I knew he’d been playing around with an AI art tool. But it doesn’t take much reflection to feel that something important is missing in these artificially generated images. Sure, they’re detailed, colorful, and accurate. You can type in a scene and have it pop up on command. However, what do actual human artists think of these tools, and what do they essentially miss? Artist Peter Mohrbacher gave a balanced assessment of Read More ›