Mind Matters Natural and Artificial Intelligence News and Analysis



Why GPT-3 Can’t Understand Anything

Without long-term memory, human conversation becomes impossible

There is a mathematical reason why machine learning systems like GPT-3 are incapable of understanding: machine learning has no memory. It is just probabilistic associations. If there is only a 10% chance of going off topic at each exchange, then after just seven exchanges there is a greater than 50% chance the machine learning model has gone off topic. When prediction is based only on probabilities, the likelihood of a misprediction compounds exponentially with each step. A long-term memory is needed to maintain long-term coherence. GPT-3 is essentially a sophisticated Markov process, and what is important about a Markov process is that the next step depends only on…
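A quick back-of-the-envelope check of the arithmetic above (a toy model; the 10% per-exchange drift rate is the article's illustrative figure, not a measured property of GPT-3):

```python
# Toy model of the compounding-drift arithmetic: assume each exchange has
# an independent 10% chance of going off topic.
p_off = 0.10
p_stay = 1.0 - p_off

for n in range(1, 8):
    on_topic = p_stay ** n  # probability all n exchanges stay on topic
    print(f"after {n} exchanges: P(still on topic) = {on_topic:.3f}")

# After 7 exchanges, 0.9**7 is about 0.478, i.e. the chance the
# conversation has drifted at least once now exceeds 50%.
```

Six exchanges leave a 0.9**6 ≈ 0.531 chance of coherence; the seventh is where the odds tip past 50%, matching the claim above.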


AI Companies Are Massively Faking the Loophole in the Turing Test

I propose the Turing Test be further strengthened by presuming a chatbot is human until proven otherwise

Computer pioneer Alan Turing posed the question: how do we know whether an AI has human-like intelligence? He offered his famous Turing test: if human judges cannot differentiate the AI from a human, then it has human-like intelligence. His test has spawned a number of competitions in which participants try to fool judges into thinking that a chatbot is really a human. One of the best-known chatbots was Eugene Goostman, which fooled the judges into thinking it was a 13-year-old boy, mostly by indirection and other distraction techniques that avoid the sort of in-depth questioning that exposes a chatbot's lack of understanding. However, there is a loophole in this test. Can you spot the loophole? What better…


Soylent AI is…people!

OpenAI advertises itself as AI-powered, but at the end of the day, the system is human-powered

In the sci-fi movie “Soylent Green,” the big reveal is that a food called soylent green is actually made from human beings, hence the catchphrase “soylent green is people.” Likewise, as I discovered from a recent exchange with OpenAI’s GPT-3, “soylent AI is people.” GPT-3 is the product of the AI company OpenAI. The company made headlines in 2019 with the claim that its AI model was too dangerous to release publicly. OpenAI is not a mere research company. While its publicly stated goal is fairly modest – “Aligning AI systems with human intent” – its CEO Sam Altman has bigger plans. He left his role as president of Y Combinator, one of Silicon Valley’s most successful venture capital…


The AI Illusion – State-of-the-Art Chatbots Aren’t What They Seem

GPT-3 is very much like a performance by a good magician

Artificial intelligence is an oxymoron. Despite all the incredible things computers can do, they are still not intelligent in any meaningful sense of the word. Decades ago, AI researchers largely abandoned their quest to build computers that mimic our wondrously flexible human intelligence and instead created algorithms that were useful (i.e., profitable). Despite this understandable detour, some AI enthusiasts market their creations as genuinely intelligent. For example, a few months ago, Blaise Aguera y Arcas, the head of Google’s AI group in Seattle, argued that “statistics do amount to understanding.” As evidence, he cites a few exchanges with Google’s LaMDA chatbot. The examples were impressively coherent but they are still what Gary Marcus and Ernest Davis characterize as “a fluent spouter of bullshit” because computer algorithms…


Can AI Really Be “Slightly Conscious”? Can Anyone?

It’s rare to see popular media actually call out nonsense about artificial intelligence. Here’s what it looks like when it happens

On February 9, Ilya Sutskever, co-founder of fake text generator OpenAI, made a claim that was frothy even for Twitter: “it may be that today’s largest neural networks are slightly conscious.” (@ilyasut, February 9, 2022) Well, “slightly conscious” is like being “slightly pregnant” or “slightly dead.” While Sutskever didn’t name any specific developments, he was likely referring to huge natural language processing systems like OpenAI’s enormous GPT-3, which can translate, answer questions, fill in missing words, and generate fake news. No thought process is involved. The system approximates vast masses of actual instances of language use. The more stereotyped the language use is, the easier it is…


Chatbots: Still Dumb After All These Years

Intelligence is more than statistically appropriate responses

In 1970, Marvin Minsky, recipient of the Turing Award (“the Nobel Prize of Computing”), predicted that within “three to eight years we will have a machine with the general intelligence of an average human being.”  Fifty-two years later, we’re still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world. As Richard Feynman famously explained, there is a fundamental difference between labeling things and understanding them: [My father] taught me “See that bird? It’s a brown-throated thrush, but in Germany it’s called a halsenflugel, and in Chinese they call it a chung ling and even…


The 2020 AI Dirty Dozen Hyped Stories: Countdown by Bradley Center Brain Trust Members

Is AI fundamentally flawed? Can Elon Musk merge man with machines? Will there ever be self-driving cars? Join us as we revisit the top 12 most over-hyped stories in artificial intelligence from 2020 with Robert J. Marks, Jonathan Bartlett, and Eric Holloway. Show Notes 00:01:10 | Introducing Jonathan Bartlett 00:01:37 | Introducing Dr. Eric Holloway 00:02:00 | #12: “The way…


The Word “AI” Has Become a Marketing Ploy

Think twice before investing in a business that uses the word "AI" without further explanation

Justin Wang received a bachelor’s degree from Murdoch University in 2012 with a grade of 83.7% and a master’s degree in Information Technology Management from the University of Sydney in 2016 with a grade of 82.5%. In January 2017, he founded a Singapore-based company with the mysteriously cool name Scry in order to “manage information technology to achieve business goals, as well as – and perhaps more importantly – how it can be wielded to disrupt existing value networks.” What’s behind the mystery and jargon? It turns out that Scry is a “social forecasting platform.” Users join for free and can enter their personal estimates of the probabilities that certain events will happen, with Scry calculating the average probability. For example, one question is,…
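The aggregation mechanism the article describes can be sketched in a few lines (a hypothetical illustration of averaging user-submitted probabilities; `crowd_forecast` and the sample numbers are invented, not Scry's actual code):

```python
# Hypothetical sketch of a "social forecasting" aggregator: each user
# submits a probability estimate for an event, and the platform reports
# the unweighted crowd average.
def crowd_forecast(estimates):
    """Return the mean of user-submitted probability estimates (each 0..1)."""
    if not estimates:
        raise ValueError("no estimates submitted")
    return sum(estimates) / len(estimates)

user_estimates = [0.30, 0.55, 0.40, 0.65]  # four users' probabilities
print(crowd_forecast(user_estimates))      # prints 0.475
```

Whether a simple average of self-selected guesses produces a useful forecast is, of course, exactly the question the article raises.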


Artificial Unintelligence

The failure of computer programs to recognize a rudimentary drawing of a wagon reveals the vast differences between artificial and human intelligence

In 1979, when he was just 34 years old, Douglas Hofstadter won a National Book Award and Pulitzer Prize for his book, Gödel, Escher, Bach: An Eternal Golden Braid, which explored how our brains work and how computers might someday mimic human thought. He has spent his life trying to solve this incredibly difficult puzzle. How do humans learn from experience? How do we understand the world we live in? Where do emotions come from? How do we make decisions? Can we write inflexible computer code that will mimic the mysteriously flexible human mind?  Hofstadter has concluded that analogy is “the fuel and fire of thinking.” When humans see, hear, or read something, we can focus on the most salient features, its “skeletal essence.”…


AI Dirty Dozen 2020 Part III

There are many forces that shape the hyped AI stories we read. Media is everywhere and competition is fierce. Articles with provocative headlines and content are clickbait for the browsing consumer. We’re going to count down the AI Dirty Dozen: the top twelve AI hyped stories for 2020. Join Dr. Robert J. Marks as he discusses some of these stories with…


Can a Computer Write Your Paper for You Someday Soon?

GPT-3 recently came up with a paragraph that—a pop psychologist agreed—sounded just like him

This summer the OpenAI lab, backed by $1 billion in funding from Microsoft, Google, and Facebook, released an updated version of GPT-3, a text generator that produces convincing sentences by analyzing, among other online sources, Wikipedia, countless blog posts, and thousands of digital books. According to a recent story by Cade Metz in the New York Times, one GPT-3 programmer decided to target pop psychologist Scott Barry Kaufman. Could GPT-3 really come up with a paragraph that sounded just like him? Kaufman himself (pictured) was really impressed with this one, on the subject of becoming more creative: I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more…


Can a Machine Really Write for the New Yorker?

If AI wins at chess and Go, why not? Then someone decided to test that…

Tech philosopher and futurist George Gilder (pictured) has a new book out, Gaming AI. Short and sweet, it explains how artificial intelligence (AI) will—and won’t—revolutionize the economy and human life. Get your free digital copy here. And now, below is a short piece he wrote, unpacking one of the book’s themes—the claim that AI can do anything that humans can do. Find out why he says no: Ilya Sutskever (pictured) may be the smartest man in the world you have never heard of. No sweat, I hadn’t heard of him either. Still under 40, he’s part of the all-male Google mindfest around “Google Brain.” His IQ honed at Open University of Israel and mentored by Artificial Intelligence (AI) pioneer Geoffrey…


Built to Save Us from Evil AI, OpenAI Now Dupes Us

When combined with several metric tons of data, its new GPT-3 sometimes looks like it is “thinking.” No, not really

OpenAI started life in 2015 as a non-profit organization whose mission was to safeguard humanity from malevolent artificial intelligence (AI). The founders’ goal was to ensure that when superhuman AI arrived, its inborn purpose would be to serve humanity rather than subjugate it. In 2019, OpenAI transitioned to a for-profit company based in San Francisco and secured a one-billion-dollar investment from Microsoft. Things seem to have moved on from there. There is a real question as to whether superhuman AI is even possible, as we have pointed out repeatedly here at Mind Matters News. While some of these AI feats seem impressive, oftentimes when you look under the hood, what you find is a very expensive party trick or a staged demo.…