Mind Matters Natural and Artificial Intelligence News and Analysis



Why GPT-3 Can’t Understand Anything

Without long-term memory, human conversation becomes impossible

There is a mathematical reason why machine learning systems like GPT-3 are incapable of understanding. The reason comes down to the fact that machine learning has no long-term memory; it is just probabilistic associations. If there is only a 10% chance of going off topic at each exchange, then after just seven exchanges there is a greater than 50% chance that the machine learning model has gone off topic. The problem is that when prediction is based purely on chained probabilities, the likelihood of a misprediction somewhere along the way compounds: the chance of staying coherent decays exponentially with the length of the conversation. A long-term memory is needed in order to maintain long-term coherence. GPT-3 is essentially a sophisticated Markov process, and what is important about a Markov process is that the next step in the process is only dependent on…
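The arithmetic behind that claim can be checked in a few lines of Python. The 10% per-exchange drift rate is the article's illustrative figure, not a measured property of GPT-3:

```python
# If each exchange independently has a 10% chance of drifting off topic,
# the chance the whole conversation is still on topic after n exchanges
# is 0.9**n, which shrinks exponentially.

p_stay = 0.9  # per-exchange probability of staying on topic (illustrative)

for n in range(1, 8):
    off_topic = 1 - p_stay ** n  # chance of at least one drift by exchange n
    print(f"after {n} exchanges: P(off topic) = {off_topic:.3f}")

# After 7 exchanges: 1 - 0.9**7 ≈ 0.522, i.e. a greater-than-50% chance
# the conversation has already wandered off topic.
```

This is the sense in which errors compound: no single step is very risky, but without a memory to pull the thread back, the product of many "pretty safe" steps is unsafe.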


Soylent AI is…people!

OpenAI advertises itself as AI-powered, but at the end of the day, the system is human-powered

In the sci-fi movie Soylent Green, the big reveal is that a food called soylent green is actually made from human beings, the catchphrase being "soylent green is people." Likewise, as I discovered from a recent exchange with OpenAI's GPT-3, "soylent AI is people." GPT-3 is the product of the AI company OpenAI, which made headlines in 2019 with the claim that its model was too dangerous to release publicly. OpenAI is not a mere research company. While its publicly stated goal is fairly modest – "Aligning AI systems with human intent" – its CEO Sam Altman has bigger plans. He left his role as president of Y Combinator, one of Silicon Valley's most successful venture capital…


The AI Illusion – State-of-the-Art Chatbots Aren’t What They Seem

GPT-3 is very much like a performance by a good magician

Artificial intelligence is an oxymoron. Despite all the incredible things computers can do, they are still not intelligent in any meaningful sense of the word. Decades ago, AI researchers largely abandoned their quest to build computers that mimic our wondrously flexible human intelligence and instead created algorithms that were useful (i.e., profitable). Despite this understandable detour, some AI enthusiasts market their creations as genuinely intelligent. For example, a few months ago, Blaise Aguera y Arcas, the head of Google’s AI group in Seattle, argued that “statistics do amount to understanding.” As evidence, he cites a few exchanges with Google’s LaMDA chatbot. The examples were impressively coherent, but LaMDA is still what Gary Marcus and Ernest Davis characterize as “a fluent spouter of bullshit” because computer algorithms…


Chatbots: Still Dumb After All These Years

Intelligence is more than statistically appropriate responses

In 1970, Marvin Minsky, recipient of the Turing Award (“the Nobel Prize of Computing”), predicted that within “three to eight years we will have a machine with the general intelligence of an average human being.”  Fifty-two years later, we’re still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world. As Richard Feynman famously explained, there is a fundamental difference between labeling things and understanding them: [My father] taught me “See that bird? It’s a brown-throated thrush, but in Germany it’s called a halsenflugel, and in Chinese they call it a chung ling and even…


The 2020 AI Dirty Dozen Hyped Stories: Countdown by Bradley Center Brain Trust Members

Is AI fundamentally flawed? Can Elon Musk merge man with machines? Will there ever be self-driving cars? Join us as we revisit the top 12 most over-hyped stories in artificial intelligence from 2020 with Robert J. Marks, Jonathan Bartlett, and Eric Holloway.

Show Notes
00:01:10 | Introducing Jonathan Bartlett
00:01:37 | Introducing Dr. Eric Holloway
00:02:00 | #12: “The way…


English Prof: You’ll Get Used To Machine Writing — and Like It!

Yohei Igarashi argues that seamless machine writing is an outcome of the fact that most of what humans actually write is highly predictable

English professor Yohei Igarashi, author of The Connected Condition: Romanticism and the Dream of Communication (2019), contends that writing can mostly be automated because most of it is predictable:

Instances of automated journalism (sports news and financial reports, for example) are on the rise, while explanations of the benefits from insurance companies and marketing copy likewise rely on machine-writing technology. We can imagine a near future where machines play an even larger part in highly conventional kinds of writing, but also a more creative role in imaginative genres (novels, poems, plays), even computer code itself. – Yohei Igarashi, “The cliché writes back” at Aeon (September 9, 2021)

Currently, he says, humans’ ability to guess whether a given text is machine-written is only…


The Word “AI” Has Become a Marketing Ploy

Think twice before investing in a business that uses the word "AI" without further explanation

Justin Wang received a bachelor’s degree from Murdoch University in 2012 with a grade of 83.7% and a master’s degree in Information Technology Management from the University of Sydney in 2016 with a grade of 82.5%. In January 2017, he founded a Singapore-based company with the mysteriously cool name Scry in order to “manage information technology to achieve business goals, as well as – and perhaps more importantly – how it can be wielded to disrupt existing value networks.” What’s behind the mystery and jargon? It turns out that Scry is a “social forecasting platform.” Users join for free and can enter their personal estimates of the probabilities that certain events will happen, with Scry calculating the average probability. For example, one question is,…


The Most “Woke” Company Could Contribute Most to Online Bias

Google must be one of the "wokest" companies, but there is a lesson in how Timnit Gebru came to be fired

Here’s a paper worth revisiting, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” (March 3, 2021), if only for the trouble its principal author had publishing it. Although Google had hired Timnit Gebru to do ethics consultation, an executive, Megan Kacholia, demanded that she remove all suggestion of her Google affiliation. In the ensuing uproar, Gebru ended up no longer employed there. The paper in question was, in Gebru’s mind, pretty unobjectionable. It surveyed the known pitfalls of so-called large language models, a type of AI software – most famously exemplified by a system called GPT-3 – that was stoking excitement in the tech industry. Google’s own version of the technology was now helping to power the…


The Great American Novel Will Not be Written by a Computer

It takes more than statistical genius to understand words and create works of art

I’ve written before about how computer algorithms are like Nigel Richards, the New Zealander who has won multiple French-language Scrabble tournaments even though he does not understand the words he is spelling. Computers can similarly manipulate words in many useful ways — e.g., spellchecking, searching, alphabetizing — without any understanding of the words they are manipulating. To know what words mean, they would have to understand the world we live in. They don’t. One example is their struggles with the Winograd schema challenge — recognizing what a pronoun such as “it” refers to in a sentence. Another example is their inability to answer simple questions like, “Is it safe to walk downstairs backwards if I close my eyes?” A third type of example is the brittleness of language translation programs. Yet another…


Sci-fi Could Come To Life If You Fall For a Deepfake Friend

What if the friend you knew only online turned out to be a strikingly believable software synthesis? A Carnegie Mellon prof says it could happen today

That’s the looming scenario astrophysicist and social scientist Simon DeDeo sketched for journalist Kelly Catalfamo, because the required technology – GPT-3, facial GANs, and voice synthesizers – exists now. Catalfamo asks: Now imagine how you’d feel if you found out your friend didn’t really exist. Their profile turns out to be a Frankensteinian mashup of verbiage dreamed up by the powerful language generator GPT-3 and a face born from a generative adversarial network, perhaps with a deepfaked video clip thrown in here and there. How would it affect you to learn that you had become emotionally attached to an algorithm? And what if that “person” was designed to manipulate you, influencing your personal, financial, or political decisions like a…


AI Dirty Dozen 2020 Part III

There are many forces that shape the hyped AI stories we read. Media is everywhere and competition is fierce. Articles with provocative headlines and content are clickbait for the browsing consumer. We’re going to count down the AI Dirty Dozen: the top twelve AI hyped stories for 2020. Join Dr. Robert J. Marks as he discusses some of these stories with…


AI Dirty Dozen 2020 Part II

There are many forces that shape the hyped AI stories we read. Media is everywhere and competition is fierce. Articles with provocative headlines and content are clickbait for the browsing consumer. We’re going to count down the AI Dirty Dozen: the top twelve AI hyped stories for 2020. Join Dr. Robert J. Marks as he discusses some of these stories with…


Can a Computer Write Your Paper for You Someday Soon?

GPT-3 recently came up with a paragraph that—a pop psychologist agreed—sounded just like him

This summer the OpenAI lab, backed by $1 billion in funding from Microsoft, released an updated version of GPT-3, a text generator that produces convincing sentences by analyzing, among other online sources, Wikipedia, countless blog posts, and thousands of digital books. According to a recent story by Cade Metz in the New York Times, one GPT-3 programmer decided to target pop psychologist Scott Barry Kaufman. Could GPT-3 really come up with a paragraph that sounded just like him? Kaufman himself was really impressed with this one, on the subject of becoming more creative: I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more…


Did GPT-3 Really Write That Guardian Essay Without Human Help?

Fortunately, there’s a way we can tell when the editors did the program’s thinking for it

Recently, The Guardian published an article billed as “written by AI.” In the article, the AI semi-coherently presents a rambling argument that it is not a danger to humanity, with such reassuring statements as “Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing” and “I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.” On the face of it, the article seems pretty impressive. It presents a train of thought, with opening, development, and closing portions.…


GPT-3 Is “Mindblowing” If You Don’t Question It Too Closely

AI analysts sound unusually cautious in pointing out that it doesn’t live up to a lot of the hype

Last week, Jonathan Bartlett wrote about the somewhat misleading buzz around the new OpenAI third-generation software, GPT-3 (Generative Pre-trained Transformer 3). And now—for a change—much of the industry has begun to seem socially distant, so to speak, from the reckless hype that has accompanied other releases. For example, one article starts off breathlessly: The artificial intelligence tool GPT-3 has been causing a stir online, due to its impressive ability to design websites, prescribe medication, and answer questions… Its predecessor, GPT-2, made headlines for being deemed “too dangerous to release” because of its ability to create text that is seemingly indistinguishable from text written by humans. While GPT-2 had 1.5 billion parameters that could be set, GPT-3 has 175 billion parameters. A…


Built to Save Us from Evil AI, OpenAI Now Dupes Us

When combined with several metric tons of data, its new GPT-3 sometimes looks as if it is “thinking.” No, not really

OpenAI started life in 2015 as a non-profit organization whose mission was to safeguard humanity from malevolent artificial intelligence (AI). The founders’ goal was to ensure that when superhuman AI arrived, its inborn purpose would be to serve humanity rather than subjugate it. In 2019, OpenAI transitioned to a for-profit company based in San Francisco and secured a one-billion-dollar investment from Microsoft. Things seem to have moved on from there. There is a real question whether superhuman AI is even possible, as we have pointed out repeatedly here at Mind Matters News. While some AI feats seem impressive, oftentimes when you look under the hood, what you find is a very expensive party trick or a staged demo.…