Computer scientist and AI expert Selmer Bringsjord provides a wide-ranging discussion of artificial intelligence (AI) and its capabilities. Bringsjord, a professor at Rensselaer, addresses three features humans possess that, in his view, AI machines won’t be able to duplicate: consciousness, cognition, and genuine creativity.
The GPT-3 chatbot is an impressive piece of AI. Under the hood, GPT-3 is a transformer model that uses sequence-to-sequence deep learning to produce original text from an input sequence. In other words, GPT-3 is trained on how words are positionally related. The arrangement of words and phrases to create well-formed sentences in a language is called syntax; semantics is the branch of linguistics concerned with the meanings of words. GPT-3 trains on the syntax of its training data to learn to generate interesting responses to queries. That was the intent of the programmers. GPT-3 is not directly concerned with semantics. Given a tutorial on a topic from the web, for example, GPT-3 does not learn from the tutorial’s teaching, but only Read More ›
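The syntax-not-semantics point can be made with a deliberately tiny toy model. GPT-3 itself is a transformer with attention; as a minimal sketch of the same training signal, a bigram model learns only which word tends to follow which, purely from word order, with no grasp of meaning:

```python
from collections import defaultdict
import random

# Toy illustration (not GPT-3's actual architecture): a bigram model
# "learns" only positional relations between words -- syntax, not semantics.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1           # count which word follows which

def generate(start, n=6, seed=0):
    """Emit n words by sampling each next word from the follower counts."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        followers = counts[out[-1]]
        if not followers:            # no observed follower: stop
            break
        words, weights = zip(*followers.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

The output is grammatical-looking word sequences with no understanding behind them, which is the contrast the excerpt draws, writ very small.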
Recently, Bill Meyer interviewed Walter Bradley Center director Robert J. Marks on his Oregon-based talk show about “Why computers will never understand what they are doing,” in connection with his new book, Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022). We are rebroadcasting it here with permission as Episode 194. Meyer began by saying, “I started reading a book over the weekend that I am going to continue to eagerly devour because it cut against some of my preconceived notions”: https://mindmatters.ai/wp-content/uploads/sites/2/2022/07/Mind-Matters-194-Bob-Marks-Bill-Meyer.mp3 A partial transcript, notes, and Additional Resources follow. Meyer and Marks began by discussing the recent flap at Google, where software engineer Blake Lemoine claimed that the AI he was working with was Read More ›
Is AI fundamentally flawed? Can Elon Musk merge man with machines? Will there ever be self-driving cars? Join us as we revisit the top 12 most over-hyped stories in artificial intelligence from 2020 with Robert J. Marks, Jonathan Bartlett, and Eric Holloway. Show Notes 00:01:10 | Introducing Jonathan Bartlett 00:01:37 | Introducing Dr. Eric Holloway 00:02:00 | #12: “The way Read More ›
What we really need is not a Turing test or a Lovelace test, but a Terminator test. Just imagine. If we create an all-powerful artificial intelligence, we cannot assume it will be friendly. We cannot guarantee anything about the AI’s behavior due to something known as Rice’s theorem. Rice’s theorem states that all non-trivial semantic properties of programs are undecidable. Benevolence is certainly a non-trivial semantic property of programs, which means we cannot guarantee benevolent AIs. Therefore, what we really need is a way to distinguish the all-powerful artificial intelligence from human intelligence, so we can protect ourselves from humanized, mass-murdering robots. Let us think about this in terms of test errors. When we perform a test on some Read More ›
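The Rice’s-theorem argument can be sketched as a proof by contradiction. The function names below are hypothetical (no such decider can exist); the sketch shows that if a perfect `is_benevolent` checker existed, we could build a halting-problem decider from it, contradicting Turing’s undecidability result:

```python
# Sketch of the Rice's-theorem argument, with hypothetical function names.
# Suppose a perfect decider is_benevolent(prog) existed, returning True
# iff prog behaves benevolently. Rice's theorem says it cannot.

def is_benevolent(prog):
    raise NotImplementedError("cannot exist in general (Rice's theorem)")

def halts(program, input_):
    """If is_benevolent existed, this would decide the halting problem."""
    def wrapper(_):
        program(input_)      # loops forever iff `program` never halts
        return "be kind"     # the benevolent step is reached only after halting
    # wrapper is benevolent exactly when program(input_) halts, so
    # is_benevolent(wrapper) would decide halting -- a contradiction,
    # since the halting problem is undecidable.
    return is_benevolent(wrapper)
```

Because benevolence is a non-trivial property of what a program *does* (a semantic property), no general-purpose benevolence checker is possible, which is the excerpt’s point.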
Neuroscientist Joel Frohlich (pictured) asks us to reflect on the “philosophical zombie.” That’s not the zombie of the late-night frights. It’s an entity that behaves outwardly in every respect like you and me but has no inner experience (think Stepford Wives). Philosopher David Chalmers originated the term in 1996, by way of illustrating why consciousness is a Hard Problem. A powerful computer can crunch through many difficult jobs without any inner life or consciousness. But, Frohlich, who is editor in chief of the science communications website Knowing Neurons, asks, what if we weren’t sure? How would we test that? Trying to determine if a powerful AI is conscious means getting past programming that might enable it to generate plausible Read More ›
In last week’s podcast, “The Chaitin Interview IV: Knowability and Unknowability,” Walter Bradley Center director Robert J. Marks interviewed mathematician Gregory Chaitin, best known for Chaitin’s Unknowable Number, on a number of things, including whether computers can show creativity. Chaitin has thought a lot about that: https://episodes.castos.com/mindmatters/Mind-Matters-127-Gregory-Chaitin.mp3 This portion begins at 21:34 min. A partial transcript, Show Notes, and Additional Resources follow. Robert J. Marks: We’re talking, just in general, about the unknowable. Roger Penrose recently won a Nobel Prize for his work with Stephen Hawking on black hole theory. He also wrote a book called The Emperor’s New Mind: Concerning Computers, Minds and The Laws of Physics (1989) and he followed it up with The Shadows of the Mind: Read More ›
What does it mean for something to be unknowable? Is creativity non-computable? Do all things have a level of consciousness? Jump into today’s podcast, where Robert J. Marks continues his discussion with Gregory Chaitin about mathematical theory and philosophy. Show Notes 00:23 | Introducing Gregory Chaitin 00:40 | What is unknowability? 06:07 | Does non-computable mean unknowable? 09:43 | A Read More ›
Won’t quantum computers be smarter than regular ones? No. Still No. What about optical computing, computing with DNA, or some other exotic form of computation? Always No. A skeptical reader might ask, Why such a definitive answer? How do you deal with the spectacular performance of deep learning? What about AlphaGo Zero? What about Watson? What about the infamous Deep Blue? What about quantum supremacy? Don’t these examples all disprove your point? No. All forms of computation past, present, and future will be physical. And all physical phenomena can be modeled by a Turing machine (pictured). No matter how fast the computer runs, the computer will never be more powerful than a Turing machine. A Turing machine consists of five Read More ›
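The excerpt breaks off at the Turing machine’s components; one common formulation defines it by five parts (states, tape alphabet, transition function, start state, and halting states). A minimal sketch of that formulation, with a toy machine that flips every bit on its tape:

```python
# Minimal Turing machine sketch: states, tape alphabet, transition
# function, start state, halting states. The transition table maps
# (state, symbol) -> (next_state, symbol_to_write, head_move).

def run_tm(tape, transitions, state="start", halt_states={"halt"}):
    tape = dict(enumerate(tape))     # sparse tape; blank cells read "_"
    head = 0
    while state not in halt_states:
        symbol = tape.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# Example machine: flip every bit, then halt on the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_tm("1011", flip_bits))   # -> 0100
```

Simple as it is, this model captures the excerpt’s claim: any physical computation, however fast the hardware, computes nothing a machine of this kind cannot.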
There are many forces that shape the hyped AI stories we read. Media is everywhere and competition is fierce. Articles with provocative headlines and content are clickbait for the browsing consumer. We’re going to count down the AI Dirty Dozen: the top twelve AI hyped stories for 2020. Join Dr. Robert J. Marks as he discusses some of these stories with Read More ›
At first, “What is information?” seems like a question with a simple answer. Stuff we need to know. Then, if we think about it, it dissolves into paradoxes. A storage medium—a backup drive, maybe—that contains vital information weighs exactly the same as one that contains nothing, gibberish, or dangerously outdated information. There is no way we can know without engaging intelligently with the content. That content is measured in bits and bytes, not kilograms and joules—which means that it is hard to relate to other quantities in our universe. In this week’s podcast, “Robert J. Marks on information and AI, Part 1,” neurosurgeon Michael Egnor interviews Walter Bradley Center director and computer engineering prof Robert J. Marks on how we Read More ›
In a thought-provoking essay, San José State University philosopher Anand Vaidya asks, should it be okay to dismantle Star Trek‘s robotic crew member Data for research purposes, as proposed in “The Measure of a Man,” an episode of Star Trek: The Next Generation? Some of the Trek brass seemed to think so: Vaidya disagrees: As real artificial intelligence technology advances toward Hollywood’s imagined versions, the question of moral standing grows more important. If AIs have moral standing, philosophers like me reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn’t interfere with their pursuing their goals. Anand Vaidya, “If a robot is conscious, is it Read More ›
In a recent podcast, Rensselaer professor Selmer Bringsjord discusses AI and creativity with computer engineering professor and Walter Bradley Center director Robert J. Marks. The difference between writing novels and playing games like Go and chess is that writing novels does not mean winning according to a set of rules. A machine can be programmed with rules and do the calculations faster—much, much faster—than a human. A good novel requires creativity in the face of situations that are only partly definable. If a novel succeeds, many people agree that the writer has captured essential elements of human nature and life circumstances. That’s what makes the great novels so memorable. Sports reporting is somewhere in the middle in that a great Read More ›
The Turing test, developed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behaviour indistinguishable from a human. Many think that Turing’s proposal for intelligence, especially creativity, has been proven inadequate. Is the Lovelace test a better alternative? What are the capabilities and limitations of AI? Robert J. Marks and Dr. Selmer Bringsjord discuss Read More ›
The list is a selection from “Bingecast: Robert J. Marks on the Limitations of Artificial Intelligence,” a discussion between Larry L. Linenschmidt of the Hill Country Institute and Walter Bradley Center director Robert J. Marks. The focus is on why we mistakenly attribute understanding and creativity to computers. The interview was originally published by the Hill Country Institute and is reproduced with thanks. https://episodes.castos.com/mindmatters/Mind-Matters-097-Robert-Marks.mp3 Here is a partial transcript, listing six limits of AI as we know it: (The Show Notes, Additional Resources, and a link to the full transcript are below.) 1. Computers can do a great deal but, by their nature, they are limited to algorithms. Larry L. Linenschmidt: When I read the term “classical computer,” how does a computer function? Let’s build on Read More ›
Like Excel, AI assists programmers in their design work. AI can search through trillions of possibilities, using data from a million sources, to find a successful design. But the structure of the search and the source of the data are the choices of the programmer. A look at how famous inventors developed products that changed the world sheds some light on the process.
The feats of machines like AlphaGo are due to superior computational power, not to creativity at originating new ideas. Computer scientist Selmer Bringsjord sees the ability to write, say, a novel of ideas as a more realistic test of human vs. computer achievement.
In the view of Rensselaer philosopher and computer scientist Selmer Bringsjord, the iconic Turing test for human-like intelligence in computers is inadequate and easily gamed. Merely sounding enough like a human to fool people does not establish human-like intelligence. He proposes the much more challenging Lovelace test, based on an observation from computer pioneer Ada Lovelace (1815–1852) that true creativity is what distinguishes humans from machines.
The Turing test, developed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behaviour indistinguishable from a human. Many think that Turing’s proposal for intelligence, especially creativity, has been proven inadequate. Is the Lovelace test a better alternative? Robert J. Marks and Dr. Selmer Bringsjord discuss the Turing test, the Lovelace test, and machine Read More ›