Mind Matters Natural and Artificial Intelligence News and Analysis

Six Limitations of Artificial Intelligence As We Know It

You’d better hope it doesn’t run your life, as Robert J. Marks explains to Larry Linenschmidt

The list is a selection from “Bingecast: Robert J. Marks on the Limitations of Artificial Intelligence,” a discussion between Larry L. Linenschmidt of the Hill Country Institute and Walter Bradley Center director Robert J. Marks. The focus is on why we mistakenly attribute understanding and creativity to computers. The interview was originally published by the Hill Country Institute and is reproduced with thanks.

Here is a partial transcript, listing six limits of AI as we know it: (The Show Notes, Additional Resources, and a link to the full transcript are below.)

1. Computers can do a great deal but, by their nature, they are limited to algorithms.

Larry L. Linenschmidt: When I read the term “classical computer,” how does a computer function? Let’s build on that to talk about supercomputers and kind of build into just a foundation of how these things work so we can then talk about the theory of AI and what it is and what it isn’t.

Robert J. Marks: One of the things that we can identify that humans can do but computers can’t are things which are non-algorithmic. If something is non-algorithmic, it means it’s non-computable. In fact, Alan Turing showed back in his initial work that there are things which are not algorithmic. It’s very difficult, for example, to write a computer program to analyze another computer program. Turing showed that specific instances of that problem are non-algorithmic. This is taught to freshman computer science students, so they know what algorithmic and non-algorithmic/non-computable mean. Again, non-computable is a synonym for non-algorithmic.

We have a number of human capacities that are non-algorithmic. I would say creativity, sentience, and consciousness are probably things that you cannot write a computer program to simulate.

Note: The film The Imitation Game (2014) dramatizes the way Turing led the team that broke the Nazis’ “unbreakable” Enigma code during World War II, using pioneering computing techniques.

Robert J. Marks: Basically, Turing showed that computers were limited by something called algorithms, and we hear about algorithms a lot. “Such and such is doing an algorithm and Facebook has initiated an algorithm to do something.” The question is, what is an algorithm?

An algorithm is simply a step-by-step procedure for accomplishing something. If you look at the back of your shampoo bottle, it says, “Wet hair, apply shampoo, rinse, and then repeat.” That’s an algorithm because it gives you the step-by-step procedure you need to wash your hair.

Larry L. Linenschmidt: Well, that’s a pretty short algorithm for me since I don’t have much hair, but go right ahead.

Robert J. Marks: Isn’t that right? Well, the interesting thing about that algorithm is if you gave that to a computer, that computer would wash its hair forever because it doesn’t say repeat once, it just says repeat…

I like to think of an algorithm as a recipe. If you look at the recipe for baking a vanilla coconut cake, for example, it will tell you the ingredients that you need and then give you a step-by-step procedure for combining them. That is what an algorithm is and, in fact, it is all that computers can do. Computers are only able to perform algorithms.

Note: Have a look at “Things exist that are unknowable: A tutorial on Chaitin’s number” by Robert J. Marks, for some sense of the limits of knowledge that computers will not transcend.

2. Today’s computers are faster but not more intelligent.

Larry L. Linenschmidt: I have a cellphone that I understand has more power than a room full of computers 50 years ago that Army intelligence used. A massive increase in computing capability, isn’t there?

Robert J. Marks: Yes, there is, but by increasing clock speed and using parallel computers, we have only made computers faster. There is a principle taught to computer scientists called the Church-Turing Thesis, which basically says that Alan Turing’s original machine could, in principle, do everything the computers of today do. The only thing today’s computers add is speed. That is really good, that is very useful, but in terms of capability, they are still restricted to algorithms. I’m not sure if you’ve ever heard of the quantum computer…

Larry L. Linenschmidt: Yes.

Robert J. Marks: Which is kind of the new rage, where you use the strange, weird world of quantum physics to get computational results. Even quantum computing is algorithmic and is constrained by the Church-Turing Thesis. With quantum computers, we’re going to do these computations like lightning, but still, everything we can do, we could do with Turing’s original machine. Now, on Turing’s original machine it might take a trillion years compared to today, but nevertheless, the capability is there. We’re just getting faster and faster, and we can do more interesting things because of that speed.

Note: You may also wish to read “Google vs. IBM?: Quantum supremacy isn’t the big fix anyway.” If human thought is a halting oracle, then even quantum computing will not allow us to replicate human intelligence (Eric Holloway).

3. Then there’s the creativity gap.

Larry L. Linenschmidt: One of the things we talked about earlier were algorithms and what computers can do and some things that maybe they can’t do. What are the things that maybe computers will never be able to do?

Robert J. Marks: Well, I think maybe the biggest testable thing that computers will never be able to do is creativity. Computers can only take the data they’ve been presented and interpolate. They can’t, if you will, think outside of the box. If you look at the history of creativity, great scientists like Galileo and Einstein actually had to take the data they were given, discard it, and come up with something brand new. It wasn’t just a reshuffling of the status quo, which is basically what a computer does; it was a creative act outside of the available data.

Note: Typical claims for computer-generated art, music, or copywriting involve combining masses of similar material and producing many composites, the most comprehensible of which are chosen by the programmers for publication. The conventional test of computer intelligence, the Turing test, measures only whether a computer can fool a human under certain circumstances. The Lovelace test, which searches for actual creativity, is not much used and has not been passed.

4. Computers don’t actually experience things (qualia), which limits actual understanding.

Robert J. Marks: Qualia are the subjective experiences that one has. Imagine, for example, holding a big, delicious red apple and anticipating taking a bite out of it. You take the bite, you feel the crispness, you taste the tart sweetness, you feel the crunch as you chew and swallow. That is an experience, and the question is, do you think you could ever write an algorithm to convey that qualia experience to a computer? I don’t think so. I think that is something which is unique to the human being…

The philosopher John Searle argued, “There is no way that a computer understands anything.” He illustrated this with the Chinese room: The basic idea was, you slipped a little slip of paper with something written in Chinese through a slot. Inside the room, somebody picked it up, looked at it, and wanted to translate it into something, say, Portuguese.

There’s a big bunch of file cabinets in the room. The person in the room took this little sheet, looked through all of the file cabinets, and finally found something that matched it. He wrote down the corresponding Portuguese translation, refiled the originals, went to the door, and slipped out the translation in Portuguese.

Now, externally, the person would say, “My gosh, this guy knows Chinese, he knows Portuguese. This computer is really, really smart.” Internally, the guy that was actually going through the file cabinets, doing the pattern matching in order to find out what the translation was, had no idea what Chinese was, had no idea what Portuguese was. He was just following a bunch of instructions.

Larry L. Linenschmidt: The computer processes; it turns out work product based on how it’s directed. But in terms of understanding, as you would expect one of your students to understand what you’re teaching, they don’t understand. They compute. They process data. Is that a fair way of putting it? …

Robert J. Marks: Consider the world champions at Jeopardy. If you think about it, that’s just a big Chinese room. You have all of Wikipedia and all of the internet available to you and you’re given some sort of question on Jeopardy and you have to get the answer. Watson beating the world champions in Jeopardy is exactly an example of a Chinese room, except the room is a lot bigger because computers are a lot faster and can do a lot better.

Note: A mistake Watson made playing Jeopardy illustrates the limitations: “Why did Watson think Toronto was in the U.S.A.?” How that happened tells us a lot about what AI can and can’t do, to this day. Hint: Assessment of the relevance of possible clues may not be the greatest strength of a Watson type program.

5. AI achievements are narrowly focused.

Larry L. Linenschmidt: Well, there’s one other game example that comes up quite a bit in the literature, and that’s the game Go. Apparently Go is one of the most complicated games there is, and a computer did very well. Is that just an extension of the same idea, that it was able to match possible outcomes and evaluate the best of those? Or what? How do you look at that?

Robert J. Marks: Go was a remarkable computer achievement. I don’t want to disparage it at all. They used a concept called reinforcement learning, and this reinforcement learning was used in both chess and Go. It was actually used to win old arcade games: just by looking at the pixels in an arcade game such as Pac-Man, for example, the computer could learn how to win. Now, in all of these cases, of course, there was the concept of the rules. You’ve got to know the rules. The fact that Go was mastered by the program is an incredible accomplishment of computer science. However, notice that the computer is doing exactly what it was programmed to do. It was programmed to play Go, and Go is a very narrow application of artificial intelligence.

I would be impressed if a computer program could pass something called the Lovelace test, a test of a program’s creativity. The Lovelace test basically says that you have seen creativity if the computer program does something that can’t be explained by its programmers. Now, you might get some surprising results. There were some surprising moves that AlphaGo made when it played the master, but “surprising” doesn’t count; it’s still within the game of Go. If AlphaGo had gone on to do something like, let me make the point by exaggeration, give you investment advice or forecast the weather without additional programming, that would be an example of AI creativity…

Algorithms in computers are the result of human creativity. That is not a controversial viewpoint. The current CEO of Microsoft, Satya Nadella, says the same thing: “Look, computers are never going to be creative. Creativity will always be a domain of the programmer.”

Note: “Creativity does not follow computational rules” provides a look at the concept. Philosopher Sean Dorrance Kelly muses on why machines are not creative.

6. Computers may surprise us but surprise does not mean creativity.

Larry L. Linenschmidt: Well, let me ask the question about AI a little differently. Self-learning: a computer teaching itself to do something different, in a way that the programmer did not foresee. There’s a program called Deep Patient, a way of managing information on the medical side, and a couple of other programs I read about, and they solved their problems, but not in a way that the developer of the network can explain. Now, does that imply that there’s learning going on in there? Some way that they’re doing it? Or is everything that they’re doing, even if it’s not fully understood by the developer, still subject to the way that the developer set up the network?

Robert J. Marks: Well, one of the things we have to differentiate here is the difference between surprise and creativity. I have certainly written computer programs that have the element of surprise in them. I look at them and I say, “Wow, look at what it’s doing,” but then I look at the program and say, “Yeah, this was one of the solutions that I considered.” One of the ideas, especially in computer search, is to lay out thousands, maybe millions or billions, of potential solutions, and you don’t know what the effect of those solutions is going to be. It would be almost like putting out a bunch of different recipes for cake. You have different amounts of batter, different amounts of milk, a number of different eggs, the amount of oil that you put in, et cetera, and what you want to do is figure out which one is best.

If you have no domain expertise and you walk around in the search space trying to find the best combination, you might get something which is totally unexpected. We did something in swarm intelligence, which is the modeling of social insects. We applied evolutionary computing, an area in electrical engineering, and we evolved “dweebs.” It was a predator-prey sort of problem: our prey were the dweebs, our predators were the bullies, and the bullies would chase the dweebs around. We would evolve the colony and try to figure out the best way for the dweeb colony to survive the longest. The result that we got was astonishing and very surprising.

What happened was that the dweebs learned self-sacrifice. One dweeb would run around the playground, be chased by the bullies, and sacrifice himself (I guess dweebs are male, because I said “himself”), so the bullies would kill that dweeb, and then other dweebs would come out and, one by one, sacrifice themselves in turn. By using up all of the bullies’ time, the colony of dweebs survived for a very, very long time, which was exactly what we told it to do.

Now, once we looked at that, we were surprised by the result, but we looked back at the code and said, “Yeah, of these thousands, millions of different solutions that we proposed, we see how this one gave us the surprise.” Surprise shouldn’t be confused with creativity. If the surprise is a consequence of what the programmer decided to program, then it really isn’t creativity. The program has just found one of those millions of solutions that works really well in, possibly, a surprising manner.

So what does it mean to be a creative human in a world of smart machines?

Larry L. Linenschmidt: As you’re explaining it, I’m thinking that a computer is only as good as its programmer. It’s good at matching, it’s good at putting things together, but true creativity is what the entrepreneur Peter Thiel points to: a lot of people can take us from one to infinity, but it’s the zero-to-one step, the creativity in the tech world and the business world, that sets us apart. A computer can’t take us from zero to one. It needs instructions, doesn’t it?

Robert J. Marks: It does, and in his book Zero to One, Thiel talks about the requirement of creativity. His philosophy is parallel to that of some other people, Jay Richards, for example, and George Gilder, who look at business in a very different way from those who see it as a Darwinian competition. They say, “No, what drives entrepreneurs is creativity. You come up with a new idea like a PayPal or a Facebook or an Uber.”

That creativity in business is never going to come from a computer. A computer would never have come up with the idea of Uber unless the programmer programmed it to look at a set of different things. That was something creative, above and beyond the algorithmic…

Larry L. Linenschmidt: Yes. Jay Richards’ book The Human Advantage: The Future of American Work in an Age of Smart Machines has countless examples of entrepreneurs seeing a need and then filling it. It’s totally against the idea that capitalism is just about greed. He made the case that capitalism, or free market enterprise, is really altruistic, that the best entrepreneurs actually fill a need. That’s reality, isn’t it?

Robert J. Marks: Yes it is, yes it is.

You may also enjoy earlier conversations between Robert J. Marks and Larry L. Linenschmidt:

Why we don’t think like computers: If we thought like computers, we would repeat package directions over and over again unless someone told us to stop.


What did the computer learn in the Chinese room? Nothing. Computers don’t “understand” things and they can’t handle ambiguity, says Robert J. Marks.

Show Notes

  • 03:03 | Introducing Dr. Robert J. Marks II, Distinguished Professor at Baylor University
  • 05:25 | The gift of nerdhood
  • 06:28 | Nature as a source of creativity
  • 07:17 | STEM is not for everyone
  • 09:02 | Difficulties and opportunities in a university setting
  • 11:21 | About the Walter Bradley Center for Natural and Artificial Intelligence
  • 15:50 | How do computers function?
  • 17:39 | Algorithms — recipes for computers
  • 18:56 | Some non-algorithmic human qualities
  • 20:12 | The Church-Turing Thesis and computing capability
  • 21:17 | Quantum computing
  • 22:55 | What is artificial intelligence?
  • 24:54 | Neural networks and their relation to artificial intelligence
  • 02:15 | What are things that computers will never be able to do?
  • 02:30 | Computers and creativity
  • 03:16 | Qualia
  • 03:59 | Do computers understand things?
  • 08:03 | Computers and ambiguity
  • 09:43 | Computers winning Go
  • 12:01 | The Algorithm-of-the-Gaps
  • 13:33 | Art by artificial intelligence
  • 16:49 | A computer is as good as its programmer
  • 19:35 | AI and self-learning
  • 21:32 | Swarm intelligence and unexpected solutions
  • 23:22 | Developing neural networks
  • 25:00 | Ethics and the misuse of artificial intelligence

Additional Resources

Download transcript.
