
Thinking Machines? Has the Lovelace Test Been Passed?

Surprising results do not equate to creativity. Is there such a thing as machine creativity?

In “Thinking machines? The Lovelace test raises the stakes,” Rensselaer philosopher and computer scientist Selmer Bringsjord argued that the iconic Turing test for human-like intelligence in computers is inadequate and easily gamed. That is, merely sounding enough like a human to fool people does not establish human-like intelligence in the product; it may point only to superior cunning in the creators. He pioneered the much more challenging Lovelace test, based on an observation from computer pioneer Ada Lovelace (1815–1852) that true creativity distinguishes humans from machines.

Here is a partial transcript, beginning at 13:49, of the continued discussion between Bringsjord and Walter Bradley Center director Robert J. Marks on whether the Turing test or his Lovelace test has ever been passed by a machine. For example, did AlphaGo, the Go-playing program, pass the test?

13:49 | AlphaGo’s “creative” move

Robert J. Marks: I think some people would probably point to the match between AlphaGo and the world Go champion (I believe his name was Sedol). There was an incredible move at one point, where AlphaGo made a move which was totally contrary to convention and some people point to that and they say, this move was creative. Any comment on that?

Selmer Bringsjord: I have many comments on that… If the machine running this algorithm is able to work fast enough, no human is going to be able to comprehend how it does anything. Forget about one move; it’s going to be invincible. It’s going to do things time after time after time that are completely mysterious—and efficacious as well. And yet it’s running a simple algorithm.

So the counterargument is, look, it’s not a subjective piece of behavior at this level. We have to start with some kind of task that we don’t have a garden variety search algorithm for.

Now, in the case of Go, it’s a bit different because machine learning plays a significant role and the machine is approximating the function by running a gradual process that we also can’t follow. But the bottom line is, we know the task is intrinsically easy and we know the algorithm does exist.

However AlphaGo or any other such machine arrives at its ability to play really good Go, we know that if there were enough time and energy available [to humans], this is really absolutely not a difficult game. We already have perfection; we can define it. So that really doesn’t count.

A novel, or even a short story, would be rather a different affair.
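To make that point concrete, here is a minimal, hypothetical Python sketch, added for illustration and not part of the podcast (and vastly simpler than AlphaGo’s training): a learner approximates a target function through gradual updates whose history no one narrates, yet the target itself, like the rules and objective of Go, is simple and fully specified in advance.

# A minimal illustrative sketch (not from the podcast): a learner approximates
# a target function through gradual, hard-to-narrate updates, yet the target
# itself is simple and known in advance.

import random

def target(x):
    # The "intrinsically easy" function we can already define: f(x) = 3x + 2
    return 3 * x + 2

w, b, lr = 0.0, 0.0, 0.05
for _ in range(20_000):
    x = random.uniform(-1.0, 1.0)
    error = (w * x + b) - target(x)   # prediction minus the known answer
    w -= lr * error * x               # opaque-in-aggregate weight updates
    b -= lr * error

print(f"learned approximation: y = {w:.2f}x + {b:.2f}  (target: y = 3x + 2)")

However opaque the sequence of updates, the function being learned was never in doubt, which is Bringsjord’s point about why this kind of learning does not count as origination.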

16:45 | Creative writing

I just listened, in audiobook form, to Flannery O’Connor’s The Violent Bear It Away. What struck me about this novel is that, over and over again, she makes you feel what her character is feeling, with just a few sentences. So her consciousness is present and exploited and she comes up with these amazing sentences, two or three in a row, that just get the job done.

We can’t say that the production of sentences like that is, even in theory, possible to carry out by following an algorithm because we don’t have the algorithm. We don’t have what it would take to do it.

So, returning to your original objection, I think it’s an interesting objection but when we look at AlphaGo, that’s not a true contender here.

Robert J. Marks: Well, I think that AlphaGo was trained to play Go and that’s exactly what it did. It’s what the programmers designed it to do.

18:07 | Has the Lovelace test been passed?

Robert J. Marks: Let me ask you this: In your monitoring of the Lovelace test since you proposed it in 2003, have you seen any place where your Lovelace test has been passed by AI?

Selmer Bringsjord: I’ve never had any conversation with anyone in that camp about the Lovelace test but I am pretty sure that if I did that, they would say, well, look, you yourself have criticized our systems for being black boxes. So if they’re black boxes, they satisfy one of the criteria that you’ve given us, which is that the developers don’t know how it happened.

They’d be, to some degree, dishonest but it’s true, these are black boxes and that’s why there’s been much concern about explanations being absent about how they do what they do. But in the case of all these machines, we get back to what I said about AlphaGo in the case of Go. We know that the functions they are trying to learn are relatively simple. In fact, we know where they fall mathematically… [In the case of autonomous cars staying in their lane] Come on, just because we don’t know how the system arrived at the ability to approximate the computing of that function doesn’t mean that we as human beings don’t know that function. It’s extremely simple. The things that the Lovelace test, if passed, would present us with and confront us with are not relevant to those kinds of tasks.

Robert J. Marks: I can tell you that in monitoring the literature, having learned of your Lovelace test, I usually apply it to what I see in the news and nothing that I have seen really passes the Lovelace test, which is really, I think, kind of fascinating. Basically, computers do what they’re trained to do. Even Deep Learning does that and does it quite well. You might get some weird results. You might get some surprising results. But surprising results do not equate to creativity.

Selmer Bringsjord: No. That’s exactly right. And the pre-engineering or the engineering that provides the setup for the process of learning to happen is, just as you say, highly premeditated and very rigid and non-trivial. But the point here is, it’s understood from the get-go that it provides a rather artificial context and it precludes creativity.

21:33 | How could it be proven that the Lovelace test was passed?

Robert J. Marks: Let me ask you a question: What would it take to convince you that AI had passed the Lovelace test? An example.

Selmer Bringsjord: I don’t know enough about the visual arts to go in that direction so I would pick something from the space that I know and have spent a lot of time experiencing. I would say that if the machine can produce a novel that’s of the right sort, that’s going to get my attention. Then I have to verify all the background information, to make sure that it satisfies all the constraints on the test: I cannot have a pre-stored novel; I cannot have pre-stored passages with someone banking on the fact that I’m not familiar with Proust… With all that stuff verified in the background, if it still gives me a novel of the right type, that’s something worth writing home about.

Robert J. Marks: Well, let me tell you about an example I came up with. I’d like to get your reaction. If AlphaGo, without additional programming, was able to play checkers, I think that that would be creative.

Selmer Bringsjord: It would be creative but there is, in my mind, a continuum of types of creativity. That would be some form of creativity. I would say that’s problem-solving. And probably the world’s leading authority on musical creativity, David Cope, does say explicitly that if the machine can do problem-solving that catches people by surprise, he would stick to his guns and say that’s creative.

I absolutely reject that notion. I think the next step up is MacGyver creativity, what I called N creativity fairly recently. And that is, all the humans put their minds together and create a wonderful artifact intended to be used in a particular domain for a particular set of tasks. The machine takes it and does something completely different with it. So that is beyond what you’re talking about. Not only is a game a game, but the two games you’re talking about have the same formal structure. That’s why we know how difficult they are. And, in the history of AI, general game-playing, which has kind of petered out and wasn’t really its idea, tells us that that jump, from Go to checkers or whatever it is, is not that large. But even the MacGyver creativity, I don’t know that there are cases of that out there.

Given that it’s a long way from origination, I do see what you see as creative, but it’s a problem-solving type of creativity and that doesn’t cut it. That’s not genuine origination. And that was the complaint put to Turing: Lovelace’s “Wait a minute, we originate things. A computer doesn’t originate anything.”

See also: Thinking machines? The Lovelace test raises the stakes. The Turing test has had a free ride in science media for far too long, says an AI expert. (This earlier article is the partial transcript and notes to the earlier part of the podcast.)


Further reading:

Why AI appears to create things. When AlphaGo made a winning move, it exhibited no more creative insight than when it played pedestrian moves (Brendan Dixon)

Why AI fails to actually create things (Brendan Dixon)

and

Creativity does not follow computational rules. A philosopher muses on why machines are not creative.

Show Notes

00:43 | Introducing Selmer Bringsjord, Professor — Rensselaer Polytechnic Institute (RPI)
01:43 | What is the Turing test?
03:56 | The Lovelace objection
04:26 | Ada Lovelace
07:40 | The consciousness objection
08:57 | Eugene Goostman
09:48 | The Lovelace test
13:49 | AlphaGo’s “creative” move
16:45 | Creative writing
18:07 | Has the Lovelace test been passed?
21:33 | How could it be proven that the Lovelace test was passed?
25:05 | Ray Kurzweil and singularity
