Does Creativity Just Mean Bigger Data? Or Something Else?
Michael Egnor and Robert J. Marks look at claims that artificial intelligence can somehow be taught to be creative
In Define information before you talk about it, neurosurgeon Michael Egnor interviewed engineering prof Robert J. Marks on the way information, not matter, shapes our world (October 28, 2021). In the first portion, Egnor and Marks discussed questions like: Why do two identical snowflakes seem more meaningful than one snowflake? Now they turn to the relationship between information and creativity. Is creativity a function of more information? Or is there more to it?
This portion begins at 10:46 min. A partial transcript and notes, Show Notes, and Additional Resources follow.
Michael Egnor: How does biological information differ from information in nonliving things?
Robert J. Marks: I don’t know if it does… I do believe after recent study that the mind is very different from the physical part of the brain. So there’s information that occurs external to the brain. In terms of just the physical, materialistic definition, most information can be used to measure what is in the biological entity.
We can talk about, however, creativity and where the idea of creativity comes from — the creation of information. And that is outside of naturalistic or information processes.
Michael Egnor: Thomas Aquinas, following Aristotle, defined living things as things that strive for their own perfection. He felt that was what distinguished living things from nonliving things. A rock doesn’t wake up every morning and try to be a better rock. Whereas living things, with greater or lesser degrees of success, try to make themselves better at what they do. They eat, they rest, they interact with nature, they do things to make themselves even better examples of what they are.
It would seem to me that might relate to the difference between information in non-living and living things. The information in living things is directed to ends. It’s directed to purposes that you don’t see in nonliving things in the same way.
Robert J. Marks: I would definitely agree with that. It does turn out that, in order to do the improvement that you’re talking about, there needs to be a degree of creativity.
This is one of the things that we argue a lot about in artificial intelligence. Will artificial intelligence ever be creative? And I maintain that artificial intelligence will never be creative, it will never understand. And currently it has no common sense…
Michael Egnor: Sure. I mean, I’ve always thought of artificial intelligence as just a representation of human intelligence. And that, in a sense, the term artificial intelligence is an oxymoron. If it’s artificial, it’s not intelligence. And so intelligence must be human. All the intelligence that’s in computers and computer programs and machines is human intelligence that is represented in those devices.
Robert J. Marks: It is. In fact, what you mentioned is exactly the test that Selmer Bringsjord, a professor at Rensselaer, used for whether or not artificial intelligence could be creative. His test was: does the computer program do something which is outside the explanation or the intent of the programmer? And thus far no artificial intelligence has done this.
Note: The first person known to have perceived that artificial intelligence is not creative was computer pioneer Ada Lovelace (1815–1852), an associate of Charles Babbage (1791–1871). Selmer Bringsjord’s Lovelace test for AI creativity is named for her.
Robert J. Marks: Now, we do have surprising results from artificial intelligence. AlphaGo, which beat Lee Sedol at the Asian game of Go — the most difficult board game of all — at one point made a surprising move. And everybody went, “Oh, that’s incredible. This was creative.” No, that was not creativity. AlphaGo was trained to play Go, and that’s exactly what AlphaGo did. It was playing Go better than human beings. But we see this all the time…
Note: Tech philosopher George Gilder notes in his recent book, Gaming AI, that the reason AI sweeps the board in games like go and poker is that, in games, the map is the territory: The rules always work. The reason AI tends to do much worse in, say, medicine is that in medicine, the map is not the territory. When dealing in the almost infinitely more complex real world among human beings not constrained by such rules, qualities such as creativity are essential.
Michael Egnor: So what specifically is the relevance of information to biology? And why is information so interesting when it’s applied to a study of living things?
Robert J. Marks: Well… it’s beyond the explanation of materialism. We hear a lot today about Ray Kurzweil’s Singularity: AI writes better AI that writes better AI and pretty soon, the AI — it is claimed — is going to have intelligence equivalent to that of a human being.
Well, of course, if we have AI writing better AI that is beyond the original intent of the computer programmer, then we have creativity, don’t we? And creativity is beyond the ability of artificial intelligence. So Ray Kurzweil’s idea isn’t going to work.
There’s also the thought that someday we will have something called artificial general intelligence, which will be intelligence at the human level, including things like creativity, understanding, and sentience. But all of this requires creativity in the computer program, and that will never happen.
I have a colleague, Roman Yampolskiy, at the University of Louisville. Amid all this talk about artificial general intelligence, he made a post on social media on April Fool’s Day in 2016. Let me read it to you:
Google just announced major layoffs of programmers. Future software development and updates will be done mostly via recursive self-improvement by evolving deep neural networks.
In other words, he said that this [artificial general intelligence, or AGI] had been achieved. And this allowed everybody at Google to be laid off because they weren’t needed anymore.
Robert J. Marks: And it was a great joke. On his social media, he got a bunch of likes. But also, interestingly, a few requests came in from journalists who said they would like to talk to him about this wonderful developing story. So to non-experts, the joke was not obvious. But I think to anybody who does artificial intelligence, it was obviously a joke. And I think it’s humor that illustrates the point: no, we’re never going to have the sort of computer-generated software that generates better and better AI. It isn’t going to happen.
Note: A number of paradoxes arise with respect to AI and creativity. Creativity does not follow computational rules. Programmers cannot write programs that are more creative than they themselves are. And, as mathematician Gregory Chaitin notes, once something is reduced to a formula a computer can use, it is not creative any more, by definition. And the concept of artificial intelligence itself designing even greater artificial intelligence is likewise problematic.
Michael Egnor: Right. There’s a computer scientist in California named Judea Pearl, whom I find to be a fascinating guy. Over the past several decades, he has really pioneered the field of causal analysis — the ability to ascribe quantitatively, using mathematics, causal inferences in processes, which historically has been very hard to do. Statistical analyses were able to infer correlation, but they weren’t able to infer causation.
He is trying to figure out how to allow machines to make causal inference. Because to have machines that are worth using, that have any kind of what appears to be autonomy, the machine has to be able to infer cause… as opposed to simple correlations.
Robert J. Marks: Of course, the people who do word mining and natural language processing are big into things such as correlation. And correlation is easy to do. My friend Gary Smith, a professor of economics at Pomona College, said this is going to be very dangerous in the area of data mining, because we’re going to come up with inferences between things that have nothing to do with each other.
There’s a great website called Spurious Correlations. It shows how, in big data, you can get correlations between totally unrelated things.
Michael Egnor: Machines can crunch numbers, and do things that don’t provide insight. But to get insight into what causes what, I think, requires a human being.
A good example [of a spurious correlation] is that smokers frequently have yellowing of their fingertips because they’re holding cigarettes. And they are also predisposed to get cancer. There’s no question that yellowing of your fingertips correlates with having lung cancer, but it doesn’t mean that yellow fingertips cause lung cancer. You have to get the causal arrows right. Yellow fingertips and lung cancer are both caused by a common factor, which is smoking, but they don’t cause each other.

– News, “Yellow Fingers Do Not Cause Lung Cancer” at Mind Matters News (December 10, 2020)
Robert J. Marks: Establishing causation thus far is going to require human intervention. Yes. So yeah, I agree with you.
Michael Egnor: It’s kind of interesting, because the inference to causation seems to be a leap beyond ordinary algorithmic statistical analysis. And it’s a leap, I think, that only human beings can make… And there are all kinds of errors made in medical research, because it’s assumed that correlations imply causation, which they don’t necessarily.
Robert J. Marks: And this is interesting, because most of the journals that accept papers based on statistics are only into the correlations. They require an R value of such and such, meaning that the data shows a high degree of correlation. But the problem is, of course, we do have these spurious correlations… And this has led one researcher to conclude that up to 90% of the papers published in the literature that are based on statistics are flawed.
Michael Egnor: I would tend to disagree. I think it’s much higher than 90%. I think that’s a real underestimate of the number of papers that are garbage.
Robert J. Marks: Michael, I was ready to argue with you. But okay, it turns out I don’t need to.
Michael Egnor: Yeah 90% is very conservative. Yeah.
Robert J. Marks: Which is really amazing. And that’s the reason that today… Coffee is good for you, coffee is bad for you. You hear all of these fleeting studies that are reported in the news as gospel, and it can have a terrible effect, it can make you paranoid. And so I try to ignore all of these since Gary Smith pointed this out to me. It’s really terrible…
I remember a similar story about ice cream consumption and the murder rate in New York City — ice cream consumption would increase, and then the murder rate would increase. Did people get tired of eating ice cream and go out and kill each other? No, both were just related to the rise in temperature.
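The ice cream story can be simulated in a few lines. The sketch below uses entirely made-up numbers (hypothetical daily temperatures, sales, and incident counts — no real data): two causally unrelated series both track temperature, so they correlate strongly, but the correlation largely disappears once temperature, the common driver, is regressed out.

```python
import math
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def regress_out(xs, ys):
    """Residuals of ys after simple least-squares regression on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return [y - (a + b * x) for x, y in zip(xs, ys)]

# Hypothetical daily temperatures over a year: a seasonal cycle plus noise.
temperature = [15 + 10 * math.sin(2 * math.pi * d / 365) + random.gauss(0, 2)
               for d in range(365)]

# Two causally unrelated quantities, each driven by temperature plus
# independent noise — neither influences the other.
ice_cream_sales = [50 + 3.0 * t + random.gauss(0, 5) for t in temperature]
incident_count = [20 + 1.5 * t + random.gauss(0, 5) for t in temperature]

# The raw correlation is high, even though there is no causal link.
r = pearson(ice_cream_sales, incident_count)
print(f"correlation between sales and incidents: {r:.2f}")

# Controlling for the common driver makes the correlation mostly vanish.
r_partial = pearson(regress_out(temperature, ice_cream_sales),
                    regress_out(temperature, incident_count))
print(f"after controlling for temperature: {r_partial:.2f}")
```

The second number is a (crude) partial correlation: once the shared cause is accounted for, the apparent relationship between the two series collapses, which is exactly the distinction between correlation and causation under discussion.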
Michael Egnor: Well, I was a resident in neurosurgery in Miami during the drug wars. And so we would get gunshot wounds to the head coming into the ER constantly. Except they would always stop when it rained. And Miami has quite a bit of rain. So when we would have an hour or two of rain, the ER would just go completely quiet. Nobody would come in. And then the sun would come out and then people would shoot each other again. And it was fascinating, but people wouldn’t shoot each other during rain.
Robert J. Marks: So the causation would be that good weather causes people to kill each other.
Michael Egnor: Yeah, or that there’s something about being wet that protects you from gunshot wounds…
Next: Does Mount Rushmore have no more information than Mount Fuji?
Here are all the episodes in the series. Browse and enjoy:
- How information becomes everything, including life. Without the information that holds us together, we would just be dust floating around the room. As computer engineer Robert J. Marks explains, our DNA is fundamentally digital, not analog, in how it keeps us being what we are.
- Does creativity just mean Bigger Data? Or something else? Michael Egnor and Robert J. Marks look at claims that artificial intelligence can somehow be taught to be creative. The problem with getting AI to understand causation, as opposed to correlation, has led to many spurious correlations in data-driven papers.
- Does Mt Rushmore contain no more information than Mt Fuji? That is, does intelligent intervention increase information? Is that intervention detectable by the methods of science? With two DVDs of the same storage capacity — one random noise and the other a film (Braveheart, for example) — how do we detect a difference?
- How do we know Lincoln contained more information than his bust? Life forms strive to be more of what they are. Grains of sand don’t. You need more information to strive than to just exist. Even bacteria, not intelligent in the sense we usually think of, strive. Grains of sand, the same size as bacteria, don’t. Life entails much more information.
- Why AI can’t really filter out “hate news.” As Robert J. Marks explains, the No Free Lunch theorem establishes that computer programs without bias are like ice cubes without cold. Marks and Egnor review worrying developments from large data harvesting algorithms — unexplainable, unknowable, and unaccountable — with underestimated risks.
- Can wholly random processes produce information? Can information result, without intention, from a series of accidents? Some have tried it with computers… Dr. Marks: We could measure in bits the amount of information that the programmer put into a computer program to get a (random) search process to succeed.
- How even random numbers show evidence of design Random number generators are actually pseudo-random number generators because they depend on designed algorithms. The only true randomness, Robert J. Marks explains, is quantum collapse. Claims for randomness in, say, evolution don’t withstand information theory scrutiny.
- 00:00:09 | Introducing Dr. Robert J. Marks
- 00:01:02 | What is information?
- 00:06:42 | Exact representations of data
- 00:08:22 | A system with minimal information
- 00:09:31 | Information in nature
- 00:10:46 | Comparing biological information and information in non-living things
- 00:11:32 | Creation of information
- 00:12:53 | Will artificial intelligence ever be creative?
- 00:17:40 | Correlation vs. causation
- 00:24:22 | Mount Rushmore vs. Mount Fuji
- 00:26:32 | Specified complexity
- 00:29:49 | How does a statue of Abraham Lincoln differ from Abraham Lincoln himself?
- 00:37:21 | Achieving goals
- 00:38:26 | Robots improving themselves
- 00:43:13 | Bias and concealment in artificial intelligence
- 00:44:42 | Mimetic contagion
- 00:50:14 | Dangers of artificial intelligence
- 00:54:01 | The role of information in AI evolutionary computing
- 01:00:15 | The Dead Man Syndrome
- 01:02:46 | Randomness requires information and intelligence
- 01:08:58 | Scientific critics of Intelligent Design
- 01:09:40 | The controversy between Darwinian theory and ID theory
- 01:15:07 | The Anthropic Principle
- Robert J. Marks at Discovery.org
- Michael Egnor at Discovery.org
- Claude Shannon at Encyclopædia Britannica
- Andrey Kolmogorov at Wikipedia
- Spurious Correlations website
- Chapter 7 of R. J. Marks II, W. A. Dembski, and W. Ewert, Introduction to Evolutionary Informatics (World Scientific, Singapore, 2017).
- Winston Ewert, William A. Dembski, and Robert J. Marks II, “Algorithmic Specified Complexity in the Game of Life,” IEEE Transactions on Systems, Man and Cybernetics: Systems, Vol. 45, No. 4, April 2015, pp. 584–594.
- Winston Ewert, William A. Dembski, and Robert J. Marks II, “On the Improbability of Algorithmically Specified Complexity,” Proceedings of the 2013 IEEE 45th Southeastern Symposium on Systems Theory (SSST), March 11, 2013, pp. 68–70.
- Winston Ewert, William A. Dembski, and Robert J. Marks II, “Measuring meaningful information in images: algorithmic specified complexity,” IET Computer Vision, Vol. 9, No. 6, 2015, pp. 884–894.