Summer meadow blow balls landscape painting
Image licensed via Adobe Stock

Could AI ever pass the Van Gogh test?

Van Gogh was crazy but talented, and AI can be neither

The Van Gogh Test for sheer creativity? Thursday night at COSM featured a live, in-person interview with Federico Faggin, the Italian physicist and computer engineer who co-won the prestigious Kyoto Prize in 1997 for helping develop the Intel 4004 chip.

Faggin was interviewed by technology reporter Maria Teresa Cometto, who asked him to regale the audience with tales about helping to design early microchips. Eventually Faggin recounted a time when he was “studying neuroscience and biology, trying to understand how the brain works,” and came upon a startling realization:

And at one point I asked myself, “But wait a second, I mean these books, all this talk about electrical signals, biochemical signals, but when I taste some chocolate, I mean I have a taste. So where’s the taste of the chocolate coming from? They’re not electrical signals, right? A computer, does it taste this? Does it have a sensation or a feeling for the signals that he has in his memory or in his CPU? Of course not. So where are sensations and feelings coming from?” … And so I discovered what was later called the hard problem of consciousness.

Cometto then asked him the big question: “So can consciousness emerge from a computer or from an artificial intelligence program?”

Without hesitation Faggin offered an unmistakable answer: “No, I can say a definite no. And I can explain it.”

The hard problem of consciousness that Faggin was referencing pertains to the origin of "qualia," a term expounded by philosopher David Chalmers. According to the Internet Encyclopedia of Philosophy, "Qualia are the subjective or qualitative properties of experiences," such as what it feels like to see a sunset, prick your finger on a thorn, or smell a rose. Faggin explains it this way:

Consciousness is the ability that we have to know through an experience. An inner experience is something that we feel within ourselves. It’s not something out there. It’s within ourselves. We know that. … And we know because we feel what we know. And the feelings are called qualia.

Maria Cometto and Federico Faggin at COSM 2022

But can a computer experience these feelings? Can we program a computer to replicate qualia? According to Faggin, during this period of intellectual exploration he was still a materialist and believed that the human mind is no more than the brain. Based on this belief, he embarked on what he eventually realized was an impossible task, the creation of a conscious computer:

It was a personal project at that point to try to figure out how can I make a conscious computer. And in my spare time I was thinking, how can I do that? And the more I tried, the worse it got. I mean, there is no way that you can convert electrical signals into sensations and feelings. They are two different categories … one is an object, or at least a physical structure. But the sensations and feelings, you cannot touch them. You cannot measure them, you cannot feel them. So how [is] it possible?

This recalls the conversation with computer scientist and former Google engineer Blake Lemoine during the AI panel earlier in the day. According to Lemoine, Google’s LaMDA chatbot “argues that it is sentient because it has feelings, emotions and subjective experiences. Some feelings it shares with humans in what it claims is an identical way.”

Federico Faggin on consciousness at COSM 2022

Faggin would reply there’s a fundamental difference between humans and computers: “We know because we feel,” he said. “A computer knows because it has data.”

Yet Lemoine might reply that enough data could be sufficient to make a computer sentient. In his view, AI can learn just as humans do, and this repeated feedback-selection-learning process is crucial to developing a sophisticated mind. As he argued in the morning panel, “the training data that these systems have is analogous to the experiences that a human has had in their life that they’ve learned from.”
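For readers curious what such a feedback-selection-learning loop looks like in practice, here is a minimal sketch in Python. It is my own illustration, not Lemoine’s or Google’s code, and every name in it (audience_feedback, TARGET_STYLE) is hypothetical: the system produces candidate outputs, an external reward scores them, and the better-scoring candidate is kept.

import random

# Minimal sketch of a feedback-selection-learning loop.
# The "artist" here is a single parameter the system tunes toward
# whatever the external audience rewards. All names are illustrative,
# not any real system's API.

TARGET_STYLE = 0.8  # what the audience happens to applaud

def audience_feedback(style: float) -> float:
    """External reward: work closer to the audience's taste scores higher."""
    return -abs(style - TARGET_STYLE)

style = random.random()  # start with a random "style"
step = 0.05

for generation in range(1000):
    # Produce two candidate works (the selection step).
    candidates = [style + step, style - step]
    # Keep whichever one the audience rewards more (the feedback step).
    style = max(candidates, key=audience_feedback)

print(f"learned style: {style:.2f}")  # converges near TARGET_STYLE

Notice that every “improvement” in this loop comes from an external reward signal; that dependence on outside feedback is exactly what the Van Gogh case below calls into question.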

Even artistic creativity might be learned, Lemoine suggested. Creativity “requires feedback and artists get feedback all the time,” he said. “They produce new stuff, people clap or people boo.”

As a possible counterexample, consider the case of Vincent Van Gogh. According to art history lore, Van Gogh sold only a few paintings (possibly only one verified painting) during his entire lifetime. He has been called a hermit who did not work well with others. As an artist, therefore, he received little “feedback” from sales or critical interaction to tell him what art worked and what didn’t.

Yet his art was undeniably novel and brilliant.

Van Gogh painted simply because he loved art; he made art for its own sake. There was something within Van Gogh that drove him to create, and despite his reclusive nature, that inner love pushed him to hone his art to perfection. As Van Gogh reportedly said, “I put my heart and my soul into my work, and have lost my mind in the process.”

Could the kind of AI described by Lemoine ever repeat the life of Vincent Van Gogh? Could a program create, improve, and even perfect a form of art simply because it loves art, not because feedback-selection loops shaped it as it ran? Could AI be driven by something internal, a love for something, rather than by feedback from the external world?

Van Gogh was also a troubled soul. Depressed, anxious, and unstable, he tragically killed himself in the end. As ugly as suicide is, it is undeniable that it is often driven by overwhelming negative feelings. It would be easy to program a computer to self-destruct. But could AI ever destroy itself because of negative feelings? That would require the program to have feelings in the first place.
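To make the distinction concrete, here is a trivial, hypothetical Python sketch of programmed “self-destruction.” Nothing in it comes from Faggin or Lemoine; the point is that the trigger is just a number crossing a threshold, not a felt experience.

import sys

# A hypothetical sketch: "self-destruction" as a programmed condition.
# The variable below is merely data; nothing here feels anything.

negative_signal = 0.0

for event in [0.2, 0.3, 0.6]:    # incoming "bad" inputs
    negative_signal += event     # accumulate a numeric score
    if negative_signal > 0.9:    # an arbitrary threshold
        print("threshold crossed; shutting down")
        sys.exit()               # the machine halts on a comparison,
                                 # not on despair

The program halts because a comparison evaluated true. Whether any accumulation of such data could ever amount to an actual negative feeling is precisely Faggin’s question.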

There are many tests for determining whether true AI has been achieved. But if Federico Faggin is right, then computer-based AI could never pass the Van Gogh test.

You may also wish to read: Experts at COSM Debate Whether Chatbot Was Sentient. Google fired Blake Lemoine in 2022, but what really happened there? It’s not clear that Lemoine’s chatbot, LaMDA, has any idea what it is saying, as computer science professor George Montanez noted.


Casey Luskin

Associate Director and Senior Fellow, Center for Science and Culture
Casey Luskin is a geologist and an attorney with graduate degrees in science and law, giving him expertise in both the scientific and legal dimensions of the debate over evolution. He earned his PhD in Geology from the University of Johannesburg, and BS and MS degrees in Earth Sciences from the University of California, San Diego, where he studied evolution extensively at both the graduate and undergraduate levels. His law degree is from the University of San Diego, where he focused his studies on First Amendment law, education law, and environmental law.
