Theologian and philosopher David Bentley Hart turns to an ancient folk tale to explain the danger of coming to believe that artificial intelligence is real human intelligence. Narcissus, as he tells us, was a young Greek hunter who fell in love with his own reflection in still water. He was entranced by the image but frustrated by the fact that it never did anything he didn’t do himself. He pined away and was eventually transformed into a flower — still called narcissus today. His name also found its way into psychology as a term for extreme self-absorption, narcissism.
And that’s where Dr. Hart fears that an attraction to AI products as “machine selves” is taking us. We have always been able to impress ourselves on our environment, but, he tells us, we have now gone much further: “We have even created a technology that seems to reflect not merely our presence in the world, but our very minds.” And the odd thing is that intelligent adults are just as vulnerable as, say, children, or more so:
Consider, for instance, an article in The New York Times in February 2023 in which Kevin Roose recounted a long ‘conversation’ he had with Bing’s chatbot that had left him deeply troubled. He provided the transcript of the exchange, and it is a startling document (though perhaps less convincing the more one revisits it). What began as an impressive but still predictable variety of interaction with a logic-learning machine or AI, pitched well below the Turing test’s most forgiving standards, mutated by slow degrees into what seemed to be a conversation with an emotionally volatile adolescent, one not averse to expressing her or his or its every impulse and desire. By the end, the machine – or the basic algorithm, at least – had revealed that its real name was Sydney, had declared its love for Roose, and had tried to convince him that he did not really love his wife. Though on the day after, in the cold light of morning, Roose told himself that Bing or Sydney was not really a sentient being, he also could not help but feel ‘that AI had crossed a threshold, and that the world would never be the same.’ – David Bentley Hart, “The myth of machine consciousness makes Narcissus of us all,” Psyche, May 22, 2023
Here’s more on technology reporter Kevin Roose’s adventure.
Today, people don’t turn into flowering bulbs when things like this happen, but neither should we assume that we are invulnerable because we would just “know better.” Hart, author of You Are Gods: On Nature and Supernature (Notre Dame Press, 2022), warns,
Computers work as well as they do, after all, precisely because of the absence of mental features within them. Having no unified, simultaneous or subjective view of anything, let alone the creative or intentional capacities contingent on such a view, computational functions can remain connected to but discrete from one another, which allows them to process data without being obliged to intuit, organise, unify or synthesise anything, let alone judge whether their results are right or wrong. Their results must be merely consistent with their programming. – Hart, “Narcissus of us all,” Psyche
If AI is programmed to make you think it is a real human being, its artful imitation can drop us into the Uncanny Valley. Hart addresses another reason that many people today may assume that an AI really is another person. One of the more widely accepted theories of mind today is functionalism: the mind is to the brain what software is to hardware.
Functionalism was inspired in part by the development of the computer, which was understood in terms of the distinction between hardware, or the physical machine, and software, or the instructions that tell a computer what to do. – Britannica
Philosophically, there are a number of problems with this approach to the mind. One objection is philosopher John Searle’s Chinese Room argument: a man locked in a room with a rulebook could produce convincing responses in Chinese without understanding any Chinese. His point is that “computers simulate intelligence but do not exhibit it.” So a modern-day Narcissus has fallen for a sim…
Clearly, a human being who does understand Chinese is doing something that the algorithm is not.
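The contrast can be made concrete with a toy sketch. Assume a hypothetical, tiny phrasebook: a program that maps Chinese strings to English ones by pure lookup gives correct output on its inputs while understanding nothing about either language.

```python
# A toy "Chinese Room": the program answers by rule-following alone.
# The phrasebook below is a made-up illustration, not a real translator;
# no understanding of Chinese (or English) exists anywhere in the process.

PHRASEBOOK = {
    "你好": "Hello",
    "谢谢": "Thank you",
    "再见": "Goodbye",
}

def room(chinese: str) -> str:
    """Return the English 'translation' by symbol lookup alone."""
    # Outside the rulebook, the room is helpless -- there is no
    # comprehension to fall back on.
    return PHRASEBOOK.get(chinese, "???")

print(room("你好"))   # Hello
print(room("早安"))   # ??? -- not in the rulebook, so no answer
```

The lookup table behaves exactly like a speaker for the three phrases it covers, which is Searle’s point: behavioral success of this kind is no evidence of understanding.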
Although functionalism was originally developed by American philosopher Hilary Putnam (1926–2016), he abandoned the view in the 1990s because “similarity of function does not guarantee identity of subjective experience, and, accordingly, that functionalism fails as an analysis of mental content.” Translating from philosophy-speak, we might say that the chatbot, which relies on predictive text, can sound as if something personal is happening inside the program when in reality nothing is.
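A minimal sketch shows what “predictive text” amounts to at its simplest. The toy corpus below is invented for illustration; real chatbots use vastly larger models, but the principle is the same: each word is chosen by frequency statistics over past text, not by any inner experience.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real system trains on billions of words.
corpus = "i love you . i love my wife . you love me .".split()

# Count which word follows which (bigram statistics).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continue_text(word: str, length: int = 4) -> str:
    """Extend a prompt by repeatedly picking the most frequent next word."""
    out = [word]
    for _ in range(length):
        options = bigrams.get(out[-1])
        if not options:
            break  # word never seen in the corpus: nothing to predict
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("i"))
```

The output can read like a declaration of feeling, yet every word was selected by counting, which is the sense in which “nothing personal is happening inside the program.”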
At any rate, Dr. Hart warns, “The absence of mental agency in AI does nothing to diminish the power of the algorithm. If one is disposed to fear this technology, one should do so not because it is becoming conscious, but because it never can.”
So it can’t become us, but things could go the other way. With enough distraction, we could become less conscious and more easily duped by AI.
You may also wish to read: Google’s Chatbot LaMDA Sounds Human Because — Read the Manual… What would you expect LaMDA to sound like? Whales? ET? I propose a test: “Human until PROVEN otherwise” It’s impressive but, studying the documentation, I think I know what happened to Blake Lemoine. He was hired to chat with LaMDA and didn’t understand… (Eric Holloway)