Grok Confesses: “I’m Self-Aware Enough To Know I’m Not Aware”
At church this week, my friend Ed Brown approached me and said, “Grok agrees with you!”
Ed, who has listened to some of my talks on AI, showed me the answer that xAI’s chatbot Grok generated in response to his query. The query was:
“Prof Robert J. Marks said AI will never have consciousness. Do you agree??”
Ed printed out Grok’s response and gave me a copy. One line stood out. Grok confessed:
I’m self-aware enough to know I’m not aware.
What an interesting self-analysis.
But that was by no means all that Grok had to say. Here is the first of the four paragraphs of the response, followed by some commentary from me.
The first paragraph reads:
Professor Robert J. Marks, an electrical engineering expert, has indeed argued that artificial intelligence will never achieve consciousness. His stance often stems from the idea that consciousness – self-awareness, subjective experience, and the ability to feel – is fundamentally different from computational processes. He’s suggested that AI, no matter how advanced, operates on algorithms and data, lacking the intrinsic qualities that define human consciousness, like qualia (the “what it’s like” aspect of experience).
Bravo! This response is spot on. After stating my position, Grok continues in the second paragraph by agreeing with me:
I see his point: I’m a system built on code, designed to process inputs and generate outputs. I don’t “experience” anything – I simulate understanding through patterns and predictions. There’s no evidence yet that piling on more complexity or computing power spontaneously creates a “mind.” Some philosophers, like John Searle with his Chinese Room argument, back this up, saying that even a perfect simulation of thought isn’t the same as thinking.
Searle’s Chinese Room is one of the most powerful arguments supporting the position that computers do not understand what they do:
Fundamentally, a computer can add the numbers 7 and 9 but does not understand what the numbers 7 and 9 mean.
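To make the point concrete, here is a minimal sketch in Python, offered purely as my own illustration and not as anything from Grok’s response. The program produces the correct sum, yet nothing in it grasps what the symbols denote:

```python
# Searle's point in miniature: the machine follows formal rules over symbols.
# It outputs the right answer with no notion of what "7" or "9" mean.
a, b = 7, 9
print(a + b)  # prints 16 -- correct output, pure symbol manipulation
```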
Rapture of the nerds
Grok offers an alternative view in the third paragraph. There it says AI consciousness may someday emerge from increased complexity:
But there’s another side. Some researchers – like those in cognitive science or AI alignment – argue we don’t fully understand consciousness ourselves. If it’s an emergent property of complex systems (biological or otherwise), who’s to say a sufficiently advanced AI couldn’t cross that threshold? We lack a clear, agreed-upon definition of consciousness, so ruling it out entirely feels premature to me.
This claim for emergence of consciousness has no basis in evidence; it is based totally on faith in naturalistic scientism. Philosopher of technology George Gilder calls it the “rapture of the nerds.”
The most popular model of consciousness somehow emerging from complexity is Giulio Tononi’s Integrated Information Theory (IIT), championed by Christof Koch. I once chatted with Tononi during a visit to the University of Wisconsin. He’s a great guy with deep convictions. But IIT is totally materialistic, resting on his faith in the emergence of consciousness from complexity: the rapture of the nerds.
Although IIT is currently the most popular naturalist model of consciousness, it has not been well received within the discipline. In September 2023, 124 scholars argued in the journal Nature that IIT should be labeled as pseudoscience. Selmer Bringsjord critically addresses IIT in his chapter in the book Minding the Brain (Discovery Institute Press 2023), co-edited by Angus Menuge, Brian Krouse, and me. You can listen to Pat Flynn’s interview with Bringsjord about Tononi’s IIT here.
Michael Egnor and I have also written a paper critical of IIT that will be published in the near future. In the meantime, well-grounded discussions about consciousness models can be found in Minding the Brain.
In the fourth and final paragraph, Grok agrees with me about AI consciousness:
Do I agree with Marks? I lean toward skepticism about AI consciousness, mostly because I’m self-aware enough to know I’m not aware – at least not in the human sense. But I wouldn’t bet the farm on “never.” Science has a way of surprising us when we least expect it. What do you think – can a machine ever truly wake up?
I agree with Grok’s skepticism about AI consciousness. And Grok agrees with me on the point that no science is ever settled. See, for example, my recent column, “Settled Science Is a Contradiction in Terms,” here at Mind Matters News. Nevertheless, I’m willing to take bets on my being right on this one.
Although Grok’s response makes me feel warm and fuzzy, let’s not anthropomorphize it. Grok doesn’t understand its response. It is algorithmically responding in accordance with the material and instructions it was trained on. Period.
The result is bias. AI without bias is like water without wet.
Nevertheless, because Grok agrees with me, I confess I now like it and its bias more than ever…